Our research focuses on the physical exchange phase of the robot-to-human object handover task. We present a novel release controller for this phase based on torque and vision sensing. The system is implemented on a 7-DOF Kinova manipulator with joint torque sensors, equipped with an eye-in-hand RGB-D camera and a 3-finger Schunk Dextrous Hand, and performs fully autonomous, robust object handover to a human receiver in real time. Our control algorithm relies on two complementary sensor modalities: the arm's joint torque sensors and the eye-in-hand RGB-D camera. The approach is entirely implicit, i.e., there is no explicit communication between the robot and the human receiver. Measurements from each modality are fed to a dedicated deep neural network: the torque network classifies the receiver's action as a pull, hold, or bump, while the vision network detects whether the receiver's fingers have wrapped around the object. The two networks' outputs are then fused to decide whether to release the object. We compare our release controller against an existing handover controller that relies solely on a wrist-mounted force sensor. Our approach overcomes substantive challenges in sensor-feedback synchronization and in object and human-hand detection, achieving robust robot-to-human handover with 98% accuracy in real experiments with human receivers.
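The fusion of the two network outputs into a release decision could be sketched as follows. This is a minimal illustrative sketch, not the thesis's actual implementation: the function name, the probability inputs, and the threshold are all hypothetical, and the real controller may fuse the signals differently.

```python
# Hypothetical sketch of the release decision: the torque network's action
# classification and the vision network's grasp detection are fused, and the
# object is released only when both modalities agree the receiver is ready.
# All names and thresholds are illustrative assumptions.

def fuse_and_decide(action_probs, grasp_prob, grasp_threshold=0.5):
    """action_probs: dict of probabilities for 'pull', 'hold', 'bump'
    (from the torque network). grasp_prob: probability that the receiver's
    fingers are wrapped around the object (from the vision network).
    Returns True to release the object, False to keep holding it."""
    action = max(action_probs, key=action_probs.get)  # most likely action
    receiver_grasping = grasp_prob >= grasp_threshold
    # Release only on a detected pull while the hand visibly grasps the object;
    # a hold or an accidental bump does not trigger release.
    return action == "pull" and receiver_grasping
```

For example, a confident pull detection (`{"pull": 0.8, "hold": 0.15, "bump": 0.05}`) combined with a high grasp probability (0.9) would yield a release, while a bump with no visible grasp would not.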
Copyright is held by the author(s).
This thesis may be printed or downloaded for non-commercial research and scholarly purposes.