As robots become more involved in our everyday lives, we may find it useful to design new methods of interacting with them. In this thesis, we design two applications to facilitate human-robot interaction.

The first application is an autonomous mobile robot that follows a walking user while staying ahead of them. Despite several useful applications for autonomous push-carts, this problem has received much less attention than the easier problem of following from behind. In contrast to previous work, we use multi-modal person detection and a human-motion model that accounts for obstacles to predict the user's future path. We implement the system with a modular architecture comprising an obstacle mapper, a human tracker, a human-motion model, a robot motion planner, and a robot motion controller, and we report on the robot's performance in real-world experiments. We believe that approaches to this largely overlooked problem could be useful in industrial, domestic, and entertainment applications in the near future.

The second application is a novel system for describing the navigational cues ahead of a robot. We train a Convolutional Neural Network (CNN) on 2D LiDAR data and occupancy grid maps to detect navigational objects as the robot moves through an indoor environment. These navigational objects include closed rooms (rooms with closed doors), open rooms (rooms with open doors), and intersections. On top of this network, our system uses a tracking module that improves detection accuracy by clustering and recording each of the model's detections; this module also enables the system to describe the robot's navigational cues. We evaluate the system in both simulation and the real world, comparing the combination of 2D LiDAR data and occupancy grid maps against each modality used alone.
Copyright is held by the author.
This thesis may be printed or downloaded for non-commercial research and scholarly purposes.
Thesis advisor: Vaughan, Richard