In this thesis, we present a novel real-time computer vision-based system for facilitating interaction between a single human and a multi-robot system: a user first selects an individual robot from a group of robots by simply looking at it, and then commands the selected robot with a motion-based gesture. We describe a multi-robot system that demonstrates the feasibility of using face contact and motion-based gestures as two non-verbal communication channels for human-robot interaction. Each robot first performs face detection using a well-known face detector. The resulting "score" of the detected face is then used in a distributed leader election algorithm to estimate which robot the user is looking at. The selected robot then derives a set of motion features, based on blurred optical flow, extracted from a user-centric region. These motion cues are used to discriminate between gestures (robot commands) with an efficient learned classifier.
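
The pipeline summarized above (per-robot face detection, score-based leader election, blurred optical flow features from a user-centric region, and a learned gesture classifier) can be illustrated with a minimal, hypothetical sketch. The code below uses OpenCV's Haar cascade detector, Farneback optical flow, and a scikit-learn SVM as stand-ins; the function names, parameters, and the simple max-score election rule are illustrative assumptions, not the implementation described in the thesis.

# Hypothetical sketch of the selection-and-command pipeline, assuming
# OpenCV and scikit-learn as stand-ins for the thesis' actual components.
import cv2
import numpy as np
from sklearn.svm import SVC

_FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_score(gray_frame):
    """Confidence 'score' of the best detected face in this robot's view,
    or 0.0 if no face is found (a proxy for 'the user is looking at me')."""
    rects, _, weights = _FACE_CASCADE.detectMultiScale3(
        gray_frame, scaleFactor=1.1, minNeighbors=5, outputRejectLevels=True)
    return float(np.max(weights)) if len(rects) else 0.0

def elect_leader(scores_by_robot):
    """Toy stand-in for the distributed leader election: the robot whose
    detector reports the highest face score is selected."""
    return max(scores_by_robot, key=scores_by_robot.get)

def motion_features(prev_gray, gray, face_rect, grid=4):
    """Blurred dense optical flow, pooled over a grid inside a user-centric
    region around the detected face and flattened into a feature vector."""
    x, y, w, h = face_rect
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    flow = cv2.GaussianBlur(flow, (0, 0), sigmaX=5)   # blur the flow field
    region = flow[max(0, y - h):y + 2 * h, max(0, x - w):x + 2 * w]
    cells = cv2.resize(region, (grid, grid), interpolation=cv2.INTER_AREA)
    return cells.reshape(-1)                          # 2 * grid * grid values

def train_gesture_classifier(feature_vectors, labels):
    """Any efficient learned classifier would do; a linear SVM is one choice."""
    return SVC(kernel="linear").fit(feature_vectors, labels)

In use, each robot would compute face_score on its own camera frames, the group would agree on a leader via the election step, and only the elected robot would run motion_features on subsequent frames and pass the result to the trained classifier to recover the commanded gesture.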