
Selecting and commanding individual robots in a multi-robot system

Resource type: Thesis
Thesis type: M.Sc.
Date created: 2010
Authors/Contributors
Abstract
In this thesis, we present a novel real-time computer-vision-based system for facilitating interactions between a single human and a multi-robot system: a user first selects an individual robot from a group of robots by simply looking at it, and then commands the selected robot with a motion-based gesture. We describe a novel multi-robot system that demonstrates the feasibility of using face contact and motion-based gestures as two non-verbal communication channels for human-robot interaction. Each robot first performs face detection using a well-known face detector. The resulting "score" of the detected face is used in a distributed leader-election algorithm to estimate which robot the user is looking at. The selected robot then derives a set of motion features, based on blurred optical flow extracted from a user-centric image region. These motion cues are then used to discriminate between gestures (robot commands) using an efficient learned classifier.
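
The abstract describes a two-stage pipeline: each robot scores how directly the user is facing it, a distributed leader election picks the robot with the best face score, and the elected robot classifies a motion gesture from blurred optical flow. The Python/OpenCV sketch below is only an illustrative reconstruction of that flow, not the thesis's implementation: the Haar-cascade detection weight standing in for the face "score", the max-score election with ties broken by robot ID, the grid-pooled flow features, and the nearest-centroid classifier are all assumed details.

import numpy as np
import cv2

# Stage 1: each robot scores how directly the user is facing it.
_FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_score(gray_frame):
    """Return a scalar confidence that a frontal face is visible.

    Uses the Haar cascade's level weights as a stand-in for the
    detection "score" mentioned in the abstract; 0.0 if no face is found.
    """
    faces, _, weights = _FACE_CASCADE.detectMultiScale3(
        gray_frame, scaleFactor=1.1, minNeighbors=5, outputRejectLevels=True)
    return float(np.max(weights)) if len(faces) else 0.0

def elect_leader(scores):
    """Toy election over the scores collected from all robots
    (in the real system these would be exchanged over the network):
    the highest score wins, ties broken by the lower robot id."""
    return max(scores, key=lambda rid: (scores[rid], -rid))

# Stage 2: the elected robot turns inter-frame motion into a feature vector.
def motion_features(prev_gray, cur_gray, roi, grid=4):
    """Blurred dense optical flow inside a user-centric region roi
    (x, y, w, h), pooled over a grid x grid layout into one vector."""
    x, y, w, h = roi
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray[y:y+h, x:x+w], cur_gray[y:y+h, x:x+w],
        None, 0.5, 3, 15, 3, 5, 1.2, 0)
    flow = cv2.GaussianBlur(flow, (9, 9), 0)  # "blurred" optical flow
    cells = []
    for row in np.array_split(flow, grid, axis=0):
        for cell in np.array_split(row, grid, axis=1):
            cells.append(cell.reshape(-1, 2).mean(axis=0))  # mean (dx, dy)
    return np.concatenate(cells)

def classify_gesture(features, templates):
    """Nearest-centroid stand-in for the learned gesture classifier:
    templates maps gesture name -> mean feature vector."""
    return min(templates, key=lambda g: np.linalg.norm(features - templates[g]))

# Minimal usage with synthetic frames; a real robot would use its own camera.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = [rng.integers(0, 255, (240, 320), dtype=np.uint8) for _ in range(2)]
    scores = {rid: face_score(frames[0]) for rid in (1, 2, 3)}
    leader = elect_leader(scores)
    feats = motion_features(frames[0], frames[1], roi=(80, 60, 160, 120))
    templates = {"go_left": np.zeros_like(feats), "go_right": np.ones_like(feats)}
    print("selected robot:", leader, "gesture:", classify_gesture(feats, templates))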
Copyright statement
Copyright is held by the author.
Permissions
The author has not granted permission for the file to be printed or for the text to be copied and pasted. If you would like a printable copy of this thesis, please contact summit-permissions@sfu.ca.
Scholarly level
Language: English
Member of collection
Download file: etd5891.pdf (5.86 MB)
