
Multimodal Interfaces for Human-Robot Interaction

Resource type
Thesis type
(Thesis) Ph.D.
Date created
2016-12-23
Authors/Contributors
Abstract
Robots are becoming more popular in domestic human environments, from service applications to entertainment and education, where they share a workspace and interact directly with the general public in everyday life. One long-term goal of human-robot interaction (HRI) research is to have robots work with and around people, taking instructions via simple, intuitive interfaces. For a successful, natural interaction, robots are expected to be observant of the humans present, recognize what they are doing, and respond appropriately to their attention-drawing behaviors such as gaze, body posture, or gestures. We call the system by which a robot can take notice of someone or something and consider it interesting or relevant an attention system. Such systems enable a robot to shift its focus of attention to the part of the available information that is relevant and meaningful in a given situation, based on the robot's motivational and behavioral state. This awareness comes from interpreting the information exchanged between humans and robots, and exchanging that information through a combination of different modalities is anticipated to be of most benefit. Multimodal interfaces can exploit the strengths of each component modality and overcome their individual weaknesses. It has also been argued [1] that multimodal interfaces facilitate more natural communication: with an integrated system, users are less concerned about how to communicate the intended commands or which modality to use, and are therefore free to focus on the task and goals at hand.

This PhD thesis presents our contributions to designing and implementing multimodal, sensor-mediated attention systems that enable users to interact directly with physically co-located robots using natural and intuitive communication methods. We focus on scenarios in which there are multiple people or multiple robots in the environment. First, we introduce two multimodal human multi-robot interaction systems for selecting and commanding an individual robot or a group of robots from a population; in this context, we study how the spatial configuration of the user and robots affects the efficiency of these interfaces in real-world settings. Next, we present a probabilistic approach for identifying attention-drawing signals from an interested party and directing a mobile robot's attention toward the most promising interaction partner among a group of people. Finally, we report on a user study designed to assess the performance and usability of this proposed system for finding HRI partners in a crowd when used by non-robotics experts, and compare it to manual control.
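As an illustrative aside only, and not taken from the thesis itself: the kind of probabilistic partner selection described above can be sketched as a naive Bayes-style fusion of per-person multimodal cues. The cue names, prior, and likelihood values below are hypothetical placeholders; the thesis's actual models, sensors, and fusion method will differ.

# Minimal sketch (assumed cues and numbers): fuse binary multimodal
# observations into a per-person "wants to interact" posterior and
# attend to the highest-scoring person in the crowd.

CUE_LIKELIHOODS = {
    # cue name: (P(cue | interested), P(cue | not interested)) -- hypothetical values
    "facing_robot": (0.9, 0.4),
    "waving":       (0.7, 0.05),
    "speaking":     (0.6, 0.2),
}

PRIOR_INTERESTED = 0.2  # hypothetical prior that a given person wants to interact


def interest_posterior(observed_cues):
    """Posterior probability of interaction interest, given cue_name -> bool observations."""
    p_int, p_not = PRIOR_INTERESTED, 1.0 - PRIOR_INTERESTED
    for cue, (p_obs_int, p_obs_not) in CUE_LIKELIHOODS.items():
        if observed_cues.get(cue, False):
            p_int *= p_obs_int
            p_not *= p_obs_not
        else:
            p_int *= 1.0 - p_obs_int
            p_not *= 1.0 - p_obs_not
    return p_int / (p_int + p_not)


def choose_partner(people):
    """Pick the person with the highest posterior as the robot's focus of attention."""
    return max(people, key=lambda pid: interest_posterior(people[pid]))


if __name__ == "__main__":
    crowd = {
        "person_A": {"facing_robot": True, "waving": False, "speaking": False},
        "person_B": {"facing_robot": True, "waving": True, "speaking": True},
        "person_C": {"facing_robot": False, "waving": False, "speaking": True},
    }
    for pid, cues in crowd.items():
        print(pid, round(interest_posterior(cues), 3))
    print("attend to:", choose_partner(crowd))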
Document
Identifier
etd9964
Copyright statement
Copyright is held by the author.
Permissions
This thesis may be printed or downloaded for non-commercial research and scholarly purposes.
Scholarly level
Supervisor or Senior Supervisor
Thesis advisor: Vaughan, Richard
Thesis advisor: Mori, Greg
Member of collection
Download file
etd9964_SPourmehr.pdf (21.62 MB)
