
A Multi-Modal Person Recognition System for Social Robots

Resource type
Article
Date created
2018-03-06
Authors/Contributors
Al-Qaderi, M.; Rad, A.
Abstract
This paper presents a solution to the problem of person recognition by social robots via a novel brain-inspired multi-modal perceptual system. The system employs a spiking neural network to integrate face, body, and voice features to recognize a person in various social human-robot interaction scenarios. We argue that most reported multi-biometric person recognition algorithms require active participation by the subject and are therefore not appropriate for social human-robot interaction; the proposed algorithm relaxes this constraint. As no public multimodal dataset exists for this task, we constructed a hybrid dataset by combining the FERET, RGB-D, and TIDIGITS datasets, which target face recognition, person recognition, and speaker recognition, respectively. The combined dataset associates facial features, body shape, and speech signature for multimodal person recognition in social settings, and is used to test the algorithm. We assess the algorithm's performance and discuss its merits against related methods. Within the context of social robotics, the results suggest that the proposed method outperforms other reported person recognition algorithms.
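The abstract describes integrating face, body, and voice cues to identify a person even when some cues are unavailable. Below is a minimal, hypothetical Python sketch of that general idea via score-level fusion. It is not the paper's spiking-neural-network classifier: the gallery data, the cosine-similarity matcher, and the weighted-sum fusion rule with its weights are all illustrative assumptions. The sketch does show one property the abstract emphasizes: recognition proceeds without the subject's active participation, so a missing modality (e.g., no speech) is simply skipped during fusion.

```python
# Illustrative sketch only: NOT the paper's spiking-neural-network pipeline.
# Shows (1) per-modality matching against a gallery and (2) score-level
# fusion that tolerates missing modalities. All data and weights are fake.
import numpy as np

rng = np.random.default_rng(0)
NUM_SUBJECTS = 10

# Stand-ins for unimodal galleries: one template vector per subject per
# modality (face ~ FERET, body ~ RGB-D, voice ~ TIDIGITS in the paper).
gallery = {
    "face": rng.normal(size=(NUM_SUBJECTS, 64)),
    "body": rng.normal(size=(NUM_SUBJECTS, 32)),
    "voice": rng.normal(size=(NUM_SUBJECTS, 40)),
}

def modality_scores(probe, templates):
    """Cosine similarity between one probe vector and all gallery templates."""
    t = templates / np.linalg.norm(templates, axis=1, keepdims=True)
    p = probe / np.linalg.norm(probe)
    return t @ p

def fuse_and_identify(probes, weights):
    """Weighted-sum fusion of per-modality similarity scores (an assumed
    fusion rule, not the paper's). Modalities absent from `probes` (e.g.,
    the subject is silent or turned away) are skipped, mirroring the
    passive, no-active-participation setting the paper targets."""
    total = np.zeros(NUM_SUBJECTS)
    for modality, probe in probes.items():
        total += weights[modality] * modality_scores(probe, gallery[modality])
    return int(np.argmax(total)), total

# Probe: subject 3 observed by face and body only (no speech available).
true_id = 3
probes = {
    "face": gallery["face"][true_id] + 0.1 * rng.normal(size=64),
    "body": gallery["body"][true_id] + 0.1 * rng.normal(size=32),
}
weights = {"face": 0.5, "body": 0.3, "voice": 0.2}  # assumed weights

pred, scores = fuse_and_identify(probes, weights)
print(f"predicted subject: {pred} (true: {true_id})")
```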
Document
Published as
Al-Qaderi, M., & Rad, A. (2018). A Multi-Modal Person Recognition System for Social Robots. Applied Sciences, 8(3), 387. DOI: 10.3390/app8030387.
Publication title
Applied Sciences
Document title
A Multi-Modal Person Recognition System for Social Robots
Date
2018
Volume
8
Issue
3
Publisher DOI
10.3390/app8030387
Copyright statement
Copyright is held by the author(s).
Scholarly level
Peer reviewed?
Yes
Language
English
Member of collection
Download file
applsci-08-00387.pdf (4.84 MB)

Views & downloads - as of June 2023

Views: 0
Downloads: 0