Resource type
Date created
2009
Authors/Contributors
Abstract
This paper addresses the problem of creating facial expressions of mixed emotions in a perceptually valid way. The research has been done in the context of “game-like” health and education applications aimed at studying social competency and facial expression awareness in autistic children, as well as native language learning, but the results can be applied to many other applications, such as games that need dynamic facial expressions or tools for automating the creation of facial animations. Most existing methods for creating facial expressions of mixed emotions use operations such as averaging to combine the effects of two universal emotions. Such methods may be mathematically justifiable but are not necessarily valid from a perceptual point of view. The research reported here starts with user experiments aimed at understanding how people combine facial actions to express mixed emotions, and how viewers perceive a set of facial actions in terms of underlying emotions. Using the results of these experiments and a three-dimensional emotion model, we associate facial actions with dimensions and regions in the emotion space, and create a facial expression based on the location of the mixed emotion in the three-dimensional space. We call these regionalized facial actions “facial expression units.”
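The mapping the abstract describes, from the location of a mixed emotion in a three-dimensional emotion space to a weighted set of regionalized facial actions, could be sketched roughly as follows. This is an illustrative guess, not the paper's implementation: the dimension names (valence, arousal, dominance), the region centres and radii, and the `ExpressionUnit` fields are all assumptions made for the example.

```python
from dataclasses import dataclass

# Hypothetical sketch: map a mixed emotion, given as a point in a 3D emotion
# space, to weighted "facial expression units" (regionalized facial actions).
# Dimension names, regions, and weights are illustrative assumptions, not the
# paper's actual data or method.

@dataclass
class ExpressionUnit:
    name: str      # e.g. "lip_corner_pull" (a facial action)
    centre: tuple  # point in (valence, arousal, dominance) space
    radius: float  # extent of the emotion-space region this unit covers

UNITS = [
    ExpressionUnit("lip_corner_pull", ( 0.8, 0.3,  0.2), 0.6),  # smile-like
    ExpressionUnit("brow_lower",      (-0.6, 0.5,  0.4), 0.6),  # anger-like
    ExpressionUnit("brow_raise",      ( 0.0, 0.8, -0.3), 0.6),  # surprise-like
]

def expression_for(valence, arousal, dominance):
    """Weight each unit by how close the mixed emotion lies to its region."""
    point = (valence, arousal, dominance)
    weights = {}
    for unit in UNITS:
        dist = sum((p - c) ** 2 for p, c in zip(point, unit.centre)) ** 0.5
        if dist < unit.radius:
            # Closer to the region centre -> stronger activation of that unit.
            weights[unit.name] = 1.0 - dist / unit.radius
    return weights

# Example: an emotion between happiness and surprise activates both the
# smile-like and brow-raise units with partial intensities.
print(expression_for(valence=0.5, arousal=0.6, dominance=0.0))
```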
Document
Published as
International Journal of Computer Games Technology, Volume 2009 (2009), Article ID 462315, 13 pages. http://dx.doi.org/10.1155/2009/462315
Publication details
Publication title
International Journal of Computer Games Technology
Document title
Perceptually Valid Facial Expressions for Character-Based Applications
Date
2009
Volume
2009
Publisher DOI
10.1155/2009/462315
Copyright statement
Copyright is held by the author(s).
Scholarly level
Peer reviewed?
Yes
Language
English
Member of collection
| Download file | Size |
|---|---|
| 462315.pdf | 6.5 MB |