
Emotional remapping of music to facial animation

Date created
2006
Authors/Contributors
Author: Arya, Ali
Abstract
We propose a method to extract emotional data from a piece of music and then use that data, via a remapping algorithm, to automatically animate an emotional 3D face sequence. The method is based on studies of the emotional aspects of music and on our parametric behavioral head model for face animation. We address the issue of affective communication remapping in general, i.e. the translation of affective content (e.g. emotions and mood) from one communication form to another. We report on the results of our MusicFace system, which uses these techniques to automatically create emotional facial animations from multi-instrument polyphonic music scores in MIDI format and a remapping rule set.

© ACM, 2006. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in Proceedings of the 2006 ACM SIGGRAPH Symposium on Videogames, 143-149. Boston, Massachusetts: ACM. doi:10.1145/1183316.1183337
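To illustrate the remapping idea the abstract describes, here is a minimal sketch of the pipeline shape: musical features are mapped to an emotion estimate, which is then remapped to facial-animation parameters. All feature names, mapping rules, and parameter names below are illustrative assumptions, not the paper's actual rule set or the MusicFace implementation.

```python
# Hypothetical sketch of an affective remapping pipeline:
# musical features -> emotion estimate -> facial parameters.
# Rules and names are assumptions for illustration only.

def estimate_emotion(tempo_bpm, mode):
    """Map coarse musical features to a (valence, arousal) pair."""
    valence = 0.5 if mode == "major" else -0.5          # major ~ positive affect
    arousal = min(1.0, max(0.0, (tempo_bpm - 60) / 120))  # faster ~ more excited
    return valence, arousal

def remap_to_face(valence, arousal):
    """Translate an emotion estimate into parametric facial-animation values."""
    return {
        "smile": max(0.0, valence) * arousal,  # positive affect, scaled by energy
        "brow_lower": max(0.0, -valence),      # negative affect lowers the brows
        "eye_open": 0.5 + 0.5 * arousal,       # higher arousal widens the eyes
    }

# Example: an up-tempo piece in a major key yields a smiling, alert face.
face = remap_to_face(*estimate_emotion(tempo_bpm=140, mode="major"))
```

In MusicFace the features come from analyzing MIDI scores and the remapping is driven by a configurable rule set; this sketch only mirrors the two-stage structure of that translation.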
Document
Copyright statement
Copyright is held by the author(s).
Language
English
Download: dipaola-emotionalremappingrev.pdf (448.45 KB)
