SIAT Faculty Publications

Face modeling and animation language for MPEG-4 XMT framework

Date created: 
2007-10
Abstract: 

This paper proposes FML, an XML-based face modeling and animation language. FML provides a structured content description method for multimedia presentations based on face animation. The language can be used as direct input to compatible players, or be compiled within the MPEG-4 XMT framework to create MPEG-4 presentations. The language allows parallel and sequential action description, decision-making and dynamic event-based scenarios, model configuration, and behavioral template definition. Facial actions include talking, expressions, head movements, and low-level MPEG-4 FAPs. The ShowFace and iFACE animation frameworks are also reviewed as example FML-based animation systems.
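To illustrate the parallel and sequential action description the abstract mentions, here is a minimal sketch of how an FML-like XML document might be scheduled. The tag names (`seq`, `par`, `expr`, `talk`, `hdmv`) and attributes are illustrative assumptions, not the published FML schema.

```python
# Sketch: flattening a hypothetical FML-like document into timed events.
# <seq> runs children one after another; <par> starts them together.
import xml.etree.ElementTree as ET

FML_DOC = """
<fml>
  <seq>
    <expr type="smile" value="60" duration="500"/>
    <par>
      <talk>Hello there</talk>
      <hdmv dir="left" value="20" duration="400"/>
    </par>
  </seq>
</fml>
"""

def schedule(node, start=0):
    """Return (events, end_time) for a seq/par/leaf node (times in ms)."""
    if node.tag == "fml":
        return schedule(node[0], start)
    if node.tag == "seq":
        events, t = [], start
        for child in node:
            ev, t = schedule(child, t)   # each child starts when the last ended
            events.extend(ev)
        return events, t
    if node.tag == "par":
        events, end = [], start
        for child in node:
            ev, t = schedule(child, start)  # all children share the start time
            events.extend(ev)
            end = max(end, t)
        return events, end
    # leaf action: read its duration attribute (default 500 ms)
    dur = int(node.get("duration", "500"))
    return [(start, node.tag, dict(node.attrib), node.text)], start + dur

events, total = schedule(ET.fromstring(FML_DOC))
for t, tag, attrs, text in events:
    print(t, tag)
```

The same recursive walk would let a compiler emit MPEG-4 XMT timing constructs instead of a flat event list.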

Document type: 
Article

Face, portrait, mask: The virtuality of the synthetic face

Date created: 
2004
Abstract: 

With new technological artistic tools that allow us to author 3D computer-generated faces that can be as real, as abstract, or as iconified as we choose, what aesthetic and cultural communication language do we elicit? Is it the historically rich language of the fine art portrait, the abstracted artifact of the human face? What happens when this portrait animates, conveying lifelike human facial emotion: does it cease to be a portrait and instead move into the realm of the embodied face when it begins to emote, feel, and possibly react to the viewer? Is it then closer to the language of the animated character, or, as we make it photo-realistic, to the language of the video actor with a deep dramatic back-story, or simply as real as a person on the other side of the screen? A viewer cannot be rude to a portrait but can feel that they are being rude to an interactive character in an art installation. When does it become neither an embodied face nor a portrait but a mask: the icon that speaks of face but is never embodied? Masks also have a deep cultural, historic, and ethnic language far different from that of human faces or art portraits, one more Eastern than the Western portrait. Iconized faces (e.g., the smiley face or the emoticon) take the mask fully through to the Western modern world of technology.

Document type: 
Other

FaceSpace: A facial spatial domain toolkit

Date created: 
2002
Abstract: 

We describe a visual development system for exploring face space, both in terms of facial types and animated expressions. Imagine an n-dimensional space describing every humanoid face, where each dimension represents a different facial characteristic. Within this continuous space, it would be possible to traverse a path from any face to any other face, morphing through faces along that path. It is also possible to combine elements of this space to create an expressive, emotive, talking 3D synthetic face of any given facial type. This development toolkit, called FaceSpace, is based on a hierarchical parametric approach to facial animation and creation. We present our early results on exploring a face space and describe our preliminary investigation of the perceptual and cultural relationships between different facial types, as well as the creation of an additive language of hierarchical expressions, emotions, and lip-sync sequences using combined elements of a facial domain.
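The traversal the abstract imagines, morphing from any face to any other through a continuous parameter space, reduces to interpolation over parameter vectors. A minimal sketch follows; the parameter names (`jaw_width`, `brow_height`, `nose_length`) are invented for illustration and are not FaceSpace's actual parameter set.

```python
# Sketch: a face is a dict of normalized parameters in [0, 1];
# walking a straight line between two faces yields a morph sequence.
def lerp_face(a, b, t):
    """Blend two faces: t=0 gives face a, t=1 gives face b."""
    return {k: (1 - t) * a[k] + t * b[k] for k in a}

face_a = {"jaw_width": 0.2, "brow_height": 0.8, "nose_length": 0.5}
face_b = {"jaw_width": 0.9, "brow_height": 0.3, "nose_length": 0.5}

# Five morph steps along the path from face_a to face_b.
path = [lerp_face(face_a, face_b, i / 4) for i in range(5)]
```

A hierarchical parametric system would group such parameters (jaw, brow, mouth regions) so that higher-level controls blend whole sub-vectors at once.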

Document type: 
Conference presentation

Facial actions as visual cues for personality

Date created: 
2006-07
Abstract: 

What visual cues do human viewers use to assign personality characteristics to animated characters? While most facial animation systems associate facial actions to limited emotional states or speech content, the present paper explores the above question by relating the perception of personality to a wide variety of facial actions (e.g., head tilting/turning, and eyebrow raising) and emotional expressions (e.g., smiles and frowns). Animated characters exhibiting these actions and expressions were presented to human viewers in brief videos. Human viewers rated the personalities of these characters using a well-standardized adjective rating system borrowed from the psychological literature. These personality descriptors are organized in a multidimensional space that is based on the orthogonal dimensions of Desire for Affiliation and Displays of Social Dominance. The main result of the personality rating data was that human viewers associated individual facial actions and emotional expressions with specific personality characteristics very reliably. In particular, dynamic facial actions such as head tilting and gaze aversion tended to spread ratings along the Dominance dimension, whereas facial expressions of contempt and smiling tended to spread ratings along the Affiliation dimension. Furthermore, increasing the frequency and intensity of the head actions increased the perceived Social Dominance of the characters. We interpret these results as pointing to a reliable link between animated facial actions/expressions and the personality attributions they evoke in human viewers. The paper shows how these findings are used in our facial animation system to create perceptually valid personality profiles based on Dominance and Affiliation as two parameters that control the facial actions of autonomous animated characters.
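The closing idea, Dominance and Affiliation as two parameters controlling a character's facial actions, can be sketched as a linear mapping from the personality profile to per-action rates. The weight table below is an illustrative assumption, not the paper's fitted values; it only encodes the reported directions (gaze aversion and head tilt lowering perceived Dominance, smiling raising Affiliation, contempt lowering it).

```python
# Sketch: map a (dominance, affiliation) profile in [-1, 1] x [-1, 1]
# to how often each facial action should fire, clamped to [0, 1].
ACTION_WEIGHTS = {
    # action: (dominance_weight, affiliation_weight) -- assumed values
    "head_tilt":     (-0.8, 0.1),
    "gaze_aversion": (-0.9, 0.0),
    "smile":         (0.0, 0.9),
    "contempt":      (0.2, -0.8),
}

def action_rates(dominance, affiliation, base=0.5):
    """Per-action rates: base rate shifted by the weighted profile."""
    rates = {}
    for action, (wd, wa) in ACTION_WEIGHTS.items():
        r = base + 0.5 * (wd * dominance + wa * affiliation)
        rates[action] = min(1.0, max(0.0, r))
    return rates

# A submissive, friendly profile tilts heavily toward tilts and smiles.
submissive_friendly = action_rates(dominance=-1.0, affiliation=1.0)
```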

Document type: 
Article

Painterly rendered portraits from photographs using a knowledge-based approach

Date created: 
2007-01
Abstract: 

Portrait artists using oils, acrylics, or pastels use a specific but open human-vision methodology to create a painterly portrait of a live sitter. When they must use a photograph as source, artists augment their process, since photographs have different focusing (everything is in focus or focused in vertical planes), value clumping (the camera darkens the shadows and lightens the bright areas), as well as color and perspective distortion. In general, artistic methodology attempts the following: from the photograph, the painting must 'simplify, compose and leave out what's irrelevant, emphasizing what's important'. While this is seemingly a qualitative goal, artists use known techniques such as relying on source tone over color to index into a semantic color temperature model, using brush and tonal "sharpness" to create a center of interest, and using lost and found edges to move the viewer's gaze through the image toward the center of interest, as well as other techniques to filter and emphasize. Our work attempts to create a knowledge domain of the portrait painter's process and incorporate this knowledge into a multi-space parameterized system that can create an array of NPR painterly rendering output by analyzing the photographic-based input, which informs the semantic knowledge rules.
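One of the rules the abstract names, letting source tone rather than source color select a palette temperature, can be sketched as a simple lookup. The thresholds and temperature labels below are assumptions for illustration, not the system's actual knowledge rules.

```python
# Sketch: a semantic color-temperature rule keyed on tone alone.
# Shadows are pushed cool and lit areas warm, regardless of the
# photograph's own (often distorted) color.
def tone_to_temperature(luminance):
    """Map normalized luminance [0, 1] to a palette temperature label."""
    if luminance < 0.3:
        return "cool-dark"    # shadow mass: cool palette
    if luminance > 0.7:
        return "warm-light"   # lit planes: warm palette
    return "neutral-mid"      # mid-tones: neutral transition

samples = [tone_to_temperature(v) for v in (0.1, 0.5, 0.9)]
```

A full system would apply such rules per region after segmenting the photograph, with the center-of-interest rules then modulating brush sharpness.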

Document type: 
Article

Perceptually Valid Facial Expressions for Character-Based Applications

Date created: 
2009-01-14
Abstract: 

This paper addresses the problem of creating facial expressions of mixed emotions in a perceptually valid way. The research has been done in the context of "game-like" health and education applications aimed at studying social competency and facial expression awareness in autistic children as well as native language learning, but the results can be applied to many other applications, such as games that need dynamic facial expressions or tools for automating the creation of facial animations. Most existing methods for creating facial expressions of mixed emotions use operations like averaging to create the combined effect of two universal emotions. Such methods may be mathematically justifiable but are not necessarily valid from a perceptual point of view. The research reported here starts with user experiments aimed at understanding how people combine facial actions to express mixed emotions, and how viewers perceive a set of facial actions in terms of underlying emotions. Using the results of these experiments and a three-dimensional emotion model, we associate facial actions with dimensions and regions in the emotion space, and create a facial expression based on the location of the mixed emotion in the three-dimensional space. We call these regionalized facial actions "facial expression units."
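The final step the abstract describes, picking facial actions from the location of an emotion in a three-dimensional space, can be sketched as a nearest-region lookup. The region centers, axis interpretation (valence, arousal, dominance), and action lists below are invented placeholders, not the paper's actual "facial expression units."

```python
# Sketch: regionalized facial actions keyed by centers in a 3-D
# emotion space; a mixed emotion gets the actions of its nearest region.
import math

REGIONS = {
    # region center (valence, arousal, dominance) -> facial actions
    (0.8, 0.5, 0.3):   ["lip_corner_pull", "cheek_raise"],  # joy-like
    (-0.7, 0.6, -0.4): ["brow_raise", "jaw_drop"],          # fear-like
    (-0.6, -0.3, 0.5): ["brow_lower", "lip_press"],         # anger-like
}

def expression_units(emotion):
    """Return the action set of the region center nearest the emotion."""
    nearest = min(REGIONS, key=lambda center: math.dist(center, emotion))
    return REGIONS[nearest]

# A mixed emotion near the joy-like region picks up its actions.
actions = expression_units((0.7, 0.4, 0.2))
```

Unlike naive averaging of two universal expressions, this keeps the output inside a perceptually grounded region of the space.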

Document type: 
Article

Experiencing Belugas: Action selection for an interactive aquarium exhibit

Date created: 
2007
Abstract: 

This paper presents a case study of an action selection system designed with adaptive techniques to create a virtual beluga aquarium exhibit. The beluga interactive exhibit uses a realistic 3D simulation system that allows the virtual belugas, in a natural pod context, to learn and alter their behavior based on contextual visitor interaction. Ethogram information on beluga behavior was incorporated into the simulation, which uses physically based systems for natural whale locomotion and water, and artificial intelligence systems, including modified neural networks and a reactive hierarchical action selection mechanism, to simulate real-time natural individual beluga and group behavior. The beluga's behavioral system consists of two layers: a low-level navigation system and a high-level reactive hierarchical action selection system. The system is designed to run on consumer-level hardware while maintaining real-time speeds.
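The two-layer split the abstract describes can be sketched as a reactive rule hierarchy feeding a navigation layer. The behavior names, rule thresholds, and state fields below are illustrative assumptions, not the exhibit's actual ethogram-derived rules.

```python
# Sketch: high-level reactive selection picks a behavior; the low-level
# navigation layer turns that behavior into a steering target.
def select_behavior(state):
    """Reactive hierarchy: the first matching rule wins."""
    if state.get("visitor_nearby") and state.get("curiosity", 0) > 0.5:
        return "approach_visitor"
    if state.get("hunger", 0) > 0.7:
        return "forage"
    return "swim_with_pod"   # default: stay with the pod

def steering(behavior, state):
    """Low-level navigation: map the chosen behavior to a target point."""
    targets = {
        "approach_visitor": state.get("visitor_pos", (0, 0)),
        "forage": state.get("food_pos", (5, 5)),
        "swim_with_pod": state.get("pod_center", (2, 2)),
    }
    return targets[behavior]

state = {"visitor_nearby": True, "curiosity": 0.8, "visitor_pos": (1, 3)}
behavior = select_behavior(state)
target = steering(behavior, state)
```

Keeping the high-level layer to cheap rule checks and the low-level layer to local steering is what lets such a system hold real-time rates on consumer hardware.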

Document type: 
Preprint

Face as multimedia object

Date created: 
2004
Abstract: 

This paper proposes the Face Multimedia Object (FMO), and iFACE as a framework for implementing the face object within multimedia systems. FMO encapsulates all the functionality and data required for face animation. iFACE implements FMO and provides necessary interfaces for a variety of applications in order to access FMO services.
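The encapsulation idea, a face object exposing a small service interface so applications never touch animation internals, can be sketched as a class. The method names below are assumptions for illustration, not iFACE's actual API.

```python
# Sketch: a face multimedia object that queues face actions behind a
# narrow interface; the animation internals stay hidden from callers.
class FaceMultimediaObject:
    def __init__(self):
        self.queue = []          # pending face actions, in order

    def speak(self, text):
        """Queue a speech action (lip-sync would be derived internally)."""
        self.queue.append(("speak", text))

    def express(self, emotion, intensity):
        """Queue an emotional expression with an intensity in [0, 1]."""
        self.queue.append(("express", emotion, intensity))

    def render_next(self):
        """Pop the next queued action for the renderer, or None."""
        return self.queue.pop(0) if self.queue else None

face = FaceMultimediaObject()
face.speak("hello")
face.express("smile", 0.6)
first = face.render_next()
```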

Document type: 
Conference presentation

Authoring the Intimate Self: Identity, Expression and Role-playing within a Pioneering Virtual Community

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
2008
Abstract: 

We examine Traveler, a social-based 3D online virtual community with over ten years of continuous community use, as a case study. Traveler is a client-server application allowing real-time synchronous communication between individuals over the Internet. The Traveler client interface presents the user with a shared, user-created, virtual 3D world, in which participants are represented by avatars. The primary mode of communication is through multi-point, full-duplex voice, managed by the server. This paper reflects on the initial design goals of the developers in the mid-1990s to emulate natural social paradigms and, more recently, reports on how the online community uses distance-attenuated multi-point voice and open-ended 3D space construction to express themselves on both a personal and a collaborative level to facilitate a tight socially based community. This paper situates the historical importance of Traveler within the framework of contemporary virtual worlds and provides insights into the ways that this software platform might influence next-generation virtual communities.

Document type: 
Article

Exploring a Parameterized Portrait Painting Space

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
2009
Abstract: 

We overview our interdisciplinary work building parameterized knowledge domains and their authoring tools, which allow for expression systems that move through a space of painterly portraiture. With new computational systems it is possible to conceptually dance, compose, and paint in higher-level conceptual spaces. We are interested in building art systems that support exploring these spaces, and in particular we report on our software-based artistic toolkit and the resulting experiments using parameter spaces in face-based new media portraiture. This system allows us to parameterize the open cognitive and vision-based methodology that human artists have intuitively evolved over centuries into a domain toolkit to explore aesthetic realizations and interdisciplinary questions about the act of portrait painting as well as the general creative process. These experiments and questions can be explored by traditional and new media artists, art historians, cognitive scientists, and other scholars.

Document type: 
Article