SIAT Faculty Publications

Face modeling and animation language for MPEG-4 XMT framework

Date created: 
2007-10
Abstract: 

This paper proposes FML, an XML-based face modeling and animation language. FML provides a structured content description method for multimedia presentations based on face animation. The language can be used as direct input to compatible players, or compiled within the MPEG-4 XMT framework to create MPEG-4 presentations. The language allows parallel and sequential action description, decision-making and dynamic event-based scenarios, model configuration, and behavioral template definition. Facial actions include talking, expressions, head movements, and low-level MPEG-4 FAPs. The ShowFace and iFACE animation frameworks are also reviewed as example FML-based animation systems.
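
As a rough illustration of the structured descriptions the abstract refers to, the following Python sketch assembles a hypothetical FML-style XML document containing one parallel and one sequential action block. The element and attribute names are illustrative assumptions, not the published FML schema.

    # Minimal sketch of building an FML-style XML description in Python.
    # Element and attribute names are illustrative assumptions, not the published FML schema.
    import xml.etree.ElementTree as ET

    fml = ET.Element("fml")
    story = ET.SubElement(fml, "story")

    # Hypothetical parallel block: talk while showing an expression.
    par = ET.SubElement(story, "par")
    ET.SubElement(par, "talk").text = "Hello there"
    ET.SubElement(par, "expression", {"type": "smile", "intensity": "0.6"})

    # Hypothetical sequential block: a head movement followed by a low-level FAP.
    seq = ET.SubElement(story, "seq")
    ET.SubElement(seq, "head", {"direction": "nod"})
    ET.SubElement(seq, "fap", {"id": "31", "value": "120"})  # e.g. raise an eyebrow

    print(ET.tostring(fml, encoding="unicode"))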

Document type: 
Article

Experiencing Belugas: Action selection for an interactive aquarium exhibit

Date created: 
2007
Abstract: 

This paper presents a case study of an action selection system designed with adaptive techniques to create a virtual beluga aquarium exhibit. The interactive beluga exhibit uses a realistic 3D simulation system that allows the virtual belugas, in a natural pod context, to learn and alter their behavior based on contextual visitor interaction. Ethogram information on beluga behavior was incorporated into the simulation, which uses physically based systems for natural whale locomotion and water, together with artificial intelligence systems, including modified neural networks and a reactive hierarchical action selection mechanism, to simulate real-time natural individual beluga and group behavior. The beluga's behavioral system consists of two layers: a low-level navigation system and a high-level reactive hierarchical action selection system. The system is designed to run on consumer-level hardware while maintaining real-time speeds.
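
A minimal sketch of the two-layer design described above, assuming a high-level reactive hierarchy that picks the highest-priority applicable behaviour and hands a goal to a low-level navigation step; the behaviour names, trigger conditions, and world-state fields are invented for illustration and are not taken from the actual exhibit.

    # Sketch of a reactive hierarchical action selection loop layered over low-level navigation.
    # Behaviour names, conditions, and the world-state fields are illustrative assumptions.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Behaviour:
        name: str
        priority: int                      # higher wins
        applies: Callable[[dict], bool]    # trigger condition on the perceived world state
        goal: Callable[[dict], tuple]      # target position handed to the navigation layer

    behaviours = [
        Behaviour("avoid_wall",      3, lambda s: s["dist_to_wall"] < 2.0, lambda s: s["pod_centre"]),
        Behaviour("approach_guest",  2, lambda s: s["visitor_active"],     lambda s: s["visitor_pos"]),
        Behaviour("school_with_pod", 1, lambda s: True,                    lambda s: s["pod_centre"]),
    ]

    def select_action(state: dict) -> tuple:
        """High-level layer: highest-priority applicable behaviour."""
        chosen = max((b for b in behaviours if b.applies(state)), key=lambda b: b.priority)
        return chosen.goal(state)

    def navigate_towards(position: tuple, target: tuple, speed: float = 0.1) -> tuple:
        """Low-level layer: one step of simple steering toward the target."""
        return tuple(p + speed * (t - p) for p, t in zip(position, target))

    state = {"dist_to_wall": 5.0, "visitor_active": True,
             "visitor_pos": (1.0, 0.0, -2.0), "pod_centre": (0.0, 0.0, 0.0)}
    print(navigate_towards((4.0, 0.0, 4.0), select_action(state)))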

Document type: 
Preprint

Face as multimedia object

Date created: 
2004
Abstract: 

This paper proposes the Face Multimedia Object (FMO), and iFACE as a framework for implementing the face object within multimedia systems. FMO encapsulates all the functionality and data required for face animation. iFACE implements FMO and provides the necessary interfaces for a variety of applications to access FMO services.
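
The encapsulation idea can be pictured with a small sketch such as the following, in which a face object bundles model data, speech, and expression behaviour behind a single interface. The class and method names are assumptions made for illustration and do not reflect the actual iFACE API.

    # Sketch of a "face multimedia object" that encapsulates face data and animation behaviour.
    # Class and method names are illustrative assumptions, not the actual iFACE interfaces.
    class FaceMultimediaObject:
        def __init__(self, model_name: str):
            self.model_name = model_name       # underlying head geometry / texture set
            self.expression = "neutral"
            self.timeline = []                 # queued animation events

        def speak(self, text: str) -> None:
            """Queue lip-synced speech (a real system would drive visemes from audio)."""
            self.timeline.append(("speak", text))

        def show_expression(self, name: str, intensity: float = 1.0) -> None:
            """Queue an expression change."""
            self.timeline.append(("expression", name, intensity))

        def render_frame(self, t: float) -> dict:
            """Applications call this each frame; here it just reports the queued state."""
            return {"model": self.model_name, "time": t, "pending": list(self.timeline)}

    # An application talks only to the object's interface, not to the underlying data.
    face = FaceMultimediaObject("demo_head")
    face.speak("Hello")
    face.show_expression("smile", 0.5)
    print(face.render_frame(0.0))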

Document type: 
Conference presentation

Authoring the Intimate Self: Identity, Expression and Role-playing within a Pioneering Virtual Community

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
2008
Abstract: 

We examine Traveler, a social-based 3D online virtual community with over ten years of continuous community use, as a case study. Traveler is a client-server application allowing real-time synchronous communication between individuals over the Internet. The Traveler client interface presents the user with a shared, user-created, virtual 3D world in which participants are represented by avatars. The primary mode of communication is multi-point, full-duplex voice, managed by the server. This paper reflects on the developers' initial design goals in the mid 1990s to emulate natural social paradigms and, more recently, reports on how members of the online community use distance-attenuated multi-point voice and open-ended 3D space construction to express themselves on both a personal and a collaborative level and to sustain a tight, socially based community. This paper situates the historical importance of Traveler within the framework of contemporary virtual worlds and provides insights into the ways this software platform might influence next-generation virtual communities.

Document type: 
Article

Exploring a Parameterized Portrait Painting Space

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
2009
Abstract: 

We give an overview of our interdisciplinary work building parameterized knowledge domains and their authoring tools, which allow for expression systems that move through a space of painterly portraiture. With new computational systems it is possible to conceptually dance, compose, and paint in higher-level conceptual spaces. We are interested in building art systems that support exploring these spaces and, in particular, report on our software-based artistic toolkit and the resulting experiments using parameter spaces in face-based new media portraiture. This system allows us to parameterize the open, cognitive, vision-based methodology that human artists have intuitively evolved over centuries into a domain toolkit for exploring aesthetic realizations and interdisciplinary questions about the act of portrait painting as well as the general creative process. These experiments and questions can be explored by traditional and new media artists, art historians, cognitive scientists, and other scholars.
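
One way to picture moving through such a parameter space is simple interpolation between named style presets, as in the hedged Python sketch below; the parameter names and preset values are invented for illustration and are not the toolkit's actual domain parameters.

    # Sketch of stepping through a parameterized portrait-painting space by interpolating
    # between two hand-picked style presets. Parameter names and values are illustrative assumptions.
    loose_style = {"brush_size": 18.0, "stroke_curvature": 0.8, "palette_warmth": 0.7, "detail_level": 0.2}
    tight_style = {"brush_size": 4.0,  "stroke_curvature": 0.2, "palette_warmth": 0.4, "detail_level": 0.9}

    def interpolate(a: dict, b: dict, t: float) -> dict:
        """Linear blend of two points in the style parameter space (t in [0, 1])."""
        return {k: (1.0 - t) * a[k] + t * b[k] for k in a}

    # Sample five points along the path from a loose to a tight rendering style.
    for i in range(5):
        t = i / 4.0
        print(round(t, 2), interpolate(loose_style, tight_style, t))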

Document type: 
Article

Incorporating Characteristics of Human Creativity into an Evolutionary Art Algorithm

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
2009
Abstract: 

A perceived limitation of evolutionary art and design algorithms is that they rely on human intervention; the artist selects the most aesthetically pleasing variants of one generation to produce the next. This paper discusses how computer-generated art and design can become more creatively human-like with respect to both process and outcome. As an example of a step in this direction, we present an algorithm that overcomes the above limitation by employing an automatic fitness function. The goal is to evolve abstract portraits of Darwin, using our second-generation fitness function, which rewards genomes that not only produce a likeness of Darwin but also exhibit certain strategies characteristic of human artists. We note that in human creativity, change is less a matter of choosing among randomly generated variants and more a matter of capitalizing on the associative structure of a conceptual network to home in on a vision. We discuss how to achieve this fluidity algorithmically.
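
The automatic fitness idea can be sketched as a standard evolutionary loop whose fitness combines a likeness term with a bonus for an artist-like strategy, as below. The target features, the likeness measure, and the strategy bonus are placeholders invented for illustration, not the paper's actual second-generation fitness function.

    # Sketch of an evolutionary loop with an automatic (non-interactive) fitness function.
    # The target, the likeness measure, and the strategy bonus are illustrative placeholders.
    import random

    TARGET = [0.2, 0.8, 0.5, 0.9]          # stand-in for features of the target portrait

    def likeness(genome):
        """Higher when the genome's features are closer to the target."""
        return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

    def strategy_bonus(genome):
        """Placeholder reward for an 'artist-like' property, e.g. smooth variation between genes."""
        return -sum(abs(a - b) for a, b in zip(genome, genome[1:])) * 0.1

    def fitness(genome):
        return likeness(genome) + strategy_bonus(genome)

    def mutate(genome, rate=0.1):
        return [g + random.gauss(0, rate) for g in genome]

    population = [[random.random() for _ in TARGET] for _ in range(20)]
    for generation in range(50):
        population.sort(key=fitness, reverse=True)
        parents = population[:5]                         # automatic selection, no human in the loop
        population = parents + [mutate(random.choice(parents)) for _ in range(15)]

    print("best fitness:", round(fitness(max(population, key=fitness)), 4))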

Document type: 
Article

Perceptually Valid Facial Expressions for Character-Based Applications

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
2009-01-14
Abstract: 

This paper addresses the problem of creating facial expressions of mixed emotions in a perceptually valid way. The research has been done in the context of “game-like” health and education applications aimed at studying social competency and facial expression awareness in autistic children as well as native language learning, but the results can be applied to many other applications, such as games that need dynamic facial expressions or tools for automating the creation of facial animations. Most existing methods for creating facial expressions of mixed emotions use operations such as averaging to create the combined effect of two universal emotions. Such methods may be mathematically justifiable but are not necessarily valid from a perceptual point of view. The research reported here starts with user experiments aimed at understanding how people combine facial actions to express mixed emotions, and how viewers perceive a set of facial actions in terms of underlying emotions. Using the results of these experiments and a three-dimensional emotion model, we associate facial actions with dimensions and regions in the emotion space, and create a facial expression based on the location of the mixed emotion in the three-dimensional space. We call these regionalized facial actions “facial expression units.”
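
The mapping from a point in a three-dimensional emotion space to a set of facial actions can be pictured with a small lookup sketch such as the one below. The axis names, thresholds, and action lists are assumptions made for illustration and are not the experimentally derived facial expression units of the paper.

    # Sketch of mapping a point in a 3D emotion space to regionalized facial actions.
    # Axis names, thresholds, and action lists are illustrative assumptions.
    def expression_units(valence: float, arousal: float, dominance: float) -> list:
        """Return a list of (facial_action, weight) pairs for a point in emotion space."""
        actions = []
        if valence > 0.2:
            actions.append(("lip_corner_puller", min(1.0, valence)))        # smile-like
        elif valence < -0.2:
            actions.append(("lip_corner_depressor", min(1.0, -valence)))    # frown-like
        if arousal > 0.5:
            actions.append(("upper_lid_raiser", arousal))                    # wide eyes
        if dominance < -0.3:
            actions.append(("inner_brow_raiser", -dominance))                # appeasing brows
        return actions

    # A mixed emotion, e.g. pleasantly surprised but slightly submissive:
    print(expression_units(valence=0.6, arousal=0.8, dominance=-0.4))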

Document type: 
Article

Rembrandt's Textural Agency: A Shared Perspective in Visual Art and Science

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
2010-04
Abstract: 

This interdisciplinary paper hypothesizes that Rembrandt developed new painterly techniques — novel to the early modern period — in order to engage and direct the gaze of the observer. Though these methods were not based on scientific evidence at the time, we show that they nonetheless are consistent with a contemporary understanding of human vision. Here we propose that artists in the late ‘early modern’ period developed the technique of textural agency — involving selective variation in image detail — to guide the observer’s eye and thereby influence the viewing experience. The paper begins by establishing the well-known use of textural agency among modern portrait artists, before considering the possibility that Rembrandt developed these techniques in his late portraits in reaction to his Italian contemporaries. A final section brings the argument full circle, with the presentation of laboratory evidence that Rembrandt’s techniques indeed guide the modern viewer’s eye in the way we propose.
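
The selective variation in image detail described above can be mimicked digitally with a spatially varying blur that keeps one region crisp, as in the sketch below. It requires the Pillow library, and the image and the preserved region are invented purely to illustrate the idea.

    # Sketch: emulate "textural agency" by blurring everything except one region of detail.
    # Requires Pillow; the image and the preserved region are invented for illustration.
    from PIL import Image, ImageDraw, ImageFilter

    portrait = Image.new("RGB", (400, 400), "gray")          # stand-in for a portrait image
    ImageDraw.Draw(portrait).ellipse((150, 100, 250, 200), fill="white")  # stand-in for a face area

    blurred = portrait.filter(ImageFilter.GaussianBlur(radius=6))

    # Mask: white where detail is preserved (the area meant to attract the gaze), black elsewhere.
    mask = Image.new("L", portrait.size, 0)
    ImageDraw.Draw(mask).ellipse((150, 100, 250, 200), fill=255)

    # The composite keeps the sharp pixels inside the mask and the blurred pixels outside it.
    selective_detail = Image.composite(portrait, blurred, mask)
    selective_detail.save("textural_agency_demo.png")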

Document type: 
Article

3D natural emulation design approach to virtual communities

Peer reviewed: 
No, item is not peer reviewed.
Date created: 
2010-05-31
Abstract: 

The design goal for OnLive's Internet-based Virtual Community system was to develop avatars and virtual communities where the participants sense a tele-presence – that they are really there in the virtual space with other people. This collective sense of "being-there" does not happen over the phone or with teleconferencing; it is a new and emerging phenomenon, unique to 3D virtual communities. While this group presence paradigm is a simple idea, the design and technical issues needed to begin to achieve this on Internet-based, consumer PC platforms are complex. This design approach relies heavily on the following immersion-based techniques:
· 3D distance-attenuated voice and sound with stereo "hearing"
· a 3D navigation scheme that strives to be as comfortable as walking around
· an immersive first-person user interface with a human vision camera angle
· individualized 3D head avatars that breathe, have emotions, and lip sync
· 3D space design that is geared toward human social interaction
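
The first of the listed techniques, distance-attenuated stereo voice, can be sketched with a simple gain-and-pan computation such as the one below; the falloff model and constants are illustrative assumptions, not OnLive's actual audio pipeline.

    # Sketch of 3D distance-attenuated, stereo-panned voice for an avatar listener.
    # The inverse-distance falloff and the panning model are illustrative assumptions.
    import math

    def voice_gains(listener_pos, listener_facing, speaker_pos, rolloff=1.0, min_dist=1.0):
        """Return (left_gain, right_gain) for a speaker heard by a listener in the 3D world."""
        dx = speaker_pos[0] - listener_pos[0]
        dz = speaker_pos[2] - listener_pos[2]
        distance = max(min_dist, math.sqrt(dx * dx + dz * dz))
        gain = min_dist / (min_dist + rolloff * (distance - min_dist))   # inverse-distance falloff

        # Pan from the angle between the listener's facing direction and the speaker.
        angle = math.atan2(dx, dz) - math.atan2(listener_facing[0], listener_facing[2])
        pan = math.sin(angle)                      # -1 = hard left, +1 = hard right
        return gain * (1.0 - pan) / 2.0, gain * (1.0 + pan) / 2.0

    # A speaker 10 m away and slightly to the listener's right sounds quiet and right-weighted.
    print(voice_gains((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), (3.0, 0.0, 10.0)))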

Document type: 
Other