SIAT Faculty Publications


Socially expressive communication agents: A face-centric approach

Date created: 

Interactive Face Animation - Comprehensive Environment (iFACE) is a general purpose
software framework that encapsulates the functionality of a “face multimedia object”.
iFACE exposes programming interfaces and provides authoring and scripting tools to design a
face object, define its behaviors, and animate it through static or interactive situations. The
framework is based on four parameterized spaces of Geometry, Mood, Personality, and
Knowledge that together form the appearance and behavior of the face object. iFACE
capabilities are demonstrated within the context of several artistic and educational projects.

Document type: 
Conference presentation

Designing an adaptive multimedia interactive to support shared learning experiences

Date created: 

With the aid of new technologies, integrated design approaches
are becoming increasingly incorporated into exhibit design in
museums, aquaria and science centres. These settings share many
similar design constraints that need to be addressed when
designing multimedia interactives as exhibits. The use of adaptive
systems and techniques can overcome many of the constraints
inherent in these environments as well as enhance the educational
content they incorporate. Our main design goal was to facilitate a
process to create user centric, collaborative and reflective learning
spaces around the smart multimedia interactives. We were
interested in encouraging deeper exploration of the content than
what is typically possible through wall signage, video display or a
supplemental web page. We discuss techniques to bring adaptive
systems into public informal learning settings, and validate these
techniques in a major aquarium with a beluga simulation
interactive. The virtual belugas, in a natural pod context, learn and
alter their behavior based on contextual visitor interaction. Data
from researchers, aquarium staff and visitors were incorporated
into the evolving interactive, which uses physically based systems
for natural whale locomotion and water, and artificial intelligence
systems to simulate natural behavior, all of which respond to
user input. The interactive allows visitors to engage in educational
"what-if" scenarios of wild beluga emergent behavior using a
shared tangible interface controlling a large screen display. © ACM, 2006. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive
version was published in ACM SIGGRAPH 2006 Educators Program (p. 14). Boston, Massachusetts: ACM. doi:10.1145/1179295.1179310

Document type: 
Conference presentation

Emotional remapping of music to facial animation

Date created: 

We propose a method to extract the emotional data from a piece
of music and then use that data via a remapping algorithm to
automatically animate an emotional 3D face sequence. The
method is based on studies of the emotional aspect of music and
our parametric-based behavioral head model for face animation.
We address the issue of affective communication remapping in
general, i.e. the translation of affective content (e.g. emotions and
mood) from one communication form to another. We report on
the results of our MusicFace system, which uses these techniques
to automatically create emotional facial animations from
multi-instrument polyphonic music scores in MIDI format and a
remapping rule set. © ACM, 2006. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive
version was published in Proceedings of the 2006 ACM SIGGRAPH symposium on Videogames, 143-149. Boston, Massachusetts: ACM. doi:10.1145/1183316.1183337
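As a rough illustration of the remapping idea described above, the sketch below maps coarse score-level features onto a two-dimensional emotion point (valence/arousal) and then onto facial expression weights. All function names, feature choices, and coefficients here are illustrative assumptions, not the authors' actual MusicFace rule set.

```python
def musical_features_to_emotion(tempo_bpm, mode, mean_velocity):
    """Map simple score-level features to a (valence, arousal) pair in [-1, 1].

    Heuristic assumption: faster tempo and louder dynamics raise arousal;
    major mode tends toward positive valence, minor toward negative."""
    arousal = max(-1.0, min(1.0, (tempo_bpm - 100) / 80 + (mean_velocity - 64) / 128))
    valence = 0.6 if mode == "major" else -0.6
    return valence, arousal

def emotion_to_expression_weights(valence, arousal):
    """Translate the emotion point into blend weights for facial expressions."""
    return {
        "smile": max(0.0, valence) * (0.5 + 0.5 * arousal),
        "frown": max(0.0, -valence) * (0.5 + 0.5 * arousal),
        "eyebrow_raise": max(0.0, arousal),
    }

# Example: a fast, loud, major-key passage yields a happy, energetic face.
v, a = musical_features_to_emotion(tempo_bpm=140, mode="major", mean_velocity=90)
weights = emotion_to_expression_weights(v, a)
```

In the actual system the mapping is driven by a remapping rule set over richer musical features; the point of the sketch is only the two-stage structure: music features to emotion space, then emotion space to facial parameters.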

Document type: 
Conference presentation

Multispace behavioral model for face-based affective social agents

Date created: 

This paper describes a behavioral model for affective social agents based on three independent but interacting parameter spaces:
knowledge, personality, and mood. These spaces control a lower-level geometry space that provides parameters at the facial feature
level. Personality and mood use findings in behavioral psychology to relate the perception of personality types and emotional
states to the facial actions and expressions through two-dimensional models for personality and emotion. Knowledge encapsulates
the tasks to be performed and the decision-making process using a specially designed XML-based language. While the geometry
space provides an MPEG-4 compatible set of parameters for low-level control, the behavioral extensions available through the
triple spaces provide flexible means of designing complicated personality types, facial expression, and dynamic interactive scenarios.
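The layering described above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the two-dimensional personality and emotion models are stand-ins, and the geometry parameter names only gesture at the MPEG-4 FAP-style controls the real system exposes.

```python
from dataclasses import dataclass

@dataclass
class BehaviouralState:
    # Two-dimensional emotion model (mood space), both axes in [-1, 1].
    valence: float = 0.0
    arousal: float = 0.0
    # Two-dimensional personality model, both axes in [-1, 1].
    dominance: float = 0.0
    affiliation: float = 0.0

def resolve_geometry(state):
    """Collapse the higher-level spaces into feature-level geometry parameters.

    The parameter names and mixing weights are hypothetical; in the paper the
    geometry space is an MPEG-4 compatible parameter set."""
    return {
        "mouth_corner_raise": 0.7 * max(0.0, state.valence) + 0.3 * state.affiliation,
        "brow_lower": 0.5 * max(0.0, -state.valence) + 0.4 * state.dominance,
        "eye_openness": 0.5 + 0.5 * state.arousal,
    }

# A pleasant, mildly aroused state produces a raised mouth corner and open eyes.
geometry = resolve_geometry(BehaviouralState(valence=0.8, arousal=0.2))
```

The design point is the one the abstract makes: animation clients only ever see the low-level geometry parameters, while personality, mood, and knowledge remain independent spaces that modulate them.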

Document type: 

Ratava's line: Emergent learning and design using collaborative virtual worlds

Date created: 

Ratava's Line is an online, 3D virtual world fashion and interactive
narrative project created collaboratively by students at both the
Fashion Institute of Technology (FIT) in New York City and at
Interactive Arts at Simon Fraser University (SFU) in Vancouver,
Canada, using emergent, collaborative 2D and 3D systems. This
distance learning project, developed over two months and
culminating in an online event in multiple, remote locations,
integrated three key design elements: the translation of original 2D
fashion designs from FIT students into 3D avatar space; exhibits of
artwork of student and professional artists from New York City and
Vancouver in virtual galleries; and creation of an interactive
narrative "fashion cyber-mystery" for online users to participate in
and solve in a culminating, cyber-physical event. The overall
project goal was to explore how online collaboration systems and
virtual environments can be used practically for distance learning,
fashion and virtual worlds design, development of new marketing
tools including virtual portfolios, and creation of cross-cultural
online/physical events. The result of this process was an
interdisciplinary, cross-institutional, international effort in
collaborative design in virtual environments, and a successful
exercise in emergent, collaborative distance learning. © ACM, 2004. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in International Conference on Computer Graphics and Interactive Techniques, page 25. (2004).

Document type: 
Conference presentation

A social metaphor-based 3D virtual environment

Date created: 

Our design goal for OnLive Traveler was to develop a virtual
community system that emulates natural social paradigms,
allowing the participants to sense a tele-presence, the subjective
sensation that remote users are actually co-located within a
virtual space. Once this level of immersive "sense of presence"
and engagement is achieved, we believe an enhanced level of
socialization, learning, and communication is achievable.
OnLive Traveler is a client-server application allowing real-time
synchronous communication between individuals over the
Internet. The Traveler client interface presents the user with a
shared virtual 3D world, in which participants are represented by
avatars. The primary mode of communication is through
multi-point, full-duplex voice, managed by the server.
We examine a number of very specific design and
implementation decisions that were made to achieve this goal
within platform constraints. We also detail some observed
results gleaned from the virtual community and virtual learning
user base, which has been using Traveler for several years.
© ACM, 2003. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive
version was published in International Conference on Computer Graphics and Interactive Techniques. Pages 1-2. (2003)

Document type: 
Conference presentation

Rembrandt's textural agency: A shared perspective in visual art and science

Date created: 

This interdisciplinary paper hypothesizes that Rembrandt developed new painterly techniques —
novel to the early modern period — in order to engage and direct the gaze of the observer.
Though these methods were not based on scientific evidence at the time, we show that they
nonetheless are consistent with a contemporary understanding of human vision. Here we propose
that artists in the late ‘early modern’ period developed the technique of textural agency —
involving selective variation in image detail — to guide the observer’s eye and thereby influence
the viewing experience. The paper begins by establishing the well-known use of textural agency
among modern portrait artists, before considering the possibility that Rembrandt developed these
techniques in his late portraits in reaction to his Italian contemporaries. A final section brings the
argument full circle, with the presentation of laboratory evidence that Rembrandt’s techniques
indeed guide the modern viewer’s eye in the way we propose.

Document type: 

Simulating face to face collaboration for interactive learning systems

Date created: 

The use of Problem-Based Learning (PBL) in medical education and other educational settings has escalated. PBL's strength in learning is mostly due to its collaborative and open-ended problem solving approach. Traditional PBL was designed to be used in live team environments rather than in an online setting. We describe research that enables web-based PBL across geographically distributed physical locations, preserving PBL's collaboration and open brainstorming approach using interactive web, gaming and simulation techniques. We describe Interactive Face Animation - Comprehensive Environment (iFACE), which allows for expressive voice based character agents, along with Collaborative Online Multimedia Problem-based Simulation Software (COMPS), which integrates iFACE within a customizable web-based collaboration system. COMPS creates an XML-based multimedia communication medium that is effective for group based case presentations, discussions and other PBL activities.

Document type: 
Conference presentation

Face modeling and animation language for MPEG-4 XMT framework

Date created: 

This paper proposes FML, an XML-based face modeling and animation language. FML provides a structured content description method for multimedia presentations based on face animation. The language can be used as direct input to compatible players, or be compiled within the MPEG-4 XMT framework to create MPEG-4 presentations. The language allows parallel and sequential action description, decision-making and dynamic event-based scenarios, model configuration, and behavioral template definition. Facial actions include talking, expressions, head movements, and low-level MPEG-4 FAPs. The ShowFace and iFACE animation frameworks are also reviewed as example FML-based animation systems.
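To make the parallel/sequential action structure concrete, the sketch below builds a small FML-like document with Python's ElementTree. The element and attribute names are illustrative guesses at the flavor of the language, not the published FML schema.

```python
import xml.etree.ElementTree as ET

# Root of a hypothetical FML-style script.
fml = ET.Element("fml")
story = ET.SubElement(fml, "story")

# Sequential block: actions run one after another.
seq = ET.SubElement(story, "seq")
ET.SubElement(seq, "talk").text = "Hello"
ET.SubElement(seq, "expression", {"type": "smile", "value": "80"})

# Parallel block: a head movement overlaps the next utterance.
par = ET.SubElement(story, "par")
ET.SubElement(par, "hdmv", {"dir": "nod"})
ET.SubElement(par, "talk").text = "Nice to meet you"

document = ET.tostring(fml, encoding="unicode")
```

A compatible player would walk such a tree, executing `seq` children in order and launching `par` children concurrently; the decision-making and event-handling constructs the abstract mentions would appear as further element types in the same tree.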

Document type: 

Face, portrait, mask: The virtuality of the synthetic face

Date created: 

With new technological artistic tools that allow us to author 3D computer-generated faces that can be as real, as abstract, or as iconified as we choose, what aesthetic and cultural communication language do we elicit? Is it the historically rich language of the fine art portrait, the abstracted artifact of the human face? What happens when this portrait animates, conveying lifelike human facial emotion? Does it cease to be a portrait and instead move into the realm of the embodied face when it begins to emote and feel and possibly react to the viewer? Is it then more in the language of the animated character, or, as we make it photo-realistic, the language of the video actor with a deep dramatic back-story, or simply as real as a person on the other side of the screen? A viewer cannot be rude to a portrait but can feel that they are being rude to an interactive character in an art installation. When does it become neither an embodied face nor a portrait but a mask: the icon that speaks of face but is never embodied? Masks also have a deep cultural, historic and ethnic language far different from that of human faces or art portraits, one more Eastern than the Western portrait tradition. Iconized faces (i.e. the smiley face or the emoticon) carry the mask through to the Western modern world of technology.

Document type: