While rotating visual and auditory stimuli have long been known to elicit self-motion illusions (“circular vection”), audiovisual interactions have hardly been investigated. Here, two experiments investigated whether visually induced circular vection can be enhanced by concurrently rotating auditory cues that match visual landmarks (e.g., a fountain sound). Participants sat behind a curved projection screen displaying rotating panoramic renderings of a marketplace. Apart from a no-sound condition, headphone-based auditory stimuli consisted of mono sound, ambient sound, or low-/high-spatial resolution auralizations using generic head-related transfer functions (HRTFs). While merely adding nonrotating (mono or ambient) sound showed no effects, moving sound stimuli facilitated both vection and presence in the virtual environment. This spatialization benefit was maximal for a medium (20 degrees × 15 degrees) field of view (FOV), reduced for a larger (54 degrees × 45 degrees) FOV, and unexpectedly absent for the smallest (10 degrees × 7.5 degrees) FOV. Increasing auralization spatial fidelity (from low, comparable to five-channel home theatre systems, to high, 5 degree resolution) provided no further benefit, suggesting a ceiling effect. In conclusion, both self-motion perception and presence can benefit from adding moving auditory stimuli. This has important implications both for multimodal cue integration theories and the applied challenge of building affordable yet effective motion simulators.
Recent enhancements in real-time graphics have facilitated the design of high-fidelity game environments with complex 3D worlds inhabited by animated characters. In such settings, it is hard, especially for the untrained eye, to attend to an object of interest. Neuroscience research as well as film and theatre practice have identified several visual properties, such as contrast, orientation, and color, that play a major role in channeling attention. In this paper, we discuss an adaptive lighting design system called ALVA (Adaptive Lighting for Visual Attention) that dynamically adjusts lighting color and brightness to enhance visual attention within game environments, using features identified in the neuroscience, psychophysics, and visual design literature. We also discuss preliminary results showing the utility of ALVA in directing players’ attention to important elements in a fast-paced 3D game, and thus enhancing the game experience, especially for non-gamers who are not visually trained to spot objects or characters in such complex 3D worlds.
Dancers express their feelings and moods through gestures and body movements. We seek to extend this mode of expression by dynamically and automatically adjusting music and lighting in the dance environment to reflect the dancer’s arousal state. Our intention is to offer a space that performance artists can use as a creative tool that extends the grammar of dance. To enable the dynamic manipulation of lighting and music, the performance space will be augmented with several sensors: physiological sensors worn by the dancers to measure their arousal states, as well as pressure sensors installed in a floor mat to track the dancers’ locations and movements. Data from these sensors will be passed to a three-layered architecture. Layer 1 is composed of a sensor analysis system that analyzes and synthesizes physiological and pressure sensor signals. Layer 2 is composed of intelligent systems that adapt lighting and music to portray the dancer’s arousal state. The intelligent on-stage lighting system dynamically adjusts on-stage lighting direction and color. The intelligent virtual lighting system dynamically adapts virtual lighting in the projected imagery. The intelligent music system dynamically and unobtrusively adjusts the music. Layer 3 translates the high-level adjustments made by the intelligent systems in Layer 2 into appropriate lighting board, image rendering, and audio box commands. In this paper, we will describe this architecture in detail as well as the equipment and control systems used.
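The three-layer pipeline described above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation; all names, signal-normalization constants, and the arousal formula are assumptions made up for the example.

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    # Hypothetical raw readings; real sensors and units may differ.
    heart_rate: float          # physiological sensor (bpm)
    skin_conductance: float    # galvanic skin response
    mat_pressure: list         # pressure-mat cells tracking position

def layer1_analyze(frame: SensorFrame) -> float:
    """Layer 1: fuse raw signals into a normalized arousal estimate in [0, 1]."""
    hr = min(max((frame.heart_rate - 60.0) / 80.0, 0.0), 1.0)
    sc = min(max(frame.skin_conductance / 20.0, 0.0), 1.0)
    return 0.5 * hr + 0.5 * sc   # illustrative equal-weight fusion

def layer2_adapt(arousal: float) -> dict:
    """Layer 2: map arousal to high-level lighting and music adjustments."""
    return {
        "light_color": "warm_red" if arousal > 0.6 else "cool_blue",
        "light_intensity": arousal,
        "music_tempo_scale": 0.8 + 0.4 * arousal,
    }

def layer3_translate(adjustments: dict) -> list:
    """Layer 3: translate high-level adjustments into device commands."""
    return [
        ("lighting_board", "set_color", adjustments["light_color"]),
        ("lighting_board", "set_dimmer", round(adjustments["light_intensity"] * 255)),
        ("audio_box", "set_tempo", adjustments["music_tempo_scale"]),
    ]

frame = SensorFrame(heart_rate=120.0, skin_conductance=12.0, mat_pressure=[0.1, 0.9])
commands = layer3_translate(layer2_adapt(layer1_analyze(frame)))
```

The point of the layering is that each stage only consumes the previous stage's abstraction: device-specific commands never appear above Layer 3, and raw sensor signals never appear above Layer 1.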
Although moving auditory cues have long been known to induce self-motion illusions (“circular vection”) in blindfolded participants, little is known about how spatial sound can facilitate or interfere with vection induced by other non-visual modalities like biomechanical cues. To address this issue, biomechanical circular vection was induced in seated, stationary participants by having them step sideways along a rotating floor (“circular treadmill”) turning at 60°/s (see Fig. 1, top). Three research hypotheses were tested by comparing four different sound conditions in combination with the same biomechanical vection-inducing stimulus.
Advances in generative and parametric CAD tools have enabled designers to create design representations that are responsive, adaptable, and flexible. However, the complexity of the models and the limitations of the human visual system pose challenges to effectively utilizing them for sensitivity analysis. In this prototyping study, we propose a method that aims to reduce these challenges by improving the visual analysis of a design model’s sensitivity to changes. It adapts the Model-View-Controller approach from software design to decouple control and visualization features from the design model, while providing interfaces between them through parametric associations. Case studies are presented to demonstrate the applicability and limitations of the method.
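The MVC-style decoupling described above can be illustrated with a toy parametric model. This is a sketch under assumptions, not the prototype from the study; the class names, the single derived quantity, and the observer mechanism are all invented for illustration.

```python
class ParametricModel:
    """Model: holds design parameters and a derived quantity."""
    def __init__(self):
        self.params = {"width": 10.0, "height": 4.0}
        self._observers = []

    def bind(self, observer):
        self._observers.append(observer)

    def set_param(self, name, value):
        self.params[name] = value
        for obs in self._observers:      # parametric association: push updates
            obs.update(self)

    def area(self):                      # derived quantity under analysis
        return self.params["width"] * self.params["height"]

class SensitivityView:
    """View: records how the derived quantity responds to parameter changes."""
    def __init__(self):
        self.history = []

    def update(self, model):
        self.history.append(model.area())

class Controller:
    """Controller: sweeps a parameter without touching visualization code."""
    def __init__(self, model):
        self.model = model

    def sweep(self, name, values):
        for v in values:
            self.model.set_param(name, v)

model = ParametricModel()
view = SensitivityView()
model.bind(view)
Controller(model).sweep("width", [10.0, 12.0, 14.0])
```

Because the view and controller reach the model only through the parameter interface, either can be replaced (e.g., a chart view swapped for a 3D render) without modifying the design model itself, which is the essence of the decoupling the method proposes.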
In this study, we experimentally evaluated two GUI prototypes (named "split" and "integrated") equivalent to those used in the domain of parametric CAD modeling. Participants were asked to perform a number of 3D model comprehension tasks using both interfaces. The tasks were grouped into three classes: parameterization, topological, and geometrical tasks. We measured task completion times, error rates, and user satisfaction for both interfaces. The experimental results showed that task completion times were significantly shorter with the "split" interface in all cases of interest: 1) tasks taken as a whole and 2) tasks viewed by task type. There was no significant difference in error rates between the interfaces; however, error rates were significantly higher for parameterization tasks under both interfaces. User satisfaction was significantly higher for the "split" interface. The study gave us a better understanding of human performance in perceiving and comprehending parametric CAD models and offered insight into the usability aspects of the two studied interfaces; we also believe the knowledge obtained could be of practical utility to implementers of parametric CAD modeling packages.
This essay traverses a heterogeneous terrain, finding important links in the ideas of Jacques Derrida and John Cage, and relating these to diverse cultural topics such as film soundtrack design, audio art, Saussurean linguistics, the sound and light shows at the Egyptian pyramids, the analogic nature of digital information, and cybernetics. Furthermore, the essay attempts to create some bridges - through the concept of "perceptual différance" - between the divergent world pictures (to use Heidegger's term) of cognitive psychology (with its quantitative frame of analysis) and the more slippery domain of hermeneutics.
The design goal for OnLive’s Internet-based Virtual Community system was to develop avatars and virtual communities where the participants sense a tele-presence – that they are really there in the virtual space with other people. This collective sense of "being-there" does not happen over the phone or with teleconferencing; it is a new and emerging phenomenon, unique to 3D virtual communities. While this group presence paradigm is a simple idea, the design and technical issues that must be addressed to achieve it on Internet-based, consumer PC platforms are complex. The design approach relies heavily on the following immersion-based techniques:
· 3D distance-attenuated voice and sound with stereo "hearing"
· a 3D navigation scheme that strives to be as comfortable as walking around
· an immersive first-person user interface with a human vision camera angle
· individualized 3D head avatars that breathe, have emotions, and lip sync
· 3D space design that is geared toward human social interaction.
Programmatic Formation explores design as a responsive process. The study we present engages the complexity of the surroundings using parametric and generative design methods. It illustrates that responsiveness of designs can be achieved beyond geometric explorations. The parametric models can combine and respond simultaneously to a design and its programmatic factors, such as performance-sensitive design decisions and constraints. We demonstrate this through a series of case studies for a housing tower. The studies explore the extent to which non-spatial parameters can be incorporated into spatial parametric dependencies in design. The results apply digital design and modeling, common to the curriculum of architecture schools, to the practical realm of building design and city planning. While practitioners are often slow to include contemporary design and planning methods in their daily work, the research illustrates how skills and knowledge acquired as part of a university education can be effectively incorporated into everyday design and planning.
Interactive Face Animation - Comprehensive Environment (iFACE) is a general-purpose software framework that encapsulates the functionality of “face multimedia object” for a variety of interactive applications such as games and online services. iFACE exposes programming interfaces and provides authoring and scripting tools to design a face object, define its behaviours, and animate it through static or interactive situations. The framework is based on four parameterized spaces of Geometry, Mood, Personality, and Knowledge that together form the appearance and behaviour of the face object. iFACE can function as a common “face engine” for design and runtime environments to simplify the work of content and software developers.
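The four parameterized spaces described above can be sketched as a toy face object. This is an illustrative sketch only, not the iFACE API; the class names, fields, and the way personality modulates mood are assumptions invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Geometry:
    # Normalized geometric controls; a real engine would drive mesh vertices.
    mouth_open: float = 0.0       # 0 = closed, 1 = fully open
    brow_raise: float = 0.0

@dataclass
class Mood:
    valence: float = 0.0          # -1 (negative) .. +1 (positive)
    arousal: float = 0.0          # 0 (calm) .. 1 (excited)

@dataclass
class Personality:
    expressiveness: float = 1.0   # scales how strongly mood shows on the face

@dataclass
class Knowledge:
    script: list = field(default_factory=list)   # queued behaviours/utterances

class FaceObject:
    """Combines the four parameter spaces into one animatable face state."""
    def __init__(self):
        self.geometry = Geometry()
        self.mood = Mood()
        self.personality = Personality()
        self.knowledge = Knowledge()

    def apply_mood(self):
        # Personality modulates how mood maps onto geometry (invented mapping).
        k = self.personality.expressiveness
        self.geometry.brow_raise = max(0.0, k * self.mood.arousal)
        self.geometry.mouth_open = max(0.0, k * self.mood.valence)

face = FaceObject()
face.mood = Mood(valence=0.8, arousal=0.5)
face.personality.expressiveness = 0.5
face.apply_mood()
```

The separation mirrors the framework's premise: content developers script the Knowledge space, designers tune Personality and Mood, and the engine resolves everything into Geometry at runtime.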