Using Music to Interact with a Virtual Character
Related papers
Visualizing Emotion in Musical Performance Using a Virtual Character
2005
We describe an immersive music visualization application which enables interaction between a live musician and a responsive virtual character. The character reacts to live performance in such a way that it appears to be experiencing an emotional response to the music it ‘hears.’ We modify an existing tonal music encoding strategy in order to define how the character perceives and organizes musical information. We reference existing research correlating musical structures and composers’ emotional intention in order to simulate cognitive processes capable of inferring emotional meaning from music. The ANIMUS framework is used to define a synthetic character who visualizes its perception and cognition of musical input by exhibiting responsive behaviour expressed through animation.
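The paper itself does not publish code; the following is only a minimal sketch of the kind of structure-to-emotion inference described above, under our own assumptions. The percept fields, thresholds, and the valence/arousal output are illustrative stand-ins (loosely following the commonly reported correlation of fast tempo and major mode with positive, high-energy affect), not the ANIMUS character's actual perception or cognition layers.

```python
# Illustrative sketch only: feature names and thresholds are assumptions,
# not the paper's ANIMUS-based perception/cognition model.

from dataclasses import dataclass

@dataclass
class MusicalPercept:
    tempo_bpm: float      # estimated beat rate of the live performance
    mode_major: bool      # True if the recent tonal context sounds major
    loudness: float       # normalized RMS level, 0.0 - 1.0

def infer_emotion(p: MusicalPercept) -> dict:
    """Map coarse musical structure to a simple valence/arousal state."""
    valence = 0.5 + (0.3 if p.mode_major else -0.3)
    arousal = min(1.0, 0.4 * p.loudness + 0.6 * (p.tempo_bpm / 180.0))
    return {"valence": max(0.0, min(1.0, valence)), "arousal": arousal}

if __name__ == "__main__":
    # A loud, fast passage in a major key -> high-arousal, positive state
    print(infer_emotion(MusicalPercept(tempo_bpm=150, mode_major=True, loudness=0.8)))
```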
Virtual character performance from speech
Our method can synthesize a virtual character performance from only an audio signal and a transcription of its word content. The character will perform semantically appropriate facial expressions and body movements that include gestures, lip synchronization to speech, head movements, saccadic eye movements, blinks and so forth. Our method can be used in various applications, such as previsualization tools, conversational agents, NPCs in video games, and avatars for interactive applications.
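The abstract covers a full performance pipeline; the sketch below illustrates only one small ingredient it mentions, scheduling blinks and saccadic eye movements across an utterance. All timing constants are assumptions, not values from the paper.

```python
# Hedged sketch: only the blink/saccade scheduling idea, with assumed intervals.
import random

def _random_times(duration_s, min_gap, max_gap, label, rng):
    """Generate (time, label) events separated by random gaps."""
    events, t = [], rng.uniform(min_gap, max_gap)
    while t < duration_s:
        events.append((round(t, 2), label))
        t += rng.uniform(min_gap, max_gap)
    return events

def schedule_eye_events(duration_s: float, seed: int = 0) -> list:
    """Return sorted (time, event) pairs for blinks and saccades over an utterance."""
    rng = random.Random(seed)
    blinks = _random_times(duration_s, 2.0, 6.0, "blink", rng)     # assumed blink rate
    saccades = _random_times(duration_s, 0.5, 2.0, "saccade", rng)  # assumed saccade rate
    return sorted(blinks + saccades)

if __name__ == "__main__":
    for time_s, event in schedule_eye_events(8.0):
        print(f"{time_s:5.2f}s  {event}")
```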
A virtual-agent head driven by musical performance
2007
In this paper we present a system in which visual feedback of an acoustic source is given to the user through a graphical representation of an expressive virtual head. The system also incorporates the notion of expressivity of human behavior. We provide several mappings: on the input side, we have elaborated a mapping between values of acoustic cues and both emotion and expressivity parameters.
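The concrete mapping is not given in the abstract, so the sketch below is purely illustrative: the cue names, emotion categories, thresholds, and expressivity parameter names are ours, not the authors'. It only shows the general shape of a cue-to-emotion/expressivity mapping for a virtual head.

```python
# Assumption-laden sketch of an acoustic-cue -> emotion/expressivity mapping.

def map_cues_to_expression(mean_pitch_hz: float, rms_level: float) -> dict:
    """Very rough cue-to-emotion/expressivity mapping for a virtual head."""
    high_pitch = mean_pitch_hz > 300.0
    loud = rms_level > 0.6
    if loud and high_pitch:
        emotion = "joy"
    elif loud:
        emotion = "anger"
    elif high_pitch:
        emotion = "surprise"
    else:
        emotion = "sadness"
    expressivity = {
        "spatial_extent": rms_level,          # wider movements when louder
        "temporal_extent": 1.0 - rms_level,   # slower movements when quieter
        "power": rms_level,
    }
    return {"emotion": emotion, "expressivity": expressivity}

if __name__ == "__main__":
    print(map_cues_to_expression(mean_pitch_hz=350.0, rms_level=0.8))
```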
Realtime and Accurate Musical Control of Expression in Voice Synthesis
In this paper, we describe a full computer-based musical instrument allowing realtime synthesis of an expressive singing voice. The expression results from the continuous action of an interpreter through a gestural control interface. In this context, expressive features of the voice are discussed. New real-time implementations of a spectral model of glottal flow (CALM) are described. These interactive modules are then used to identify and quantify voice quality dimensions. Experiments are conducted in order to develop a first framework for voice quality control. The representation of the vocal tract and the control of several vocal tract movements are explained, and a solution is proposed and integrated. Finally, some typical controllers are connected to the system and expressivity is evaluated.
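The CALM-based modules themselves are not reproduced here; the following is only a hedged sketch of the general idea of steering voice quality dimensions from a single gestural controller axis. The parameter names (open quotient, spectral tilt, noise level) and the linear relations are our assumptions, not the paper's model.

```python
# Illustrative only: assumed mapping from one 'lax -> tense' controller axis
# to rough glottal-source style parameters.

def voice_quality_from_pressure(pressure: float) -> dict:
    """Map a 0..1 'lax -> tense' controller value to rough source parameters."""
    pressure = max(0.0, min(1.0, pressure))
    return {
        "open_quotient": 0.8 - 0.4 * pressure,        # tenser voice: shorter open phase
        "spectral_tilt_db": -18.0 + 12.0 * pressure,  # tenser voice: brighter source
        "noise_level": 0.3 * (1.0 - pressure),        # laxer voice: breathier
    }

if __name__ == "__main__":
    for p in (0.0, 0.5, 1.0):
        print(p, voice_quality_from_pressure(p))
```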
Interaction with a Virtual Character through Performance Based Animation
Lecture Notes in Computer Science, 2010
While performance based animation has been widely used in film and game production, we apply a similar technique for recreational/artistic performance purposes, allowing users to experience real-time, natural interaction with a virtual character. We present a real-time system that allows a user to interact with a synthetic virtual character animated based on the performance of a dancer who is hidden behind the scenes. The virtual character responds to the user as if it can "see" and "hear" the user. By presenting only the virtual character animated by our system to the observing audience within an immersive virtual environment, we create a natural interaction between the virtual world and the real world.
In a realtime interactive work for live performer and computer, the immanently human musical expression of the live performer is not easily equalled by algorithmically generated artificial expression in the computer sound. In cases when we expect the computer to display interactivity in the context of improvisation, pre-programmed emulations of expressivity in the computer are often no match for the charisma of an experienced improviser. This article proposes to achieve expressivity in computer sound by "stealing" expressivity from the live performer. By capturing, analyzing, and storing expressive characteristics found in the audio signal received from the acoustic instrument, the computer can use those same characteristic expressive sound gestures, either verbatim or with modifications. This can lead to a more balanced sense of interactivity in works for live performer and computer.
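The article describes capturing expressive characteristics from the performer's audio and re-using them, verbatim or modified. The sketch below illustrates one narrow reading of that idea under our own assumptions: capture a short loudness envelope from the live signal, store it, and later impose it (optionally scaled) on computer-generated material. Frame sizes and the scaling scheme are illustrative, not taken from the article.

```python
# Minimal "steal an expressive gesture" sketch: loudness envelope only.
import numpy as np

def capture_envelope(signal: np.ndarray, frame: int = 512) -> np.ndarray:
    """Frame-wise RMS envelope of the performer's audio."""
    n = len(signal) // frame
    frames = signal[: n * frame].reshape(n, frame)
    return np.sqrt((frames ** 2).mean(axis=1))

def apply_envelope(synth: np.ndarray, env: np.ndarray, amount: float = 1.0) -> np.ndarray:
    """Impose a stored envelope on computer-generated audio, optionally blended."""
    gain = np.interp(np.linspace(0, len(env) - 1, len(synth)),
                     np.arange(len(env)), env)
    return synth * (1.0 - amount + amount * gain)

if __name__ == "__main__":
    sr = 16000
    performer = np.sin(2 * np.pi * 220 * np.arange(sr) / sr) * np.linspace(0, 1, sr)
    env = capture_envelope(performer)
    computer = np.random.uniform(-0.5, 0.5, sr)   # stand-in for synthesized sound
    shaped = apply_envelope(computer, env)
    print(env.shape, shaped.shape)
```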
The soundtrack of your mind: mind music-adaptive audio for game characters
Proceedings of the 2006 ACM …, 2006
In this paper we describe an experimental application for individualized adaptive music for games. Expression of emotion is crucial for increasing believability. Since a fundamental aspect of music is its ability to express emotions, research in the area of believable agents can benefit from exploring how music can be used. In our experiment we use an affective model that can be integrated into player characters. Music is composed to reflect the affective processes of mood, emotion, and sentiment. The composition takes into account results of empirical studies regarding the influence of different factors of musical structure on perceived musical expression. The musical output from the test application varies in harmony and time signature along a matrix of moods, which change depending on what emotions are activated during game play.
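The abstract mentions varying harmony and time signature along a matrix of moods but does not give the table itself, so the sketch below is an assumed stand-in: which mode, meter, and tempo go with which mood cell is our choice, shown only to make the "mood matrix" idea concrete.

```python
# Assumed mood-matrix lookup; cell contents are illustrative, not the paper's.

MOOD_MATRIX = {
    # (valence, arousal) quadrant -> (mode, time signature, tempo bpm)
    ("positive", "high"): ("major", "4/4", 140),
    ("positive", "low"):  ("major", "3/4", 80),
    ("negative", "high"): ("minor", "7/8", 150),
    ("negative", "low"):  ("minor", "4/4", 60),
}

def music_for_mood(valence: float, arousal: float) -> tuple:
    """Pick harmony/meter settings for the character's current mood."""
    key = ("positive" if valence >= 0.0 else "negative",
           "high" if arousal >= 0.5 else "low")
    return MOOD_MATRIX[key]

if __name__ == "__main__":
    # A character whose mood turns tense and negative during play
    print(music_for_mood(valence=-0.4, arousal=0.9))   # ('minor', '7/8', 150)
```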
Realtime and accurate musical control of expression in singing synthesis
Journal on Multimodal User Interfaces, 2007
In this paper, we describe a full computer-based musical instrument allowing realtime synthesis of expressive singing voice. The expression results from the continuous action of an interpreter through a gestural control interface. In this context, expressive features of voice are discussed. New real-time implementations of a spectral model of glottal flow (CALM) are described. These interactive modules are then used to identify and quantify voice quality dimensions. Experiments are conducted in order to develop a first framework for voice quality control. The representation of vocal tract and the control of several vocal tract movements are explained and a solution is proposed and integrated. Finally, some typical controllers are connected to the system and expressivity is evaluated.
2007
It is generally admitted that music is a powerful carrier of emotions, and that audition can play an important role in enhancing the sensation of presence in Virtual Environments. In mixed-reality environments and interactive multimedia systems such as Massively Multiplayer Online Role-Playing Games (MMORPGs), improving the user's perception of immersion is crucial. Nonetheless, the sonification of those environments is often reduced to its simplest expression, namely a set of prerecorded sound tracks. Background music often relies on repetitive, predetermined and somewhat predictable musical material. Hence, there is a need for a sonification scheme that can generate context-sensitive, adaptive, rich and consistent music in real time. In this paper we introduce a framework for the sonification of the spatial behavior of multiple human and synthetic characters in a mixed-reality environment.
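The framework itself is not reproduced here; the sketch below only illustrates, under our own assumptions, what sonifying spatial behavior could mean in the simplest case: mapping one character's speed and proximity to others onto a few musical parameters. All mappings and ranges are invented for illustration.

```python
# Illustrative spatial-behavior sonification; parameter ranges are assumptions.
import math

def sonify_character(position, velocity, others) -> dict:
    """Map one character's spatial state to note density, register, and panning."""
    speed = math.hypot(*velocity)
    nearest = min((math.dist(position, o) for o in others), default=float("inf"))
    return {
        "notes_per_beat": min(8, 1 + int(speed * 2)),      # faster motion, denser music
        "midi_register": 48 + max(0, int(24 - nearest)),   # closer encounters, higher register
        "pan": max(-1.0, min(1.0, position[0] / 10.0)),    # stereo position from x coordinate
    }

if __name__ == "__main__":
    print(sonify_character(position=(2.0, 0.0), velocity=(1.5, 0.5),
                           others=[(3.0, 1.0), (-4.0, 2.0)]))
```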
Audio-driven emotional speech animation for interactive virtual characters
Computer Animation and Virtual Worlds
We present a procedural audio-driven speech animation method for interactive virtual characters. Given any audio with its respective speech transcript, we automatically generate lip-synchronized speech animation that could drive any three-dimensional virtual character. The realism of the animation is enhanced by studying the emotional features of the audio signal and its effect on mouth movements. We also propose a coarticulation model that takes into account various linguistic rules. The generated animation is configurable by the user by modifying the control parameters, such as viseme types, intensities, and coarticulation curves. We compare our approach against two lip-synchronized speech animation generators. Our results show that our method surpasses them in terms of user preference. Keywords: audio-driven speech animation, emotional speech, procedural animation.
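The authors' coarticulation model is not reproduced here; the sketch below only makes the general coarticulation idea concrete under our own assumptions: each viseme contributes a bell-shaped intensity curve around its key time, overlapping curves are blended per frame, and an emotion-derived gain scales the overall mouth amplitude. The curve shape, widths, and gain scheme are illustrative, not the paper's model.

```python
# Hedged coarticulation-blending sketch; curve shapes and timings are assumed.
import math

def viseme_weight(t: float, center: float, width: float) -> float:
    """Bell-shaped activation of one viseme around its key time (seconds)."""
    return math.exp(-((t - center) ** 2) / (2 * width ** 2))

def blend_visemes(t: float, visemes, emotion_gain: float = 1.0) -> dict:
    """Blend overlapping viseme curves; emotion_gain scales overall mouth amplitude."""
    raw = {name: viseme_weight(t, center, width) for name, center, width in visemes}
    total = sum(raw.values()) or 1.0
    return {name: emotion_gain * w / total for name, w in raw.items()}

if __name__ == "__main__":
    # Hypothetical viseme track for a short word: (name, center_s, width_s)
    word = [("PP", 0.05, 0.06), ("AA", 0.15, 0.08), ("TH", 0.28, 0.05)]
    for frame_t in (0.05, 0.15, 0.25):
        print(frame_t, blend_visemes(frame_t, word, emotion_gain=1.2))
```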