From expressive gesture to sound: the development of an embodied mapping trajectory inside a musical interface
Related papers
From expressive gesture to sound
Journal on Multimodal User Interfaces, 2009
This paper contributes to the development of a multimodal musical tool that extends the natural action range of the human body to communicate expressiveness in the virtual music domain. The core of this musical tool consists of a low-cost, highly functional computational model developed on the Max/MSP platform that (1) captures real-time movement of the human body in a 3D coordinate system on the basis of the orientation output of any type of inertial sensor system that is OSC-compatible, (2) extracts low-level movement features that specify the amount of contraction/expansion as a measure of how a subject uses the surrounding space, (3) recognizes these movement features as expressive gestures, and (4) creates a mapping trajectory between these expressive gestures and the sound synthesis process of adding harmonically related voices to an originally monophonic voice. A user-oriented and intuitive mapping strategy was thereby of central importance. This was achieved by conducting an empirical experiment based on theoretical concepts from the embodied music cognition paradigm. Based on empirical evidence, this paper proposes a mapping trajectory that facilitates the interaction between a musician and his instrument, the artistic collaboration between (multimedia) artists, and the communication of expressiveness in a social, musical context.
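As a hedged illustration of step (2), the sketch below computes a contraction/expansion index from one frame of 3D joint positions using the volume of their convex hull. The function name, the hull-based formulation, and the calibration constant are assumptions for illustration, not the authors' actual Max/MSP implementation.

```python
# Hypothetical sketch of a contraction/expansion feature: the volume spanned
# by the tracked joints, normalized by a calibrated maximum volume (e.g.
# measured once in a fully extended pose). Not the paper's exact formulation.
import numpy as np
from scipy.spatial import ConvexHull

def contraction_index(joints: np.ndarray, max_volume: float) -> float:
    """joints: (N, 3) array of 3D joint positions for one frame.
    Returns ~0 for a fully contracted pose, ~1 for a fully expanded one."""
    volume = ConvexHull(joints).volume      # space enclosed by the posture
    return min(volume / max_volume, 1.0)

# Example with synthetic data: pulling all joints toward the centroid
# (a "contracted" pose) lowers the index.
rng = np.random.default_rng(0)
pose = rng.uniform(-1.0, 1.0, size=(15, 3))                  # expanded pose
contracted = pose.mean(axis=0) + 0.3 * (pose - pose.mean(axis=0))
max_vol = ConvexHull(pose).volume                            # calibration
print(contraction_index(pose, max_vol))         # -> 1.0
print(contraction_index(contracted, max_vol))   # -> ~0.027 (scales as 0.3**3)
```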
This work describes a new approach to gesture mapping in a performance with a traditional musical instrument and live electronics, inspired by theories of embodied music cognition (EMC) and musical gestures. Considerations on EMC and on how gestures affect the experience of music inform different mapping strategies. Our intent is to enhance the expressiveness and the liveness of performance by tracking gestures via a multimodal motion capture system and using motion data to control several features of the music. We then describe an application of this approach to a performance with electric guitar and live electronics, focusing both on aspects of meaning formation and on motion capture.
Towards An Affective Gesture Interface For Expressive Music Performance
2008
This paper discusses the use of 'Pogany', an affective anthropomorphic interface, for expressive music performance. For this purpose the interface is equipped with a module for gesture analysis: a) at a direct level, in order to conceptualize measures capable of driving continuous musical parameters, and b) at an indirect level, in order to capture high-level information arising from 'meaningful' gestures. The real-time recognition module for hand gestures and postures is based on Hidden Markov Models (HMMs). After an overview of the interface, we analyze the techniques used for gesture recognition and the decisions taken for mapping gestures to sound synthesis parameters. To evaluate the system as an interface for musical expression we conducted an experiment with human subjects. The results of this experiment are presented and analyzed.
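A minimal sketch of this style of HMM-based recognizer, assuming the hmmlearn package and the common per-class approach (one Gaussian HMM per gesture, classification by maximum log-likelihood); the feature layout and class set are hypothetical, not the interface's actual module.

```python
# Per-class HMM gesture recognition sketch (hmmlearn). Each gesture class
# gets its own Gaussian HMM; an unknown sequence is assigned to the class
# whose model explains it best.
import numpy as np
from hmmlearn import hmm

def train_gesture_models(training_data):
    """training_data: {gesture_name: list of (T_i, n_features) arrays}."""
    models = {}
    for name, sequences in training_data.items():
        X = np.concatenate(sequences)            # stack sequences row-wise
        lengths = [len(s) for s in sequences]    # per-sequence lengths
        model = hmm.GaussianHMM(n_components=4, covariance_type="diag",
                                n_iter=25, random_state=0)
        model.fit(X, lengths)
        models[name] = model
    return models

def classify(models, sequence):
    """Return the gesture whose HMM gives the sequence the highest
    log-likelihood; sequence is a (T, n_features) array."""
    return max(models, key=lambda name: models[name].score(sequence))
```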
Gesture and Emotion in Interactive Music: Artistic and Technological Challenges
This dissertation presents a new and expanded context for interactive music based on Moore's model for computer music (Moore 1990) and contextualises its findings using Lesaffre's taxonomy for musical feature extraction and analysis (Lesaffre et al. 2003). In doing so, the dissertation examines music as an expressive art form in which musically significant data is present not only in the audio signal but also in human gestures and in physiological data. The dissertation shows the model's foundation in human perception of music as a performed art, and points to the relevance and feasibility of including expression and emotion as a high-level signal processing means for bridging man and machine. The resulting model is multi-level (physical, sensorial, perceptual, formal, expressive) and multi-modal (sound, human gesture, physiological), which makes it applicable to purely musical contexts as well as intermodal contexts where music is combined with visual and/or physiological data. The model implies evaluating an interactive music system as a musical instrument design. Several properties are examined during the course of the dissertation, and models based on acoustic musical instruments have been avoided due to the expanded feature set of interactive music systems. A narrowing down of the properties is attempted in the dissertation's conclusion together with a preliminary model circumscription. In particular, it is pointed out that high-level features of real-time analysis, data storage and processing, and synthesis make the system a musical instrument, and that the capability of real-time data storage and processing distinguishes the digital system as an unprecedented instrument, qualitatively different from all previous acoustic musical instruments. It is considered that a digital system's particular form of sound synthesis only qualifies it as belonging to a category parallel to the acoustic instrument categories. The model is the result of the author's experiences with practical work on interactive systems developed between 2001 and 2006 for a body of commissioned works. The systems and their underlying procedures were conceived and developed to address needs inherent to the artistic ambitions of each work, and have all been thoroughly tested in many performances. The papers forming part of the dissertation describe the artistic and technological problems and their solutions. The solutions are readily expandable to similar problems in other contexts, and they all relate to general issues of their particular applicative area.
Body and Space: Combining Modalities for Musical Expression
Work in Progress accepted at the Conference on Tangible, Embedded and Embodied Interaction (TEI2013)
This paper presents work in progress on applying a multimodal interaction (MMI) approach to studying interactive music performance. We report on a study where an existing musical work was used to provide a gesture vocabulary. The biophysical sensing already present in the work served as one input modality and was augmented with several other input sensing modalities not in the original piece. The bioacoustics-based sensor, accelerometer sensors, and full-body motion capture system generated data that were recorded into a multimodal database. We plotted the data from the different modalities and offer observations based on visual analysis of the collected data. Our preliminary results show that the information from the different modalities is complementary, and we noted three types of complementarity: synchronicity, coupling, and correlation.
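To make the third relationship concrete, below is a small sketch of how the "correlation" form of complementarity between two modalities might be quantified, assuming numpy and two already-extracted 1-D streams; the stream names and rates are hypothetical.

```python
# Compare two sensor streams (e.g. accelerometer magnitude vs. mocap marker
# speed): resample to a common length, then report the zero-lag Pearson
# correlation and the lag at which cross-correlation peaks.
import numpy as np

def resample(signal, n_target):
    """Linearly resample a 1-D stream to n_target samples."""
    x_old = np.linspace(0.0, 1.0, len(signal))
    x_new = np.linspace(0.0, 1.0, n_target)
    return np.interp(x_new, x_old, signal)

def cross_modal_correlation(accel_mag, mocap_speed):
    n = min(len(accel_mag), len(mocap_speed))
    a = resample(accel_mag, n)
    m = resample(mocap_speed, n)
    pearson = np.corrcoef(a, m)[0, 1]            # zero-lag correlation
    a0, m0 = a - a.mean(), m - m.mean()
    xcorr = np.correlate(a0, m0, mode="full")    # all lags, length 2n-1
    lag = int(np.argmax(xcorr)) - (n - 1)        # samples by which a leads m
    return pearson, lag
```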
Kinematics-energy space for expressive interaction in music performance
A musical interpretation is often the result of a wide range of requirements on expressive rendering and technical skill. Aspects indicated by the term expressive intention, which refer to the communication of moods and feelings, are considered increasingly important in performer-computer interaction during music performance. Recent studies demonstrated that by appropriately modifying the systematic deviations introduced by the musician, it is possible to convey different expressive content such as expressive intentions and emotions. We present an abstract space, usable as a user interface, that represents, at an abstract level, the expressive content and the interaction between the performer and the expressive engine. This space was derived by multidimensional analysis of perceptual tests on various professionally performed pieces ranging from Western classical to popular music, and it reflects how the musical performances are organized in the listener's mind.
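The abstract leaves the derivation at "multidimensional analysis of perceptual tests"; a minimal sketch of one standard way to obtain such a space, multidimensional scaling of a listener-rated dissimilarity matrix with scikit-learn, is shown below. The matrix values and the number of performances are invented for illustration.

```python
# Embed performances in a 2-D space from pairwise perceptual dissimilarities
# (e.g. averaged listener ratings); symmetric matrix with zero diagonal.
import numpy as np
from sklearn.manifold import MDS

D = np.array([
    [0.0, 0.9, 0.4, 0.7],
    [0.9, 0.0, 0.8, 0.3],
    [0.4, 0.8, 0.0, 0.6],
    [0.7, 0.3, 0.6, 0.0],
])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)   # one 2-D point per performance
print(coords)
```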
2014
This paper describes the implementation of gestural mapping strategies for performance with a traditional musical instrument and electronics. The approach adopted is informed by embodied music cognition and functional categories of musical gestures. Within this framework, gestures are not seen as means of control subordinated to the resulting musical sounds but rather as significant elements contributing to the formation of musical meaning, similarly to auditory features. Moreover, the ecological knowledge of the gestural repertoire of the instrument is taken into account, as it defines the action-sound relationships between the instrument and the performer and contributes to forming expectations in the listeners. Subsequently, mapping strategies from a case study of electric guitar performance are illustrated, describing what motivated the choice of a multimodal motion capture system and how different solutions have been adopted considering both gestural meaning formation and technical constraints.
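A hedged sketch of such a mapping layer, assuming the python-osc package and a receiving live-electronics patch (e.g. Max/MSP or Pure Data) listening on a local UDP port; the OSC address, port, feature name, and scaling ranges are illustrative assumptions, not the case study's actual routing.

```python
# Scale a motion feature into a parameter range and send it as an OSC
# message to a live-electronics patch.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)   # patch's OSC input port

def map_feature(value, in_min, in_max, out_min=0.0, out_max=1.0):
    """Linearly rescale a motion feature and clamp it to the output range."""
    t = (value - in_min) / (in_max - in_min)
    return out_min + (out_max - out_min) * min(max(t, 0.0), 1.0)

def on_motion_frame(quantity_of_motion):
    # Hypothetical mapping: more overall motion opens the delay mix.
    client.send_message("/fx/delay/mix",
                        map_feature(quantity_of_motion, 0.0, 5.0))
```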
A virtual head driven by music expressivity
Audio, Speech, and …, 2007
In this paper we present a system that visualizes the expressive quality of a music performance using a virtual head. We provide a mapping through several parameter spaces: on the input side, we have elaborated a mapping between values of acoustic cues and emotion as well as expressivity parameters; on the output side, we propose a mapping between these parameters and the behaviors of the virtual head. This mapping ensures coherence between the acoustic source and the animation of the virtual head. After presenting some background information on the behavior expressivity of humans, we introduce our model of expressivity. We explain how we elaborated the mapping between the acoustic and the behavior cues. Then we describe the implementation of a working system that controls the behavior of a human-like head that varies depending on the emotional and acoustic characteristics of the musical execution. Finally, we present the tests we conducted to validate our mapping between the emotive content of the music performance and the expressivity parameters.
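As a toy illustration of the two-stage mapping described (acoustic cues to emotion and expressivity parameters, then to animation behaviors), the sketch below uses invented cue names, weights, and parameter names; the authors' calibrated mapping is not reproduced here.

```python
# Two-stage mapping sketch: acoustic cues -> arousal/valence estimate ->
# expressivity parameters for a virtual-head animation. All numbers and
# names are illustrative assumptions.
def clamp01(v):
    return min(max(v, 0.0), 1.0)

def acoustic_to_emotion(tempo_bpm, sound_level_db, legato):
    """Crude arousal/valence estimate in [0, 1] from three acoustic cues."""
    arousal = clamp01(0.5 * (tempo_bpm - 60) / 120
                      + 0.5 * (sound_level_db - 50) / 40)
    valence = clamp01(legato)   # smoother articulation read as more positive
    return arousal, valence

def emotion_to_expressivity(arousal, valence):
    """Map the estimated emotion onto head-animation expressivity parameters."""
    return {
        "spatial_extent": arousal,            # wider motion when aroused
        "fluidity": valence,                  # smoother motion when positive
        "power": clamp01(0.7 * arousal + 0.3 * (1.0 - valence)),
    }

# Example: a fast, loud, detached performance yields energetic animation.
print(emotion_to_expressivity(*acoustic_to_emotion(160, 85, 0.2)))
```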
Interactive sonification of emotionally expressive gestures by means of music performance
Proc. of ISon, 2010
This study presents a procedure for interactive sonification of emotionally expressive hand and arm gestures by affecting a musical performance in real time. Three different mappings are described that translate accelerometer data to a set of parameters controlling the expressiveness of the performance by affecting tempo, dynamics and articulation. The first two mappings, tested with a number of subjects during a public event, are relatively simple and were designed by the authors using a top-down approach. According to user feedback, they were not intuitive and limited the usability of the software. A bottom-up approach was taken for the third mapping: a classification tree was trained with features extracted from gesture data from a number of test subjects who were asked to express different emotions with their hand movements. A second set of data, where subjects were asked to make a gesture corresponding to a piece of expressive music they had just listened to, was used to validate the model. The results were not particularly accurate, but reflected the small differences in the data and the ratings given by the subjects to the different performances they listened to.
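A brief sketch of the bottom-up approach described above: a classification tree trained on features extracted from accelerometer gesture data, using scikit-learn. The feature set, emotion labels, and values are hypothetical stand-ins for the study's data.

```python
# Train a decision tree on per-gesture accelerometer features, then
# classify new gestures by predicted emotion.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Each row: features from one gesture recording, e.g.
# [mean |acc|, std |acc|, peak |acc|, duration in seconds]
X_train = np.array([
    [0.4, 0.10, 1.0, 2.5],   # calm, slow gesture
    [2.1, 0.90, 5.2, 0.8],   # fast, jerky gesture
    [0.6, 0.20, 1.4, 2.0],
    [1.9, 1.10, 4.8, 0.9],
])
y_train = ["sad", "angry", "sad", "angry"]

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X_train, y_train)

# Validation on gestures made after listening to expressive performances:
print(clf.predict([[0.5, 0.15, 1.2, 2.2]]))   # -> ['sad'] on this toy data
```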
Role of Movement and Gesture in Communicating Music Expressiveness to an Audience
Convergences - Journal of Research and Arts Education
Musical performative gestures are recognised by the majority of theoreticians as a critical factor in a musical performance. Gestures can be considered operating features of a person's perception-action system, which presupposes a significance of meaning that involves more than just physical movement. Movements can be subdivided into specific patterns and conceptualised. Dynamics are one of the most relevant expressive elements in music and are strongly related to physical musical action, i.e. sound-producing gestures. Used effectively, dynamics allow performers to sustain narrative pertinence in a musical performance, communicating for example a particular emotional state or feeling. This research focused on solo percussion contemporary music performance and studied an audience divided between "visual" and "non-visual" listeners. From this perspective, observation of percussionists' playing manner and its audience provides the researcher an opportunity to understand dynamic...