In A State: Live Emotion Detection and Visualisation for Music Performance
Related papers
Emotional Data in Music Performance: Two Audio Environments for the Emotional Imaging Composer
Proceedings of the 3rd International Conference on Music & Emotion, 2013
Technologies capable of automatically sensing and recognizing emotion are becoming increasingly prevalent in performance and compositional practice. Though these technologies are complex and diverse, we present a typology that draws on similarities with computational systems for expressive music performance. This typology provides a framework to present results from the development of two audio environments for the Emotional Imaging Composer, a commercial product for real-time arousal/valence recognition that uses signals from the autonomic nervous system. In the first environment, a spectral delay processor for live vocal performance uses the performer's emotional state to interpolate between subspaces of the arousal/valence plane. For the second, a sonification mapping communicates continuous arousal and valence measurements using tempo, loudness, decay, mode, and roughness. Both were informed by empirical research on musical emotion, though differences in desired output schemas manifested in different mapping strategies.
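As a rough illustration of the kind of sonification mapping the abstract describes, the sketch below maps continuous arousal/valence values to tempo, loudness, decay, mode, and roughness. The parameter ranges and the mapping itself are assumptions for illustration, not the Emotional Imaging Composer's actual implementation.

```python
# Hypothetical sketch of a continuous arousal/valence -> sonification mapping,
# loosely in the spirit of the second audio environment described above.
# Parameter ranges are illustrative, not those of the Emotional Imaging Composer.

def lerp(lo, hi, t):
    """Linear interpolation between lo and hi for t in [0, 1]."""
    return lo + (hi - lo) * t

def sonification_params(arousal, valence):
    """Map arousal and valence (each in [-1, 1]) to musical parameters."""
    a = (arousal + 1.0) / 2.0   # normalise to [0, 1]
    v = (valence + 1.0) / 2.0
    return {
        "tempo_bpm":   lerp(60, 160, a),        # higher arousal -> faster tempo
        "loudness_db": lerp(-24.0, -6.0, a),    # higher arousal -> louder
        "decay_s":     lerp(2.5, 0.3, a),       # higher arousal -> shorter decay
        "mode":        "major" if v >= 0.5 else "minor",  # positive valence -> major
        "roughness":   lerp(0.6, 0.0, v),       # negative valence -> rougher timbre
    }

print(sonification_params(arousal=0.8, valence=-0.4))
```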
Playing With Affect: Music performance with awareness of score and audience
An exquisite music performance can move an audience to tears of joy or of sorrow. The job of a good performer, then, is to convey not just the music's notated structure, but also its emotional metadata. In time, the performer will also learn to respond musically to the state of a live audience, matching its emotional ebb and flow to maintain interest for the entirety of the concert. This work discusses an Affective Performance framework in which compositions are marked up with emotional intent, or narrative. This mark-up directs the emotive adaptation of the symbolic score's reproduction, enhancing the realism of the computer music. Examining appraisal theory, a cognitive model of emotions, we highlight some key evidence underlying the principle of musical expectancy and its central role in maintaining a listener's musical interest. This knowledge plays a significant role in developing an adaptive music engine in which the traditionally static score is manipulated in real time. We also show how new Affective Computing technologies for reading the emotional state of users can be employed to further adapt the performance to account for the audience's emotional state. We end with an illustration of the framework, with examples in computer gaming engines and enhanced live performances for a distributed audience.
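As a minimal sketch of the kind of adaptation such a framework implies, the following code blends a score segment's marked emotional intent with a sensed audience state to adjust tempo and dynamics. The mark-up format, emotion labels, and scaling rules are hypothetical, not the framework's own scheme.

```python
# Hypothetical sketch: a score segment carries an intended emotion ("narrative"
# mark-up), and the engine nudges tempo/dynamics toward it, optionally blending
# in a sensed audience state. The rules below are illustrative only.

EMOTION_TARGETS = {              # assumed per-emotion rendering targets
    "joy":     {"tempo_scale": 1.15, "velocity_scale": 1.10},
    "sorrow":  {"tempo_scale": 0.85, "velocity_scale": 0.80},
    "tension": {"tempo_scale": 1.05, "velocity_scale": 1.20},
}

def adapt_segment(base_tempo_bpm, base_velocity, marked_emotion,
                  audience_emotion=None, audience_weight=0.3):
    """Blend the composer's marked emotion with the audience's sensed state."""
    target = EMOTION_TARGETS[marked_emotion]
    tempo_scale = target["tempo_scale"]
    vel_scale = target["velocity_scale"]
    if audience_emotion in EMOTION_TARGETS:
        other = EMOTION_TARGETS[audience_emotion]
        tempo_scale = (1 - audience_weight) * tempo_scale + audience_weight * other["tempo_scale"]
        vel_scale = (1 - audience_weight) * vel_scale + audience_weight * other["velocity_scale"]
    return base_tempo_bpm * tempo_scale, min(127, int(base_velocity * vel_scale))

print(adapt_segment(100, 80, "sorrow", audience_emotion="joy"))
```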
2013
A system is presented for detecting common gestures, musical intentions, and emotions of pianists in real-time using kinesthetic data retrieved by wireless motion sensors. The algorithm can detect six performer-intended emotions, such as cheerful, mournful, and vigorous, based solely on low-sample-rate motion sensor data. The algorithm can be trained in real-time or can work from previous training sets. Based on the classification, the system offers feedback by mapping the emotions to a color set and presenting them as a flowing emotional spectrum on the background of a piano roll. It also presents a small circular object floating in the emotion space of Hevner's adjective circle. This allows a performer to get real-time feedback on the emotional content conveyed in the performance. The system was trained and tested using the standard paradigm on a group of pianists, detected and displayed structures and emotions, and provided some insightful results an...
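A minimal sketch of the visual feedback described above: a classified emotion is mapped to a background color and to a point on a circle standing in for Hevner's adjective circle. The emotion labels, colors, and angular placements are assumptions for illustration, not the system's actual mapping.

```python
# Hypothetical sketch: map a classified performer emotion to (a) an RGB color
# for the piano-roll background and (b) a point on a circle standing in for
# Hevner's adjective circle. Class names, colors, and angles are illustrative.

import math

EMOTION_STYLE = {
    #  emotion      RGB color         angle on the circle (degrees)
    "cheerful":  ((255, 210,  60),   0.0),
    "vigorous":  ((220,  60,  60),  60.0),
    "mournful":  (( 60,  80, 180), 180.0),
    "tender":    ((180, 140, 220), 240.0),
}

def visualise(emotion, confidence):
    """Return a background color and the floating object's (x, y) position.

    `confidence` in [0, 1] pushes the object outward from the circle's centre.
    """
    color, angle_deg = EMOTION_STYLE[emotion]
    theta = math.radians(angle_deg)
    x, y = confidence * math.cos(theta), confidence * math.sin(theta)
    return color, (round(x, 3), round(y, 3))

print(visualise("mournful", confidence=0.7))
```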
Visualizing Emotion in Musical Performance Using a Virtual Character
2005
We describe an immersive music visualization application which enables interaction between a live musician and a responsive virtual character. The character reacts to live performance in such a way that it appears to be experiencing an emotional response to the music it ‘hears.’ We modify an existing tonal music encoding strategy in order to define how the character perceives and organizes musical information. We reference existing research correlating musical structures and composers’ emotional intention in order to simulate cognitive processes capable of inferring emotional meaning from music. The ANIMUS framework is used to define a synthetic character who visualizes its perception and cognition of musical input by exhibiting responsive behaviour expressed through animation.
EmoteControl: an interactive system for real-time control of emotional expression in music
Personal and Ubiquitous Computing, 2020
Several computer systems have been designed for music emotion research that aim to identify how different structural or expressive cues of music influence the emotions conveyed by the music. However, most systems either operate offline by pre-rendering different variations of the music, or operate in real-time but focus mostly on structural cues. We present a new interactive system called EmoteControl, which allows users to make changes to both structural and expressive cues (tempo, pitch, dynamics, articulation, brightness, and mode) of music in real-time. The purpose is to allow scholars to probe a variety of cues of emotional expression from non-expert participants who are unable to articulate or perform their expression of music in other ways. The benefits of an interactive system are particularly important here, as it offers a massive parameter space of emotion cues and levels that is challenging to explore exhaustively without a dynamic system. A brie...
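For a sense of what such a cue parameter space might look like, here is a sketch of six cues with illustrative ranges and a helper that clamps user input to them. The ranges and units are assumptions, not EmoteControl's published values.

```python
# Hypothetical sketch of the kind of cue parameter space an interactive system
# like EmoteControl exposes. Ranges and units are illustrative assumptions,
# not the published system's actual values.

CUES = {
    "tempo":        {"unit": "bpm",             "range": (40, 180)},
    "pitch":        {"unit": "semitones",       "range": (-12, 12)},   # transposition
    "dynamics":     {"unit": "dB",              "range": (-30, 0)},
    "articulation": {"unit": "legato..staccato","range": (0.0, 1.0)},
    "brightness":   {"unit": "cutoff Hz",       "range": (500, 8000)},
    "mode":         {"unit": "categorical",     "range": ("major", "minor")},
}

def validate_setting(cue, value):
    """Clamp or check a participant's slider value against the cue's range."""
    lo, hi = CUES[cue]["range"]
    if cue == "mode":
        return value if value in (lo, hi) else lo
    return max(lo, min(hi, value))

print(validate_setting("tempo", 250))    # -> 180 (clamped to the allowed range)
print(validate_setting("mode", "minor"))
```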
Emotion Based Music Playing Device
2020
ITM Universe, Vadodara, Gujarat, India. Abstract: Human expression plays a vital role in determining the current state and mood of an individual; it helps in extracting and understanding emotion based on various features of the face such as the eyes, cheeks, forehead, or even the curve of a smile. Music is an art form that soothes and calms the human brain and body.[1] Blending these two aspects together, our project deals with detecting the emotion of an individual through facial expression and playing music according to the detected mood, which will alleviate th...
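A minimal sketch of the detect-mood-then-play flow the abstract describes, with a stub in place of the facial-expression classifier. The emotion labels and playlist mapping are hypothetical.

```python
# Hypothetical sketch of the "detect facial emotion, then choose music" flow.
# The emotion labels, playlists, and the stub classifier are illustrative;
# a real system would use a trained facial-expression model.

import random

PLAYLISTS = {                     # assumed mood -> playlist mapping
    "happy":   ["upbeat_pop.mp3", "dance_mix.mp3"],
    "sad":     ["soft_piano.mp3", "acoustic_ballad.mp3"],
    "angry":   ["calm_ambient.mp3"],        # chosen to soothe, not to match
    "neutral": ["lofi_beats.mp3"],
}

def classify_expression(face_image) -> str:
    """Stub standing in for a trained classifier (e.g. a CNN over face crops)."""
    return random.choice(list(PLAYLISTS))   # placeholder only

def pick_track(face_image):
    mood = classify_expression(face_image)
    return mood, random.choice(PLAYLISTS[mood])

print(pick_track(face_image=None))
```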
Gesture and Emotion in Interactive Music: Artistic and Technological Challenges
This dissertation presents a new and expanded context for interactive music based on Moore's model for computer music (Moore 1990) and contextualises its findings using Lesaffre's taxonomy for musical feature extraction and analysis (Lesaffre et al. 2003). In doing so, the dissertation examines music as an expressive art form where musically significant data is present not only in the audio signal but also in human gestures and in physiological data. The dissertation shows the model's foundation in human perception of music as a performed art, and points to the relevance and feasibility of including expression and emotion as a high-level signal processing means for bridging man and machine. The resulting model is multi-level (physical, sensorial, perceptual, formal, expressive) and multi-modal (sound, human gesture, physiological), which makes it applicable to purely musical contexts as well as intermodal contexts where music is combined with visual and/or physiological data. The model implies evaluating an interactive music system as a musical instrument design. Several properties are examined during the course of the dissertation, and models based on acoustic music instruments have been avoided due to the expanded feature set of interactive music systems. A narrowing down of the properties is attempted in the dissertation's conclusion, together with a preliminary model circumscription. In particular, it is pointed out that high-level features of real-time analysis, data storage and processing, and synthesis make the system a musical instrument, and that the capability of real-time data storage and processing distinguishes the digital system as an unprecedented instrument, qualitatively different from all previous acoustic music instruments. It is considered that a digital system's particular form of sound synthesis only qualifies it as belonging to a category parallel to the acoustic instrument categories. The model is the result of the author's experiences with practical work on interactive systems developed between 2001 and 2006 for a body of commissioned works. The systems and their underlying procedures were conceived and developed to address needs inherent to the artistic ambitions of each work, and have all been thoroughly tested in many performances. The papers forming part of the dissertation describe the artistic and technological problems and their solutions. The solutions are readily expandable to similar problems in other contexts, and they all relate to general issues of their particular applicative area.
EMuJoy: Software for continuous measurement of perceived emotions in music
Behavior Research Methods, 2007
Since Kate Hevner's early investigations on the perception of emotions while listening to music (Hevner, 1936), there have been many different approaches to the measurement of emotions. For example, Gabrielsson and Lindström Wik (2003) investigated musical expression by describing the verbal reports given by subjects. Most researchers have used distinct adjective scales for the rating of perceived or expressed emotions. Schlosberg (1954) found that such scales can be mapped onto two or three dimensions. Using these methods, self-reported data were collected at distinct and mostly non-equidistant points in time, which were chosen intuitively by the researcher. The experience of emotion in music and films unfolds over time: this has led to increasing interest in the continuous measurement of perceived emotions, made possible by technological developments since the 1990s. Subjects' responses can now be recorded in real time and synchronized to the stimuli. Schubert (1996, 2001/2002, 2004a, 2004b) was one of the first researchers to develop software that focuses on the perception of the temporal dynamics of emotion. However, up to now, methods for the recording of continuous responses have been based on researcher-developed software solutions, and there has been no agreement on the technical means, interfaces, or methods to use. The main aim of this contribution to the ongoing discussion is to propose standardized methods for the continuous measurement of self-reported emotions. The authors have designed new software, EMuJoy, for this purpose. It is freeware and can be distributed and used for research. Before describing our integrated software solution, we address four questions: (1) the dimensionality of the emotion space, (2) technical aspects of data recording, (3) construction of the interface, and (4) the use of multiple stimulus modalities.
Dimensionality of the Emotion Space
Utilizing similarity ratings of affect-descriptive terms, Russell (1980) demonstrated in his circumplex model of affect that such terms can be mapped onto a two-dimensional space. He analyzed the similarity matrix, resulting in a two-dimensional space containing the affective terms. This space was then scaled to fit the dimensions pleasure-displeasure (i.e., valence) and degree of arousal. Some researchers also use a third dimension, namely dominance (Russell & Mehrabian, 1977). However, arousal and valence appear to be sufficient to explain most of the variance of affective scales (Lang, 1995). Moreover, the use of a computer monitor restricts the software to two dimensions, of which valence and arousal appear to be the most important and universal (Russell, 1983). The use of other dimensions, such as pleasantness and liking, has recently been discussed by Ritossa and Rickard (2004). With respect to emotions in music, Schubert used Russell's model of two basic emotional dimensions in his Emotionspace Lab (2DES; Schubert, 1996). He demonstrated, as Russell did, the validity and reliability of arousal and valence with linguistic expressions, and he also had subjects estimate the emotions expressed in musical pieces. From this investigation, he found that estimates of emotion in music are as valid and reliable as those for terms or adjectives. A similar two-dimensional emotion space has also been applied to the modality of vision, namely in the International...
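A minimal sketch of continuous two-dimensional self-report logging of the kind EMuJoy standardises: samples of valence and arousal are time-stamped relative to stimulus onset and written to disk. The sampling rate and record layout are assumptions, not EMuJoy's actual format.

```python
# Hypothetical sketch of continuous valence/arousal self-report logging,
# in the spirit of tools such as EMuJoy. Sampling rate and record layout
# are illustrative assumptions, not EMuJoy's actual format.

import csv
import time

SAMPLE_PERIOD_S = 0.05            # 20 Hz, assumed

def record_ratings(read_cursor, duration_s, out_path="ratings.csv"):
    """Log (time, valence, arousal) samples synchronised to stimulus onset."""
    t0 = time.monotonic()
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["t_since_onset_s", "valence", "arousal"])
        while (elapsed := time.monotonic() - t0) < duration_s:
            valence, arousal = read_cursor()        # both in [-1, 1]
            writer.writerow([round(elapsed, 3), valence, arousal])
            time.sleep(SAMPLE_PERIOD_S)

# Example with a dummy input device that always reports the centre of the space:
record_ratings(lambda: (0.0, 0.0), duration_s=0.2)
```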
Emotion rendering in music: range and characteristic values of seven musical variables
Cortex, 2011
Many studies on the synthesis of emotional expression in music performance have focused on the effect of individual performance variables on perceived emotional quality by systematically varying those variables. However, most of the studies have used a predetermined small number of levels for each variable, and the selection of these levels has often been arbitrary. The main aim of this research work is to improve upon existing methodologies by taking a synthesis approach. In a production experiment, 20 performers were asked to manipulate the values of 7 musical variables simultaneously (tempo, sound level, articulation, phrasing, register, timbre, and attack speed) to communicate 5 different emotional expressions (neutral, happy, scary, peaceful, sad) for each of 4 scores. The scores were compositions communicating four different emotions (happiness, sadness, fear, calmness). Emotional expressions and music scores were presented in combination and in random order to each performer, for a total of 5 × 4 = 20 stimuli. The experiment allowed for a systematic investigation of the interaction between the emotion of each score and the emotions that performers intended to express. A two-way repeated-measures analysis of variance (ANOVA), with factors emotion and score, was conducted on the participants' values separately for each of the seven musical factors. There are two main results. The first is that the musical variables were manipulated in the same direction as reported in previous research on emotionally expressive music performance. The second is the identification, for each of the five emotions, of the mean values and ranges of five musical variables: tempo, sound level, articulation, register, and instrument. These values turned out to be independent of the particular score and its emotion. The results presented in this study therefore allow for both the design and control of emotionally expressive computerized musical stimuli that are more ecologically valid than stimuli without performance variations.
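A small sketch of the factorial stimulus design described in the abstract: the 5 intended emotions crossed with the 4 scores give 20 combinations, presented in random order to each performer. The labels mirror the abstract; the code is an illustration, not the study's software.

```python
# Hypothetical sketch of the factorial stimulus design described above:
# 5 intended emotions x 4 scores = 20 combinations, presented in random
# order to each performer. Labels mirror the abstract; the code is illustrative.

import itertools
import random

EMOTIONS = ["neutral", "happy", "scary", "peaceful", "sad"]
SCORES   = ["happy_score", "sad_score", "fear_score", "calm_score"]

def stimulus_order(performer_seed):
    """Return the 20 emotion-by-score combinations in a randomised order."""
    combos = list(itertools.product(EMOTIONS, SCORES))   # 5 x 4 = 20
    rng = random.Random(performer_seed)
    rng.shuffle(combos)
    return combos

order = stimulus_order(performer_seed=1)
print(len(order), order[:3])
```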