I-SOUNDS: Emotion-Based Music Generation for Virtual Environments
Related papers
Live Soundscape Composition Based on Synthetic Emotions
IEEE Multimedia, 2003
We conceived the Ada: Intelligent Space exhibit as an artificial organism, integrating a large number of sensory modalities, and let it interact with visitors using a multitude of effector systems. Ada used a language of sound and light to communicate its moods, emotions, and behaviors. Here we describe the mechanisms behind Ada's sound communication, its real-time performance, and its interpretation by human subjects.
Malakai: Music That Adapts to the Shape of Emotions
ArXiv, 2021
This is a strange and exciting time for computer-generated music. The idea of computer-generated musical composition has captured the public imagination since as far back as Kurzweil’s demonstration of a pattern-based composer on live TV in 1965[1]. Since then, improvements in technology and composition tools have created whole musical genres based around computer-generated compositions, and have resulted in a vast library of algorithmic compositional techniques. Furthermore, in the past few decades, interactive media such as games and virtual reality have created a demand for music that can adapt to dynamic circumstances presented within the interactive medium. Finally, the advent of ML music models such as Google Magenta’s MusicVAE[6] now allows us to extract and replicate compositional features from otherwise complex datasets. These models allow computational composers to parameterize abstract variables such as style and mood.
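As a concrete illustration of what such latent-variable models enable, the sketch below interpolates between two latent vectors standing in for a "calm" and a "tense" musical clip. It uses plain NumPy rather than the Magenta API; `z_calm` and `z_tense` are random stand-ins for encoded phrases, and spherical interpolation (slerp) is a common choice for traversing such latent spaces.

```python
# A minimal sketch of latent-space interpolation between two "moods",
# in the spirit of MusicVAE's interpolation; plain NumPy, no Magenta required.
import numpy as np

def slerp(z0: np.ndarray, z1: np.ndarray, t: float) -> np.ndarray:
    """Spherical interpolation between two latent vectors."""
    omega = np.arccos(np.clip(
        np.dot(z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return z0  # vectors (nearly) parallel: nothing to interpolate
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

rng = np.random.default_rng(0)
z_calm = rng.normal(size=256)   # stand-in for an encoded "calm" phrase
z_tense = rng.normal(size=256)  # stand-in for an encoded "tense" phrase

# Sweep the mood parameter; each z would be decoded to a short musical phrase.
path = [slerp(z_calm, z_tense, t) for t in np.linspace(0.0, 1.0, 5)]
```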
2012
In this paper, we examine the essential role of instrument timbre in communicating and eliciting emotion in music, and consider how this knowledge can inform the design process of sound synthesis towards achieving maximum expression. In the composition and performance of acoustic music, timbres are carefully chosen to capture and communicate the emotional undercurrent of the musical material. Instrumentalists master a diverse palette of tone-colours to increase their versatility in interpreting meaning and emotion in their music, and composers painstakingly consider instrument choice in search of the best voice to express their ideas. We believe that sound in computer music, with its infinite timbral choices available through digital sound synthesis, requires at least equal (if not more) attention and consideration towards emotional quality and depth. As we undergo a process of designing original sounds that will be included in Vuzik, our child-friendly digital music composition int...
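To make the design space concrete, here is a minimal additive-synthesis sketch with a single timbral "brightness" control that tilts energy toward upper partials. This illustrates parameterized timbre in digital synthesis generally; it is not Vuzik's actual sound design.

```python
# A sketch of one timbral dimension under synthesis control: a "brightness"
# parameter that shapes the amplitude rolloff of the harmonic series.
import numpy as np

def additive_tone(f0: float, brightness: float, sr: int = 44100,
                  dur: float = 1.0, n_partials: int = 12) -> np.ndarray:
    """brightness in [0, 1]: 0 = dark (steep rolloff), 1 = bright (shallow)."""
    t = np.linspace(0.0, dur, int(sr * dur), endpoint=False)
    rolloff = 3.0 - 2.5 * brightness  # per-partial amplitude exponent (assumed range)
    tone = sum((1.0 / k**rolloff) * np.sin(2 * np.pi * k * f0 * t)
               for k in range(1, n_partials + 1))
    return tone / np.max(np.abs(tone))  # normalise to [-1, 1]

dark = additive_tone(220.0, brightness=0.1)    # muted, mellow spectrum
bright = additive_tone(220.0, brightness=0.9)  # brilliant, edgy spectrum
```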
The Music and Emotion Driven Game Engine: Ideas and Games
In this paper we describe the ideas behind the Music and Emotion Driven Game Engine (M-EDGE), currently under development at the School of Interactive and Digital Media in Nanyang Polytechnic and fully supported by the Singapore National Research Foundation. The paper explains a possible method for analyzing the emotional content of music in real time and how it can be applied to different game ideas to help define a new interactive experience and music-based gameplay in videogames.
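The abstract does not spell out M-EDGE's actual analysis method, so the following is only a hedged sketch of one plausible approach: deriving a rough arousal estimate from tempo and energy, and a crude valence proxy from spectral brightness, using the librosa audio library. The normalisation ranges are ad-hoc assumptions, not values from the paper.

```python
# A hedged sketch of one *possible* real-time emotional read-out from audio:
# tempo and RMS energy -> arousal; spectral brightness -> crude valence proxy.
import numpy as np
import librosa

sr = 22050
# Synthetic click track at 120 BPM so the example runs without an audio file.
y = librosa.clicks(times=np.arange(0.0, 8.0, 0.5), sr=sr)

tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
tempo = float(np.atleast_1d(tempo)[0])
energy = float(librosa.feature.rms(y=y).mean())
brightness = float(librosa.feature.spectral_centroid(y=y, sr=sr).mean())

# Normalise each feature into [0, 1] with assumed ranges (not M-EDGE's).
arousal = 0.5 * min(tempo / 180.0, 1.0) + 0.5 * min(energy / 0.2, 1.0)
valence = min(brightness / 4000.0, 1.0)
print(f"arousal ~ {arousal:.2f}, valence ~ {valence:.2f}")
```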
A Perceptual and Affective Evaluation of an Affectively-Driven Engine for Video Game Soundtracking
We report on a player evaluation of a pilot system for dynamic video game soundtrack generation. The system being evaluated generates music using an AI-based algorithmic composition technique to create a score in real time, in response to a continuously varying emotional trajectory dictated by gameplay cues. After a section of gameplay, players rated the system on a Likert scale according to emotional congruence with the narrative, and also according to their perceived immersion in the gameplay. The generated system showed a statistically significant and consistent improvement in ratings for emotional congruence, yet a decrease in perceived immersion, which might be attributed to the marked difference in instrumentation between the generated music, voiced by a solo piano timbre, and the original, fully orchestrated soundtrack. Finally, players rated selected stimuli from the generated soundtrack dataset on a two-dimensional model reflecting perceived valence and arousal. These ratings were compared to the intended emotional descriptor in the metadata accompanying specific gameplay events. Participant responses suggested strong agreement with the affective correlates, but also a significant amount of inter-participant variability. Individual calibration or further adjustment of the musical feature set is therefore suggested as a useful avenue for further work.
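A minimal sketch of the kind of comparison described above: checking whether mean participant ratings fall in the valence/arousal quadrant named by a stimulus's intended descriptor, and reporting inter-participant spread. Stimulus names, rating values, and the quadrant mapping are all illustrative, not the paper's data or analysis code.

```python
# Sketch: compare mean (valence, arousal) ratings against intended descriptors.
import numpy as np

# (valence, arousal) ratings per participant, rescaled to [-1, 1].
ratings = {
    "battle_theme":  np.array([(-0.2, 0.8), (0.1, 0.9), (-0.4, 0.7)]),
    "village_theme": np.array([(0.6, -0.3), (0.5, -0.1), (0.7, -0.4)]),
}
intended = {"battle_theme": "tense", "village_theme": "calm"}  # gameplay metadata
quadrant_of = {"tense": (-1, 1), "calm": (1, -1)}  # sign pattern (valence, arousal)

for stim, r in ratings.items():
    mean_v, mean_a = r.mean(axis=0)
    want_v, want_a = quadrant_of[intended[stim]]
    agree = np.sign(mean_v) == want_v and np.sign(mean_a) == want_a
    spread = r.std(axis=0)  # inter-participant variability per dimension
    print(f"{stim}: mean=({mean_v:+.2f},{mean_a:+.2f}) agree={agree} spread={spread}")
```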
The soundtrack of your mind: mind music-adaptive audio for game characters
Proceedings of the 2006 ACM …, 2006
In this paper we describe an experimental application for individualized adaptive music in games. Expression of emotion is crucial for increasing believability, and since a fundamental aspect of music is its ability to express emotions, research into believable agents can benefit from exploring how music can be used. In our experiment we use an affective model that can be integrated into player characters. Music is composed to reflect the affective processes of mood, emotion, and sentiment. The composition takes into account results of empirical studies on how different factors in musical structure influence perceived musical expression. The musical output from the test application varies in harmony and time signature along a matrix of moods, which change depending on what emotions are activated during gameplay.
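A sketch of what such a mood matrix might look like in code, with each mood cell selecting a harmonic mode and a time signature. The specific cell values are illustrative assumptions, not the authors' actual mappings.

```python
# Sketch: a mood matrix mapping (valence, energy) cells to musical parameters.
from dataclasses import dataclass

@dataclass
class MusicParams:
    mode: str            # harmonic colour
    time_signature: str  # metric feel

MOOD_MATRIX = {
    ("positive", "high"): MusicParams(mode="lydian",   time_signature="4/4"),
    ("positive", "low"):  MusicParams(mode="major",    time_signature="3/4"),
    ("negative", "high"): MusicParams(mode="phrygian", time_signature="7/8"),
    ("negative", "low"):  MusicParams(mode="minor",    time_signature="6/8"),
}

def params_for_mood(valence: str, energy: str) -> MusicParams:
    """Select composition parameters as the character's mood shifts in play."""
    return MOOD_MATRIX[(valence, energy)]

print(params_for_mood("negative", "high"))  # -> phrygian mode, 7/8 time
```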
ONE'S OWN SOUNDTRACK: AFFECTIVE MUSIC SYNTHESIS
Computer music usually sounds mechanical; hence, if the musicality and musical expression of virtual actors could be enhanced according to the user's mood, the quality of experience would be amplified. We present a solution based on improvisation using cognitive models, case-based reasoning (CBR), and fuzzy values acting on musical notes close to the affective target, as retrieved from the CBR per context. It modifies music pieces according to an interpretation of the user's emotive state as computed by the emotive input acquisition component of the CALLAS framework. The CALLAS framework incorporates the Pleasure-Arousal-Dominance (PAD) model, which reflects the emotive state of the user and provides the criteria for the music affectivisation process. Using combinations of positive and negative states for affective dynamics, the octants of temperament space as specified by this model are stored as base reference emotive states in the case repository, each case including a configurable ...
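A minimal sketch of octant-based case retrieval in PAD space, assuming (as the abstract suggests) one base reference case per octant. The case payloads and the nearest-octant distance metric are illustrative.

```python
# Sketch: retrieve the stored case whose PAD octant is nearest the user state.
import numpy as np

# Octant reference points: every sign combination of (P, A, D).
OCTANTS = [np.array([p, a, d]) for p in (-1, 1) for a in (-1, 1) for d in (-1, 1)]
CASES = {tuple(o): f"case_{i}" for i, o in enumerate(OCTANTS)}  # placeholder payloads

def retrieve(pad: np.ndarray) -> str:
    """Return the base case whose octant is nearest the measured PAD state."""
    nearest = min(OCTANTS, key=lambda o: np.linalg.norm(pad - o))
    return CASES[tuple(nearest)]

# Mildly pleasant, calm, roughly neutral dominance -> (+, -, +) octant.
print(retrieve(np.array([0.4, -0.7, 0.1])))
```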
COMPUTER-GENERATING EMOTIONAL MUSIC: THE DESIGN OF AN AFFECTIVE MUSIC ALGORITHM
This paper explores one way to use music in the context of affective design. We have built a real-time music generator designed around valence and arousal, two components of certain models of emotion. When set to a desired valence and arousal, the algorithm plays music corresponding to the intersection of these two parameters. We designed the algorithm using psychological theories of emotion and parameterized features of music that have been tested for affective impact. The result is a modular algorithm design in which our parameters can be implemented in other affective music algorithms. We describe our implementation of these parameters and our strategy for manipulating them to generate musical emotion. Finally, we discuss possible applications of these techniques in the arts, medical systems, and research. We believe that further work will result in a music generator that can produce, on command, music with any of a wide variety of commonly perceived emotional connotations.
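A sketch of what such a modular valence/arousal-to-parameter mapping could look like, with each musical feature an independent function of the two inputs. The specific mapping choices (tempo range, mode split, loudness curve, articulation threshold) are illustrative assumptions, not the authors' calibrated values.

```python
# Sketch: map a (valence, arousal) target to playable musical parameters.
def music_params(valence: float, arousal: float) -> dict:
    """valence, arousal in [-1, 1] -> a bundle of musical parameters."""
    return {
        "tempo_bpm":    int(90 + 60 * arousal),         # faster when aroused
        "mode":         "major" if valence >= 0 else "minor",
        "loudness_db":  -18 + 12 * max(arousal, 0.0),   # louder at high arousal
        "articulation": "staccato" if arousal > 0.5 else "legato",
    }

print(music_params(valence=-0.6, arousal=0.8))  # e.g. tense, agitated music
```

Because each feature is computed independently, individual mappings can be swapped out or recalibrated without touching the rest, which is the modularity the paper argues for.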