On-line Development of Man-Machine Relationships: Motivation-driven Musical Interaction

Interactive Composing as the Expression of Autonomous Machine Motivations

2009

This paper documents a novel model supporting rewarding musical human-machine interaction based on the idea of mutual influence rather than the specification of explicit, scripted interaction protocols. A biology-inspired computational model is suggested, containing networks for listening, playing and the unsupervised synthesis of autonomous machine motivations. Motivations are assembled from non-linear relationships that interpret external changes, implemented in the drive object. A drive holds two competing motivations: (1) integration with a human-suggested context or (2) expression of a native character. A population of musical processing functions is evolved online so as to offer musical expertise in fulfilling the system's implicit goal, i.e., integration or expression. The shifting musical distance between consecutive statements by human and machine is traced in time so as to derive a fitness measure for the musical processing functions currently in use; the service they propose for atta...
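The drive mechanism described above can be caricatured in a few lines: a drive watches the musical distance between consecutive human and machine statements, reading a shrinking distance as serving integration and a growing one as serving expression. All class and method names below are illustrative inventions, not taken from the paper.

```python
class Drive:
    """Toy drive holding two competing motivations (illustrative sketch).

    Satisfaction is read off the trend of the musical distance between
    consecutive human and machine statements: integration is served when
    the distance shrinks, expression when it grows.
    """

    def __init__(self):
        self.history = []  # musical distances, most recent last

    def observe(self, distance):
        self.history.append(distance)

    def dominant_motivation(self):
        if len(self.history) < 2:
            return "undecided"
        delta = self.history[-1] - self.history[-2]
        return "integration" if delta < 0 else "expression"
```

A real drive would combine several such non-linear readings of external change; this sketch keeps only the sign of the most recent trend.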

DOI: 10.1007/978-3-642-12337-5_10

Interaction with Machine Improvisation

2012


Motivated Learning in Human-Machine Improvisation

2018

This paper describes a machine learning approach in the context of non-idiomatic human-machine improvisation. In an attempt to avoid explicit mapping of user actions to machine responses, an experimental machine learning strategy is suggested in which rewards are derived from the implied motivation of the human interactor. Two motivations are at work: integration (aiming to connect with machine-generated material) and expression (independent activity). By tracking consecutive changes in musical distance (i.e., melodic similarity) between human and machine, such motivations can be inferred. A variation of Q-learning is used, featuring a self-optimizing variable-length state-action-reward list. The system (called Pock) is tunable into particular behavioral niches by means of a limited number of parameters. Pock is designed as a recursive structure and behaves as a complex dynamical system. When tracking system variables over time, emergent non-trivial patterns reveal experimental evidenc...
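As a rough sketch of the learning loop described above, the reward under the integration motivation can be taken as the drop in melodic distance between consecutive statements, fed into an ordinary tabular Q-learning update. The distance metric, action names and hyperparameters are invented for illustration; Pock's self-optimizing variable-length state-action-reward list is not modeled here.

```python
ACTIONS = ("repeat", "transpose", "invert")  # illustrative machine responses

def melodic_distance(a, b):
    """Toy melodic distance: mean absolute pitch difference over the overlap."""
    n = min(len(a), len(b))
    return sum(abs(x - y) for x, y in zip(a, b)) / n

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step on a dict-backed Q table."""
    best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return q[(state, action)]

# Under the integration motivation the reward is the decrease in distance:
#   reward = previous_distance - current_distance
# Under expression, the sign would flip.
```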

The Emergent Musical Environments: An Artificial Life Approach

Proceedings of ECAL 2007 Workshop on Music and Artificial Life (MusicAL 2007), Lisbon, Portugal, 2007

Our research is aimed at investigating the genesis and development of musical styles in artificial worlds. Focusing on the analysis of piano improvisation, we have designed and implemented a computer system (Interactive Musical Environments - iMe) with which we analyse processes involved in music perception and cognition in order to evaluate how musical influence can lead to particular musical worldviews. iMe also entails interaction between software agents and human pianists playing improvised music. In this paper we discuss the main components and algorithms that comprise the system.

A real-time genetic algorithm in human-robot musical improvisation

2008

The paper describes an interactive musical system that utilizes a genetic algorithm in an effort to create inspiring collaborations between human musicians and an improvisatory robotic xylophone player. The robot is designed to respond to human input in an acoustic and visual manner, evolving a human-generated phrase population based on a similarity driven fitness function in real time.
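A minimal version of such a similarity-driven evolutionary loop might look as follows; the fitness metric, mutation scheme and population handling are assumptions for illustration, not the robot's actual implementation.

```python
import random

def similarity_fitness(phrase, target):
    """Fitness grows as a phrase approaches the human phrase (toy metric)."""
    diff = sum(abs(a - b) for a, b in zip(phrase, target))
    return 1.0 / (1.0 + diff)

def evolve_step(population, target, rng):
    """One generation: rank by similarity, keep the best half, mutate copies."""
    ranked = sorted(population,
                    key=lambda p: similarity_fitness(p, target),
                    reverse=True)
    survivors = ranked[: len(ranked) // 2]
    children = [
        [pitch + rng.choice((-1, 0, 1)) for pitch in parent]  # point mutation
        for parent in survivors
    ]
    return survivors + children
```

Run in real time, each generation would be seeded from the most recent human-generated phrases and the best individuals rendered by the robot.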

Jive: A generative, interactive, virtual, evolutionary music system

Applications of Evolutionary …, 2010

A novel paradigm and system for interactive generative music are described. Families of musical pieces are represented as functions of a time variable and several variables under user control. Composition/performance proceeds in two stages: interactive grammatical evolution is used to represent, explore, and optimise the possible functions, and the computer mouse or a Wii controller can then be used for real-time interaction with the generative process. We present the rationale for design decisions and several pieces of example music.
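The grammatical-evolution stage can be sketched with the standard GE genotype-to-phenotype mapping: each integer codon, taken modulo the number of productions, selects the rule for the leftmost non-terminal. The toy grammar over a time variable t and a control variable x is an assumption, not Jive's actual grammar.

```python
GRAMMAR = {
    "expr": [["expr", "+", "expr"], ["expr", "*", "expr"], ["term"]],
    "term": [["t"], ["x"], ["0.5"]],
}

def derive(codons, max_depth=6):
    """Map a codon list to an expression string via the grammar above."""
    idx = 0

    def expand(symbol, depth):
        nonlocal idx
        if symbol not in GRAMMAR:
            return symbol  # terminal symbol
        rules = GRAMMAR[symbol]
        if depth >= max_depth:
            choice = rules[-1]  # force the non-recursive rule to terminate
        else:
            choice = rules[codons[idx % len(codons)] % len(rules)]
            idx += 1
        return "".join(expand(s, depth + 1) for s in choice)

    return expand("expr", 0)
```

The resulting expression string is a function of t and x, so real-time control (mouse or Wii controller) amounts to re-evaluating it with new variable values.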

Interaction with Machine Improvisation

The Structure of Style, 2010

We describe two multi-agent architectures for improvisation-oriented musician-machine interaction systems that learn in real time from human performers. The improvisation kernel is based on sequence modeling and statistical learning. We present two frameworks of interaction with this kernel. In the first, the stylistic interaction is guided by a human operator in front of an interactive computer environment. In the second framework, the stylistic interaction is delegated to machine intelligence, and knowledge propagation and decision making are therefore handled by the computer alone. The first framework involves a hybrid architecture using two popular composition/performance environments, Max and OpenMusic, which are put to work and communicate together, each one handling the process at a different time/memory scale. The second framework shares the same representational schemes with the first but uses an Active Learning architecture based on collaborative, competitive and memory-based learning to handle stylistic interactions. Both systems are capable of processing real-time audio/video as well as MIDI. After discussing the general cognitive background of improvisation practices, the statistical modeling tools and the concurrent agent architecture are presented. Then, an Active Learning scheme is described and considered in terms of using different improvisation regimes for improvisation planning. Finally, we provide more details about the different system implementations and describe several performances with the system.
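The learn-and-recombine behavior of such an improvisation kernel can be caricatured with a first-order Markov model over incoming notes; the systems described here use much richer sequence models, so this is only a structural sketch with invented names.

```python
import random
from collections import defaultdict

class MarkovKernel:
    """Minimal stand-in for a sequence-learning improvisation kernel.

    Learns note-to-note continuations from the performer's stream and
    recombines them to improvise new sequences in the learned style.
    """

    def __init__(self):
        self.continuations = defaultdict(list)

    def learn(self, notes):
        for a, b in zip(notes, notes[1:]):
            self.continuations[a].append(b)

    def improvise(self, start, length, rng):
        out = [start]
        for _ in range(length - 1):
            options = self.continuations.get(out[-1])
            if not options:
                break  # no learned continuation from this note
            out.append(rng.choice(options))
        return out
```

Because learning is incremental, `learn` can be called on every incoming phrase while `improvise` is queried concurrently, mirroring the real-time setting.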

Evolving Expressive Music Performance through Interaction of Artificial Agent Performers

Proceedings of European Conference on Artificial Life (ECAL) Workshop of Music and Artificial Life, Lisbon, Portugal, 2007

We propose a model of expressive music performance (EMP), focusing on the emergence of EMP under social pressure, including social interaction and generational inheritance. Previously, we reported a system to evolve EMP using a genetic algorithm, exploring the effect of generational inheritance. This paper presents a system that evolves expressive performance profiles through social interaction within a society of artificial agent performers. Each performer owns a hierarchical pulse set (i.e., hierarchical duration vs. amplitude matrices), representing a performance profile for a given piece. An agent performer evaluates a performance profile with a set of rules derived from the structure of the piece in question, imitates others' performances if appropriate, and then modifies its pulse set accordingly. We demonstrate that suitable performance profiles emerge from social interactions, where both the diversity and the commonality of evolved performances are observed in the society of agents.
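The imitation step might be modeled as a weighted blend of two pulse sets, treating each set as a matrix of amplitude scalings indexed by metrical level and pulse position; the representation and update rule here are illustrative assumptions, not the paper's actual rule set.

```python
def imitate(own, other, weight=0.5):
    """Blend an agent's pulse set toward another performer's (toy update).

    A pulse set is modeled as a nested list: rows are metrical levels,
    columns are pulse positions, values are amplitude scalings.
    """
    return [
        [(1 - weight) * a + weight * b for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(own, other)
    ]
```

With `weight` controlled by the rule-based evaluation, an agent imitates strongly only when the observed performance scores well against the piece's structure.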