An Intelligent Music System to Perform Different “Shapes of Jazz—To Come”

An Artificially Intelligent Jazz Performer

Journal of New Music Research, 1999

This paper presents an intelligent agent model for simulating the behavior of a jazz bass player during live performance. In jazz performance, there is a strikingly large gap between the instructions given in a chord grid and the music actually being played. To bridge this gap, we ...

NetNeg: A connectionist-agent integrated system for representing musical knowledge

Annals of Mathematics and Artificial Intelligence (AMAI), 1999

The system presented here shows the feasibility of modeling the knowledge involved in a complex musical activity by integrating sub-symbolic and symbolic processes. This research focuses on the question of whether there is any advantage in integrating a neural network together with a distributed artificial intelligence approach within the music domain. The primary purpose of our work is to design a model that describes the different aspects a user might be interested in considering when involved in a musical activity. The approach we suggest in this work enables the musician to encode his knowledge, intuitions, and aesthetic taste into different modules. The system captures these aspects by computing and applying three distinct functions: rules, fuzzy concepts, and learning. As a case study, we began experimenting with first species two-part counterpoint melodies. We have developed a hybrid system composed of a connectionist module and an agent-based module to combine the sub-symbol...
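The first-species setting mentioned above pairs one counterpoint note against each cantus-firmus note, subject to well-known symbolic constraints such as consonant vertical intervals and no parallel perfect fifths or octaves. As a minimal illustration of what such a rule module checks, and not a description of NetNeg's actual implementation, the following Python sketch uses MIDI pitch lists and a deliberately reduced rule set, both of which are assumptions made for brevity.

```python
# Illustrative sketch only (not the NetNeg system itself): a minimal symbolic rule check
# for first-species, two-part counterpoint. Voices are given as lists of MIDI pitches,
# one counterpoint note against each cantus-firmus note; the rule set is deliberately reduced.

CONSONANCES = {0, 3, 4, 7, 8, 9}   # intervals mod 12 treated as consonant (P4 excluded)
PERFECT = {0, 7}                   # unison/octave and perfect fifth, mod 12

def rule_violations(cantus, counterpoint):
    """Return (index, message) pairs for two equal-length voices."""
    problems = []
    for i, (c, p) in enumerate(zip(cantus, counterpoint)):
        interval = abs(p - c) % 12
        if interval not in CONSONANCES:
            problems.append((i, "dissonant vertical interval"))
        if i > 0:
            prev = abs(counterpoint[i - 1] - cantus[i - 1]) % 12
            same_direction = (cantus[i] - cantus[i - 1]) * (counterpoint[i] - counterpoint[i - 1]) > 0
            if interval in PERFECT and prev == interval and same_direction:
                problems.append((i, "parallel perfect interval"))
    return problems

if __name__ == "__main__":
    cantus       = [60, 62, 64, 62, 60]   # C  D  E  D  C
    counterpoint = [72, 74, 76, 71, 72]   # C' D' E' B  C'
    print(rule_violations(cantus, counterpoint))   # parallel octaves flagged at indices 1 and 2
```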

Music, Intelligence and Artificiality

2000

The discipline of Music-AI is defined as that activity which seeks to program computers to perform musical tasks in an intelligent, which possibly means humanlike, way. A brief historical survey of different approaches within the discipline is presented. Two particular issues arise: the explicit representation of knowledge; and symbolic and subsymbolic representation and processing. When attempting to give a precise definition of Music-AI, it is argued that all musical processes must make some reference to human behaviour, and so Music-AI is a central rather than a peripheral discipline for musical computing. However, it turns out that the goals of Music-AI as first expressed, the mimicking of human behaviour, are impossible to achieve in full, and that it is impossible, in principle, for computers to pass a musical version of the Turing test. In practice, however, computers are used for their non-human-like behaviour just as much as their human-like behaviour, so the real goal of Mu...

Modeling Expressive Music Performance in Jazz

In this paper we describe a machine learning approach to one of the most challenging aspects of computer music: modeling the knowledge applied by a musician when performing a score in order to produce an expressive performance of a piece. We apply machine learning techniques to a set of monophonic recordings of Jazz standards in order to induce both rules and a numeric model for expressive performance. We implement a tool for automatic expressive performance transformations of Jazz melodies using the induced knowledge.
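Expressive-performance models of this kind are typically expressed as deviations from the nominal score in timing, duration, and dynamics. The Python sketch below shows only the general shape of applying such induced rules to a flat melody; the specific rule ("lengthen and stress notes that begin a phrase") is a hypothetical stand-in, not a rule reported in the paper.

```python
# Illustrative sketch only: applying induced expressive-performance deviations to a flat
# (nominal) melody. The rule below is a made-up placeholder for whatever rules the
# learning step might induce; it is not the paper's model.

from dataclasses import dataclass

@dataclass
class Note:
    onset: float        # seconds, nominal
    duration: float     # seconds, nominal
    velocity: int       # MIDI velocity 0-127
    phrase_start: bool = False

def apply_expressive_rules(notes):
    performed = []
    for n in notes:
        dur_factor = 1.15 if n.phrase_start else 1.0    # hypothetical induced rule
        vel_offset = 10 if n.phrase_start else 0
        performed.append(Note(n.onset, n.duration * dur_factor,
                              min(127, n.velocity + vel_offset), n.phrase_start))
    return performed

score = [Note(0.0, 0.5, 80, phrase_start=True), Note(0.5, 0.5, 80), Note(1.0, 1.0, 80)]
for n in apply_expressive_rules(score):
    print(n)
```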

The Emergent Musical Environments: An Artificial Life Approach

Proceedings of ECAL 2007 Workshop on Music and Artificial Life (MusicAL 2007), Lisbon, Portugal, 2007

Our research is aimed at investigating the genesis and development of musical styles in artificial worlds. Focusing on the analysis of piano improvisation, we have designed and implemented a computer system (Interactive Musical Environments - iMe) with which we analyse processes involved in music perception and cognition in order to evaluate how musical influence can lead to particular musical worldviews. iMe also entails interaction between software agents and human pianists playing improvised music. In this paper we discuss the main components and algorithms that comprise the system.

Artificial Intelligence and Musical Cognition [and Discussion]

Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 1994

There has been much interest, in recent years, in the possibility of representing our musical faculties in computational terms. A necessary first step is to develop a formally precise theory of musical structure, and to this end, useful analogies may be drawn between music and natural language. Metrical rhythms resemble syntactic structures in being generated by phrase-structure grammars; as for the pitch relations between notes, the tonal intervals of Western music form a mathematical group generated by the octave, the fifth and the third. On this theoretical foundation one can construct AI programs for the transcription, editing and performance of classical keyboard music. A high degree of complexity and precision is required for the faithful representation of a sophisticated pianoforte composition, and to achieve a satisfactory level of performance it is essential to respect the minute variations of loudness and timing by which human performers reveal its hierarchical structure.
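The claim that tonal intervals form a group generated by the octave, the fifth, and the third can be made concrete by writing each just-intonation interval as an exponent vector over the primes 2, 3, and 5: composition is vector addition and inversion is negation, so the intervals form a group isomorphic to Z^3. The following Python sketch illustrates that structure; it is an editor's worked example, not code from the paper.

```python
# Illustrative sketch: just-intonation intervals generated by the octave (2:1), the perfect
# fifth (3:2), and the major third (5:4), represented as exponent vectors over 2, 3, 5.
# Composition is vector addition and inversion is negation, so the set forms a group.

from fractions import Fraction

OCTAVE = (1, 0, 0)    # 2^1
FIFTH  = (-1, 1, 0)   # 3/2 = 2^-1 * 3^1
THIRD  = (-2, 0, 1)   # 5/4 = 2^-2 * 5^1

def compose(a, b):
    return tuple(x + y for x, y in zip(a, b))

def invert(a):
    return tuple(-x for x in a)

def ratio(v):
    a, b, c = v
    return Fraction(2) ** a * Fraction(3) ** b * Fraction(5) ** c

fourth = compose(OCTAVE, invert(FIFTH))   # octave minus fifth = perfect fourth (4:3)
major_sixth = compose(THIRD, fourth)      # third plus fourth = major sixth (5:3)
print(ratio(fourth), ratio(major_sixth))  # 4/3 5/3
```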

Human-like Music Synthesis

One of the major challenges in computer music is to endow the resulting piece with the expressiveness that characterizes human performances. The main problem is to grasp the performer's "touch"; that is, the knowledge brought to bear when performing a score. Humans acquire it through a long process of observation and imitation. For this reason, previous approaches, based on following musical rules that try to capture interpretation knowledge, have serious limitations in achieving human-like music synthesis. An alternative approach, much closer to the observation-imitation process observed in humans, is to directly use the interpretation knowledge implicit in recordings of human performers instead of trying to make such knowledge explicit. We report on a system, SaxEx, based on this alternative approach, capable of synthesizing truly human-like performances of Jazz ballads from examples of human performers within a case-based reasoning system. The results we have obtained clearly show that it is perfectly possible to achieve human-like music synthesis by means of artificial intelligence techniques.
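In a case-based approach of this kind, the core step is retrieving stored performance fragments whose musical context resembles the new phrase and reusing their expressive parameters. The sketch below shows only that retrieve-and-reuse core in Python, in the spirit of (but not taken from) SaxEx; the case features, distance measure, and stored deviations are assumptions made for the example.

```python
# Illustrative sketch only: nearest-neighbour retrieval of expressive deviations from a
# small case base. The features and distance measure are invented for the example and
# are not the representation used by SaxEx.

import math

# Each case: (context features, observed expressive deviations)
# features: (metrical strength 0-1, melodic interval in semitones, note duration in beats)
CASES = [
    ((1.0,  2, 1.0), {"dur_factor": 1.20, "vel_offset": +12}),
    ((0.5, -1, 0.5), {"dur_factor": 0.95, "vel_offset": -5}),
    ((0.0,  4, 2.0), {"dur_factor": 1.05, "vel_offset": +3}),
]

def retrieve(context):
    """Return the expressive deviations of the most similar stored case."""
    return min(CASES, key=lambda case: math.dist(case[0], context))[1]

print(retrieve((0.9, 2, 1.0)))   # reuse the deviations of the closest recorded situation
```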