AI Song Contest: Human-AI Co-Creation in Songwriting

2020

Machine learning is challenging the way we make music. Although research in deep generative models has dramatically improved the capability and fluency of music models, recent work has shown that it can be challenging for humans to partner with this new class of algorithms. In this paper, we present findings on what 13 musician/developer teams, a total of 61 users, needed when co-creating a song with AI, the challenges they faced, and how they leveraged and repurposed existing characteristics of AI to overcome some of these challenges. Many teams adopted modular approaches, such as independently running multiple smaller models that align with the musical building blocks of a song, before re-combining their results. As ML models are not easily steerable, teams also generated massive numbers of samples and curated them post-hoc, used a range of strategies to direct the generation, or algorithmically ranked the samples. Ultimately, teams not only had to manage the "flare and focus" aspects of the creative process […]
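
The "generate massively, curate post-hoc" strategy the abstract describes is easy to sketch. The snippet below is a hypothetical illustration, not any team's actual pipeline: `generate_sample` stands in for a draw from a trained model, and `score_sample` for whatever ranking heuristic a team chose.

```python
import heapq
import random

def generate_sample(rng: random.Random) -> list[int]:
    """Stand-in for one draw from a generative music model.
    Here it just emits a random 8-note melody as MIDI pitches;
    a real team would call their trained model instead."""
    return [rng.randint(60, 72) for _ in range(8)]

def score_sample(melody: list[int]) -> float:
    """Stand-in ranking heuristic: prefer stepwise motion,
    i.e. a smaller average interval between consecutive notes."""
    leaps = [abs(a - b) for a, b in zip(melody, melody[1:])]
    return -sum(leaps) / len(leaps)

def generate_and_curate(n_samples: int = 1000, keep: int = 5) -> list[list[int]]:
    """Draw many samples, then keep only the top-ranked few --
    the post-hoc curation strategy the abstract describes."""
    rng = random.Random(0)
    samples = (generate_sample(rng) for _ in range(n_samples))
    return heapq.nlargest(keep, samples, key=score_sample)

if __name__ == "__main__":
    for melody in generate_and_curate():
        print(melody)
```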

I Keep Counting: An Experiment in Human/AI Co-creative Songwriting

2021

Musical co-creativity aims to make humans and computers collaborate to compose music. As an MIR team in computational musicology, we experimented with co-creativity when writing our entry to the "AI Song Contest 2020". Artificial intelligence was used to generate the song's structure, harmony, lyrics, and hook melody, both independently and as a basis for human composition. It was a challenge from both a creative and a technical point of view: in a very short time frame, the team had to adapt its own simple models, or experiment with existing ones, for a related yet still unfamiliar task, music generation through AI. The song we propose is called "I Keep Counting". We openly detail the process of songwriting, arrangement, and production. This experience raised many questions about the relationship between creativity and the machine, both in music analysis and generation, and about the role AI could play in assisting a composer in their work. We experimented with AI as automation, mechanizing some parts of the composition, and especially with AI as suggestion, fostering the composer's creativity through surprising lyrics, uncommon successions of sections, and unexpected chord progressions. Working with this material was thus a stimulus for human creativity.
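
As a minimal sketch of "AI as suggestion", the kind of simple model the abstract mentions could be a first-order Markov chain over chords. The transition table below is invented for illustration and is not the team's actual model, which was fitted to real corpora.

```python
import random

# Toy first-order Markov transition table over chord symbols.
# These probabilities are invented for the example.
TRANSITIONS = {
    "C":  [("F", 0.4), ("G", 0.4), ("Am", 0.2)],
    "F":  [("G", 0.5), ("C", 0.3), ("Dm", 0.2)],
    "G":  [("C", 0.6), ("Am", 0.3), ("F", 0.1)],
    "Am": [("F", 0.5), ("Dm", 0.3), ("G", 0.2)],
    "Dm": [("G", 0.7), ("Am", 0.3)],
}

def suggest_progression(start: str = "C", length: int = 8,
                        seed: int | None = None) -> list[str]:
    """Walk the chain to propose a chord progression that the
    composer can accept, edit, or reject -- suggestion, not dictation."""
    rng = random.Random(seed)
    chords = [start]
    for _ in range(length - 1):
        options, weights = zip(*TRANSITIONS[chords[-1]])
        chords.append(rng.choices(options, weights=weights, k=1)[0])
    return chords

print(suggest_progression(seed=42))  # deterministic for a fixed seed
```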

Harmony in Synthesis: Exploring Human-AI Collaboration in Music

IRJCS, AM Publications, India, 2024

The nexus between artificial intelligence (AI) and human creativity marks a fascinating paradigm shift in the dynamic field of music composition. To understand the effects on musical composition, production, and performance, this research study, "Harmony in Synthesis: Exploring Human-AI Collaboration in Music," explores the dynamic interplay between human artists and AI systems. The opening establishes the scene by describing the development of AI in the music business and emphasizing the revolutionary possibilities of collaboration. Through a thorough literature review, this study surveys current knowledge, pointing out gaps that our research aims to remedy and adding to the conversation on AI's involvement in creative processes.

Machine Learning Research that Matters for Music Creation: A Case Study

Journal of New Music Research, 2018

Research applying machine learning to music modeling and generation typically proposes model architectures, training methods and datasets, and gauges system performance using quantitative measures like sequence likelihoods and/or qualitative listening tests. Rarely does such work explicitly question and analyse its usefulness for and impact on real-world practitioners, and then build on those outcomes to inform the development and application of machine learning. This article attempts to do these things for machine learning applied to music creation. Together with practitioners, we develop and use several applications of machine learning for music creation, and present a public concert of the results. We reflect on the entire experience to arrive at several ways of advancing these and similar applications of machine learning to music creation.

Human-AI Musicking: A Framework for Designing AI for Music Co-creativity

Zenodo (CERN European Organization for Nuclear Research), 2023

In this paper, we present a framework for understanding human-AI musicking. This framework prompts a series of questions for reflecting on various aspects of the creative interrelationships between musicians and AI, and can thus be used as a tool for designing creative AI systems for music. AI is increasingly being utilised in the sonic arts and music performance, as well as in digital musical instrument design. Existing works generally focus on the theoretical and technical considerations needed to design such systems. Our framework adds to this corpus by employing a bottom-up approach; as such, it is built from an embodied and phenomenological perspective. With our framework, we put forward a tool that can be used to design, develop, and deploy creative AI in ways that are meaningful to musicians, from the perspective of musicking (doing music). Following a detailed introduction to the framework, we introduce the four case studies that were used to refine and validate it: a breathing guitar, a biosensing director AI, a folk-melody generator, and a real-time co-creative robotic score. Each of these is at a different stage of development, ranging from ideation, through prototyping and refinement, to evaluation. Additionally, each design case presents a distinct mode of interaction along a continuum of human-AI interaction, ranging from creation tool to co-creative agent. We then present reflection points based on our evaluation of using, challenging, and testing the framework with active projects. Our findings warrant future widespread application of this framework in the wild.

Novice-AI Music Co-Creation via AI-Steering Tools for Deep Generative Models

Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems

While generative deep neural networks (DNNs) have demonstrated their capacity for creating novel musical compositions, less attention has been paid to the challenges and potential of co-creating with these musical AIs, especially for novices. In a needfinding study with a widely used, interactive musical AI, we found that the AI can overwhelm users with the amount of musical content it generates, and frustrate them with its nondeterministic output. To better match co-creation needs, we developed AI-steering tools, consisting of Voice Lanes that restrict content generation to particular voices; Example-Based Sliders to control the similarity of generated content to an existing example; Semantic Sliders to nudge music generation in high-level directions (happy/sad, conventional/surprising); and Multiple Alternatives of generated content to audition and choose from. In a summative study (N=21), we discovered the tools not only increased users' trust, control, comprehension, and sense of collaboration with the AI, but also contributed to a greater sense of self-efficacy and ownership of the composition relative to the AI.
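
The paper itself supplies no code here, but the intent of an Example-Based Slider can be sketched as similarity-banded rejection sampling: keep drawing until a sample is about as close to the reference as the slider asks. The `similarity` metric and the toy `draw` model below are placeholders, not the authors' implementation.

```python
import random

def similarity(a: list[int], b: list[int]) -> float:
    """Fraction of positions where two equal-length pitch
    sequences agree -- a deliberately crude stand-in metric."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def slider_sample(model_draw, reference: list[int], target: float,
                  tolerance: float = 0.15, max_tries: int = 500) -> list[int]:
    """Keep drawing from the model until a sample lands inside the
    similarity band implied by the slider position; otherwise fall
    back to the closest candidate seen."""
    best, best_gap = None, float("inf")
    for _ in range(max_tries):
        candidate = model_draw()
        gap = abs(similarity(candidate, reference) - target)
        if gap <= tolerance:
            return candidate
        if gap < best_gap:
            best, best_gap = candidate, gap
    return best

# Toy usage: the "model" randomly mutates a reference melody.
reference = [60, 62, 64, 65, 67, 69, 71, 72]
rng = random.Random(1)
draw = lambda: [p if rng.random() < 0.5 else rng.randint(60, 72)
                for p in reference]
print(slider_sample(draw, reference, target=0.75))
```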

Editorial: JCMS Special Issue of the first Conference on AI Music Creativity

Journal of Creative Music Systems

The International Conference on AI Music Creativity (AIMC, https://aimusiccreativity.org/) is the merger of the International Workshop on Musical Metacreation (MUME, https://musicalmetacreation.org/) and the conference series on Computer Simulation of Music Creativity (CSMC, https://csmc2018.wordpress.com/). This special issue gathers selected papers from the first edition of the conference, along with paper versions of two of its keynotes. It contains six papers that apply novel approaches to the generation and classification of music. Covering several generative musical tasks, such as composition, rhythm generation, and orchestration, as well as the machine listening tasks of tempo and genre recognition, these selected papers present state-of-the-art techniques in music AI. The issue opens with an ode to computer musicking by keynote speaker Alice Eldridge, and Johan Sundberg's account of his use of analysis-by-synthesis for musical applications.

Collaborative Artificial Intelligence in Music Production

Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion

The use of technology has revolutionized the process of music composition, recording, and production over the last 30 years. One longstanding fusion of technology and music is the use of artificial intelligence in the process of music composition. However, much less attention has been given to the application of AI in the process of collaboratively composing and producing a piece of recorded music. The aim of this project is to explore such use of artificial intelligence in music production. The research presented here includes discussion of an autoethnographic study of the interactions between songwriters, with the intention that these can be used to model the collaborative process and that a computational system could be trained using this information. The research indicated that there were repeated patterns in the interactions of the participating songwriters.

Cococo: AI-Steering Tools for Music Novices Co-Creating with Generative Models

2020

In this work, we investigate how novices co-create music with a deep generative model, and what types of interactive controls are important for an effective co-creation experience. Through a needfinding study, we found that generative AI can overwhelm novices when the AI generates too much content, and can make it hard to express creative goals when outputs appear to be random. To better match co-creation needs, we built Cococo, a music editor web interface that adds interactive capabilities via a set of AI-steering tools. These tools restrict content generation to particular voices and time measures, and help to constrain non-deterministic output to specific high-level directions. We found that the tools helped users increase their control, self-efficacy, and creative ownership, and we describe how the tools affected novices' strategies for composing and managing their interaction with AI.
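
The voice/measure restriction can be pictured as masked infilling: regenerate only the cells the user selects and leave the rest alone. This is a minimal sketch under a toy one-note-per-measure encoding, with a random draw standing in for Cococo's actual model call.

```python
import random

def infill(score: list[list[int | None]], voice: int,
           measures: set[int], seed: int = 0) -> list[list[int | None]]:
    """Regenerate only the chosen voice in the chosen measures,
    leaving everything the user wrote untouched. score[v][m] holds
    the pitch of voice v in measure m (a coarse toy encoding)."""
    rng = random.Random(seed)
    out = [row[:] for row in score]  # never mutate the user's score
    for m in measures:
        out[voice][m] = rng.randint(60, 72)  # stand-in for a model call
    return out

# Four voices, eight measures; None marks cells left for the AI.
score = [[60] * 8, [64] * 8, [67] * 8, [None] * 8]
print(infill(score, voice=3, measures={2, 3, 4}))
```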