I Keep Counting: An Experiment in Human/AI Co-creative Songwriting
Related papers
AI Song Contest: Human-AI Co-Creation in Songwriting
2020
Machine learning is challenging the way we make music. Although research in deep generative models has dramatically improved the capability and fluency of music models, recent work has shown that it can be challenging for humans to partner with this new class of algorithms. In this paper, we present findings on what 13 musician/developer teams, a total of 61 users, needed when co-creating a song with AI, the challenges they faced, and how they leveraged and repurposed existing characteristics of AI to overcome some of these challenges. Many teams adopted modular approaches, such as independently running multiple smaller models that align with the musical building blocks of a song, before re-combining their results. As ML models are not easily steerable, teams also generated massive numbers of samples and curated them post-hoc, or used a range of strategies to direct the generation, or algorithmically ranked the samples. Ultimately, teams not only had to manage the "flare and fo...
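The "generate massive numbers of samples, then curate or rank them post-hoc" strategy the teams describe can be sketched as follows. This is a hypothetical illustration only: the random melody generator and the smoothness heuristic are invented stand-ins for whatever models and ranking criteria a team might actually use.

```python
import random

def generate_melody(rng, length=8):
    """Stand-in for an ML model: emits a random sequence of MIDI pitches."""
    return [rng.randint(60, 72) for _ in range(length)]

def smoothness_score(melody):
    """Toy ranking heuristic: prefer melodies with small pitch leaps."""
    leaps = [abs(a - b) for a, b in zip(melody, melody[1:])]
    return -sum(leaps)  # fewer/smaller leaps -> higher score

def generate_and_rank(n_samples=500, top_k=3, seed=0):
    """Sample a large batch from an unsteerable generator, rank post-hoc."""
    rng = random.Random(seed)
    samples = [generate_melody(rng) for _ in range(n_samples)]
    return sorted(samples, key=smoothness_score, reverse=True)[:top_k]

best = generate_and_rank()
```

In practice the ranking step is where human curation re-enters the loop: an algorithmic score like the one above can pre-filter thousands of samples down to a shortlist the musicians then audition by ear.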
Editorial: JCMS Special Issue of the first Conference on AI Music Creativity
Journal of Creative Music Systems
The International Conference on AI Music Creativity (AIMC, https://aimusiccreativity.org/) is the merger of the international workshop on Musical Metacreation (MUME, https://musicalmetacreation.org/) and the conference series on Computer Simulation of Music Creativity (CSMC, https://csmc2018.wordpress.com/). This special issue gathers selected papers from the first edition of the conference, along with paper versions of two of its keynotes. It contains six papers that apply novel approaches to the generation and classification of music. Covering several generative musical tasks, such as composition, rhythm generation, and orchestration, as well as the machine listening tasks of tempo and genre recognition, these selected papers present state-of-the-art techniques in Music AI. The issue opens with an ode to computer musicking by keynote speaker Alice Eldridge, and with Johan Sundberg's use of analysis-by-synthesis for musical applications.
Creativity in machines: Music composition using artificial intelligence
ASIAN JOURNAL OF CONVERGENCE IN TECHNOLOGY, 2020
In this paper we propose a framework for taking the next step towards making creative machines. Taking our cue from Turing's 1950 paper in Mind through more recent studies such as Riedl's "The Lovelace 2.0 Test of Artificial Creativity and Intelligence", we examine a highly creative area of human activity: music. We summarize the published work on artificial intelligence and machine learning applied to algorithmic music composition. Different algorithms and techniques are compared in detail, including their key features, advantages, disadvantages, common issues, trade-offs, and future prospects. We then propose our own framework for how machines can be made to learn creativity.
Harmony in Synthesis: Exploring Human-AI Collaboration in Music
IRJCS:: AM Publications,India, 2024
The nexus between artificial intelligence (AI) and human creativity offers a fascinating paradigm shift in the dynamic field of music composition. To understand the effects on musical composition, production, and performance, this research study, "Harmony in Synthesis: Exploring Human-AI Collaboration in Music," explores the dynamic interplay between human artists and AI systems. The opening sets the scene by describing the development of AI in the music business and emphasizing the revolutionary possibilities of collaboration. Through a thorough literature review, this study addresses current knowledge, points out gaps that our research aims to remedy, and adds to the conversation on AI's involvement in creative processes.
Artificial Computational Creativity based on Collaborative Intelligence in Music
AIMC 2021, 2021
In this paper, I will propose a series of Artificial Computational Creativity (ACC) techniques based on Collaborative Intelligence, from a multidisciplinary approach. The common thread is a set of reflections on the Turing Test (TT) that inspire alternative validation metrics. I will propose Collaborative Intelligence (CI) techniques as an expansion of anthropocentric ACC by: replacing the idea of imitation at its basis with playing a game; using self-referentiality and circularity between the generative and the validation processes; building hybrid human-machine networks; incorporating algorithms that function as mediators of the nodes in hybrid networks, avoiding centralities; and integrating self-referential metrics in the works themselves. Finally, I will show how these techniques have been used in a set of works.
Human-AI Musicking: A Framework for Designing AI for Music Co-creativity
Zenodo (CERN European Organization for Nuclear Research), 2023
In this paper, we present a framework for understanding human-AI musicking. This framework prompts a series of questions for reflecting on various aspects of the creative interrelationships between musicians and AI, and thus can be used as a tool for designing creative AI systems for music. AI is increasingly being utilised in the sonic arts and music performance, as well as in digital musical instrument design. Existing works generally focus on the theoretical and technical considerations needed to design such systems. Our framework adds to this corpus by employing a bottom-up approach; as such, it is built from an embodied and phenomenological perspective. With our framework, we put forward a tool that can be used to design, develop, and deploy creative AI in ways that are meaningful to musicians, from the perspective of musicking (doing music). Following a detailed introduction to the framework, we introduce the four case studies that were used to refine and validate it: a breathing guitar, a biosensing director AI, a folk-melody generator, and a real-time co-creative robotic score. Each of these is at a different stage of development, ranging from ideation, through prototyping, into refinement, and finally, evaluation. Additionally, each design case presents a distinct mode of interaction along a continuum of human-AI interaction that ranges from creation tool to co-creative agent. We then present reflection points based on our evaluation of using, challenging, and testing the framework with active projects. Our findings warrant future widespread application of this framework in the wild.
Frontiers in Psychology
CHAMELEON is a computational melodic harmonization assistant. It can harmonize a given melody according to a number of independent harmonic idioms or blends between idioms based on principles of conceptual blending theory. Thus, the system is capable of offering a wealth of possible solutions and viewpoints for melodic harmonization. This study investigates how human creativity may be influenced by the use of CHAMELEON in a melodic harmonization task. Professional and novice music composers participated in an experiment where they were asked to harmonize two similar melodies under two different conditions: one with and one without computational support. A control group harmonized both melodies without computational assistance. The influence of the system was examined both behaviorally, by comparing metrics of user-experience, and in terms of the properties of the artifacts (i.e., pitch class distribution and number of chord types characterizing each harmonization) that were created ...
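The two artifact metrics the CHAMELEON study mentions, pitch-class distribution and number of chord types per harmonization, can be computed along these lines. This is a hypothetical sketch: representing chords as lists of MIDI pitches, and defining a chord type as the set of intervals above the bass, are illustrative assumptions rather than the study's actual encoding.

```python
from collections import Counter

def pitch_class_distribution(chords):
    """Normalized histogram over the 12 pitch classes (0 = C ... 11 = B)."""
    counts = Counter(pitch % 12 for chord in chords for pitch in chord)
    total = sum(counts.values())
    return {pc: counts.get(pc, 0) / total for pc in range(12)}

def num_chord_types(chords):
    """Count distinct chord types: unique interval sets above the bass note."""
    types = {tuple(sorted((p - min(chord)) % 12 for p in chord))
             for chord in chords}
    return len(types)

# Example: C major, F major, G major, C major (root-position triads)
progression = [[60, 64, 67], [65, 69, 72], [67, 71, 74], [60, 64, 67]]
```

Under this encoding all four chords share the major-triad interval set (0, 4, 7), so the progression counts as a single chord type, while the pitch-class histogram still distinguishes the three different roots.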
Automatical Composition of Lyrical Songs
We address the challenging task of automatically composing lyrical songs with matching musical and lyrical features, and we present the first prototype, M.U. Sicus-Apparatus, to accomplish the task. The focus of this paper is especially on the generation of art songs (lieder). The proposed approach writes the lyrics first and then composes music to match them. The crux is that the music composition subprocess has access to the internals of the lyrics-writing subprocess, so the music can be composed to match the intentions and choices of the lyrics writing, rather than just the surface of the lyrics. We present some example songs composed by M.U. Sicus, and we outline first steps towards a general system combining both music composition and lyrics writing.