I am broadly interested in the design of interactive intelligent systems that extend human musical creation and expression. This research lies at the intersection of Machine Learning, HCI, Robotics, and Computer Music. My representative works include interactive composition via style transfer, human-computer interactive performances, _autonomous dancing robots_, large-scale content-based music retrieval, haptic guidance for flute tutoring, and bio-music computing using slime mold.
In particular, I build music agents that compose and arrange music via style transfer and analogy, perform and improvise expressively in concert with human musicians by learning from rehearsal experience, and tutor music beginners using haptic guidance. These efforts unify expressive performance rendering, automatic accompaniment, and algorithmic composition in a machine-learning framework, making music a more accessible and friendly medium for everyone.
I am an Assistant Professor in Machine Learning at MBZUAI. I received my Ph.D. from the Machine Learning Department at Carnegie Mellon University, where I studied Machine Learning and Computer Music advised by Prof. Roger Dannenberg. I was a Neukom Fellow at Dartmouth from 2016 to 2017. In 2010, I received my undergraduate degree in Information Science with a minor in Psychology at Peking University. I am also a professional DI and XIAO (Chinese flute and vertical flute) player. I was the principal soloist of the Chinese Music Institute (CMI) of Peking University, where I also served as the president and assistant conductor. I gave a solo concert in 2010 and a Music AI concert in 2022. (See Music Events)
An Outline of Main Publications (organized by topic and storyline; see here for a complete list).
- Big Picture: [Seminar Discussion, Interpretability in LLMs and "Functional Alignment": video and note]
- Foundation Models and Benchmarks: [MERT] [MARBLE]
- Controlling and Fine-tuning Music LLMs: [Loop Copilot] [Coco-Mulla]