
York professor explores AI as creative partner in music making

When York University Associate Professor Doug Van Nort steps onto a stage, he isn’t just surrounded by musicians – he’s surrounded by collaborators, both human and non-human.

For more than a decade, Van Nort has been developing AI-driven machine partners, or machine agents, that improvise alongside performers. Much of that work has been done through the DisPerSion Lab, which he founded in 2015 to explore new modes of creative expression through technology.

There, rather than imagining a future where AI replaces human creativity, Van Nort says he is intent on challenging artists to think, feel and listen differently. 

“I always foreground that AI should be for creativity support, not creativity replacement,” says Van Nort, who teaches in the Department of Computational Arts, School of the Arts, Media, Performance & Design (AMPD) at York.

Doug Van Nort composing music with humans and machines.

In an era filled with alarm bells about automation and displacement, Van Nort’s vision is notably human-centred. “I remain steadfast in my interest because I’ve seen how these systems can inspire new directions for trained musicians and for people with no musical background at all. The goal is more creative engagement, not less.” 

Van Nort’s research in this area began when he was a graduate student with a deceptively simple question: what does it mean to play a digital instrument?

As his doctoral work unfolded, the question evolved: what if the computer isn’t just an instrument, but a partner?

Improvisation, especially the open, exploratory form Van Nort practices, is already a complex social negotiation, he says. Players listen, respond, assert themselves and negotiate one another’s musical identities. Introducing a machine agent into this space doesn’t merely add a new sound; it reshapes the ecosystem.

Audience reaction to this type of music can be strong. “Just calling something AI or an agent sets people’s expectations. They start listening for its identity: What does it contribute? What are its edges? How does it change the group dynamic?”  

He explains that it’s these shifting relationships that fascinate him most. 

Although AI music research dates back decades, the last 10 years have brought what Van Nort calls “radical advancements.”  

Yet, in contrast to large-scale models trained on vast, untraceable internet data, his approach is intentionally intimate. One of his projects involves training machine agents on years of recordings from his own ensembles, specifically his professional group, thee Doug Van Nort Electro-Acoustic Orchestra.

This curated dataset ensures ethical transparency: he knows every musician whose sound is being learned, and it fosters what he describes as “a deep relational quality” between the agents and the performers.
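By way of illustration only, here is a minimal sketch of what a small, traceable training corpus could look like in code, assuming a local folder of ensemble recordings and the librosa audio library; the folder name, file format and feature choice are assumptions for the example, not the lab’s actual pipeline.

```python
# Illustrative sketch: building a small, fully traceable feature corpus
# from an ensemble's own recordings. The folder layout, WAV format and
# MFCC features are assumptions, not the DisPerSion Lab's actual pipeline.
from pathlib import Path

import librosa
import numpy as np

CORPUS_DIR = Path("recordings/electro_acoustic_orchestra")  # hypothetical folder


def extract_features(audio_path: Path, sr: int = 22050) -> np.ndarray:
    """Load one recording and summarize it as a sequence of MFCC frames."""
    y, sr = librosa.load(audio_path, sr=sr, mono=True)
    # 13 MFCCs per frame is a common, compact timbral description.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.T  # shape: (num_frames, 13)


def build_corpus(corpus_dir: Path) -> dict[str, np.ndarray]:
    """Map each known recording to its feature sequence.

    Because every file comes from a named, consenting collaborator,
    the provenance of the learned material stays fully traceable.
    """
    return {p.name: extract_features(p) for p in sorted(corpus_dir.glob("*.wav"))}


if __name__ == "__main__":
    corpus = build_corpus(CORPUS_DIR)
    for name, feats in corpus.items():
        print(f"{name}: {feats.shape[0]} frames of 13-dim features")
```

The point of keeping the corpus this small and curated is exactly the one Van Nort makes: every item the agent learns from can be traced back to a known collaborator.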

He is also experimenting with using a camera to track the gestures he uses to guide live improvisers. The same visual cues instruct the machine agents, creating a shared field of communication.

“The humans see me. The machine sees me. They’re reacting to the same thing.” 

Additionally, he choreographs interactions between humans and machines: humans respond to AI gestures and vice versa, generating an ever-shifting conversational fabric. 
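To make that “shared field of communication” concrete, here is a minimal sketch of how a camera-tracked gesture could be broadcast to human performers and a machine agent at the same moment. The MediaPipe/OpenCV stack, the OSC addresses and the port numbers are assumptions chosen for the example, not the system used in the lab.

```python
# Illustrative sketch: one conductor gesture (hand height) becomes a single
# cue sent simultaneously to a human-facing monitor patch and a machine agent.
import cv2
import mediapipe as mp
from pythonosc.udp_client import SimpleUDPClient

# Two receivers of the same gesture: a display the humans watch,
# and the improvising machine agent. Ports are hypothetical.
human_monitor = SimpleUDPClient("127.0.0.1", 9000)
machine_agent = SimpleUDPClient("127.0.0.1", 9001)

hands = mp.solutions.hands.Hands(max_num_hands=1)
camera = cv2.VideoCapture(0)

try:
    while True:
        ok, frame = camera.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV delivers BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            wrist = results.multi_hand_landmarks[0].landmark[0]
            # Higher hand -> larger value; a crude stand-in for a dynamics cue.
            intensity = 1.0 - wrist.y  # landmark y runs from top (0) to bottom (1)
            # Both "listeners" receive exactly the same cue at the same moment.
            human_monitor.send_message("/cue/dynamics", intensity)
            machine_agent.send_message("/cue/dynamics", intensity)
finally:
    camera.release()
    hands.close()
```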

These explorations don’t stay confined to the DisPerSion Lab; they also actively shape Van Nort’s teaching at York.

He leads the Electro Acoustic Orchestra (different from the project mentioned above) – a large ensemble of students that blends laptops, digital instruments, electronic processing and acoustic players. The group typically involves 25 to 35 students. Some are trained musicians, while others come from digital media or have never formally studied music at all. 

This diversity is intentional. “Democratizing music-making is near and dear to my heart,” Van Nort says.  

Attentive listening, not years of training, is the entry point. Students learn how to contribute meaningfully in a collaborative environment where sound, gesture and technology intertwine. 

Within this ensemble, Van Nort occasionally introduces machine agents to see how the group reacts, and how the AI learns to behave in a larger creative ensemble.  

He also incorporates “soundpainting,” a gestural vocabulary for composing live music. With a sweep of his hands, Van Nort can reshape an unfolding piece, cue performers or shift musical textures. When the AI agents respond to the same gestures, the boundary between composition, improvisation and programming dissolves. 
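As a loose sketch of that dissolving boundary, one could imagine a recognized soundpainting sign dispatching to both kinds of players at once. The sign names, messages and agent parameters below are invented for illustration and are not a real soundpainting vocabulary or the lab’s mapping.

```python
# Illustrative sketch: a recognized sign cues the humans and adjusts the
# machine agent in one step. All names here are hypothetical placeholders.
SIGN_ACTIONS: dict[str, tuple[str, tuple[str, float]]] = {
    "whole group": ("everyone ready", ("active", 1.0)),
    "long tone": ("sustain a single pitch", ("density", 0.1)),
    "fade out": ("decrescendo to silence", ("gain", 0.0)),
}


def cue_humans(message: str) -> None:
    print(f"[performers] {message}")  # stand-in for a stage display


def set_agent_param(name: str, value: float) -> None:
    print(f"[machine agent] {name} = {value}")  # stand-in for an agent API


def dispatch(sign: str) -> None:
    """Route one gesture to both the human ensemble and the machine agent."""
    if sign in SIGN_ACTIONS:
        message, (param, value) = SIGN_ACTIONS[sign]
        cue_humans(message)
        set_agent_param(param, value)


dispatch("long tone")
```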

“The ensemble becomes a living organism,” he says. “Machines are part of that ecosystem.” 

While Van Nort is often the public face of this work, he emphasizes that his research is collaborative. Graduate students working in the lab – such as current PhD student Rory Hoy and former master’s student Kieran Maraj – have made important contributions to the systems he now uses in research and teaching, including code development and interface design. 

For Van Nort, AI isn’t about efficiency or optimization. It’s about creating the conditions for deeper expression. 

“How can you enrich your own creative voice through your own data, your own way of working?” he asks. It’s a question that applies not just to musicians, but to writers, artists and anyone experimenting with new tools of expression. 

With files from Karen Martin-Robbins
