PAW 2023
AI and Audio Programming Languages
Marie Curie Library, INSA Lyon (France)
Dec. 2, 2023

The Programmable Audio Workshop (PAW) is a yearly, one-day FREE event gathering members of the programmable audio community around scientific talks and hands-on workshops. The 2023 edition of PAW was hosted by the INRIA/INSA/GRAME-CNCM Emeraude Team at the Marie Curie Library of INSA Lyon (France) on December 2nd, 2023. The theme was "Artificial Intelligence and Audio Programming Languages," with a strong focus on computer music languages (namely Faust, ChucK, and PureData). The main aim of PAW-23 was to give an overview of the various ways artificial intelligence is used and approached in the context of Domain-Specific Languages (DSLs) for real-time audio Digital Signal Processing (DSP).


Program Overview

Morning: Talks

Amphithéâtre Émilie du Châtelet
09:00
Registration
09:20
Opening Speech
09:30
Machine Learning with Faust and JAX
David Braun
(Princeton University, USA)
10:10
PureData and AI
Miller Puckette
(University of California San Diego, USA)
10:50
Coffee Break
11:20
(X)AI in Live Coding Environments: Pandora’s Dream
Celeste Betancur
(Stanford University, USA)
12:00
AI and Music Composition
Benoît Carré
(Artist, France)
12:40
Lunch Break

Afternoon: Workshops

Amphithéâtre Émilie du Châtelet
14:00
Faust and AI Workshop
David Braun
(Princeton University, USA)
15:00
PureData and AI Workshop
Miller Puckette
(University of California San Diego, USA)
16:00
Coffee Break
16:30
ChucK and AI Workshop
Celeste Betancur
(Stanford University, USA)
17:30
PARTAY!!

Program Details and Videos

09:20: Opening Speech

09:30: David Braun — Machine Learning with Faust and JAX

In most examples of modern machine learning, ML practitioners use Python to design complex mathematical models that can be auto-differentiated and then optimized via stochastic gradient descent toward some objective. Audio engineers, however, don't use Python because it lacks the elegant syntax and powerful libraries of an audio domain-specific language (DSL) such as Faust. We present a pipeline, one of the first of its kind, that bridges the gap between a library-rich audio DSL and a powerful auto-diff ML framework. This Faust-to-JAX pipeline allows audio engineers to auto-differentiate DSP functions that would have been too time-consuming to re-implement in Python or difficult to differentiate manually. Once Faust code is converted to JAX, the XLA compiler produces well-optimized code that scales well in cloud-computing systems. We present several early experiments showing our pipeline's potential to optimize audio-related objectives.
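To make the general idea concrete, here is a minimal sketch in plain Python (not the actual Faust-to-JAX pipeline; `fit_gain` is a made-up helper) of optimizing a DSP parameter by gradient descent on an audio objective: a gain `g` is fitted so that `g * x` matches a target signal in the mean-squared-error sense.

```python
# Illustrative sketch only: gradient descent on a single differentiable
# DSP parameter. In the real pipeline, Faust code is converted to JAX
# and auto-differentiated; here the gradient is written out by hand.

def fit_gain(x, y, lr=0.1, steps=200):
    """Fit gain g minimizing mean((g*x - y)^2) by gradient descent."""
    g = 0.0
    n = len(x)
    for _ in range(steps):
        # analytic gradient of the MSE loss: 2/n * sum((g*x_i - y_i) * x_i)
        grad = 2.0 / n * sum((g * xi - yi) * xi for xi, yi in zip(x, y))
        g -= lr * grad
    return g

x = [0.5, -1.0, 0.25, 0.8]
y = [1.0, -2.0, 0.5, 1.6]   # target is exactly 2 * x
g = fit_gain(x, y)           # converges toward 2.0
```

An auto-diff framework such as JAX computes the `grad` line automatically for arbitrarily complex DSP graphs, which is what makes differentiating whole Faust programs practical.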

10:10: Miller Puckette — PureData and AI

Almost twenty years ago, Davide Morelli introduced Pure Data bindings for the FANN machine learning library. The resulting Pd objects include a multi-layer perceptron object, ann_mlp, that can train and/or run MLPs natively on standard machine architectures (Intel or ARM). In this talk I'll demonstrate how to use ann_mlp to train additive synthesis models, either to imitate existing sounds or to allow low-dimensional interpolation of user-supplied synthetic sounds. This work was inspired by research by Wessel and Lee from the 1990s, as well as more recent work by Sam Pluta.
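As a rough illustration of the low-dimensional interpolation idea, here is a plain-Python sketch (not Pd; the `additive` and `interpolate` helpers are hypothetical): one control value morphs between two sets of partial amplitudes. In Pd, an ann_mlp network would learn this control-to-amplitude mapping from training examples rather than using the hand-written linear blend shown here.

```python
import math

def additive(partial_amps, f0, sr=44100, dur=0.01):
    """Sum harmonic sine partials with the given amplitudes."""
    n = int(sr * dur)
    return [sum(a * math.sin(2 * math.pi * f0 * (k + 1) * i / sr)
                for k, a in enumerate(partial_amps))
            for i in range(n)]

def interpolate(amps_a, amps_b, x):
    """x in [0, 1] morphs from timbre A to timbre B (linear blend)."""
    return [(1 - x) * a + x * b for a, b in zip(amps_a, amps_b)]

bright = [1.0, 0.8, 0.6, 0.4]    # strong upper partials
dark   = [1.0, 0.2, 0.05, 0.0]   # mostly the fundamental
mid    = interpolate(bright, dark, 0.5)
sig    = additive(mid, 440.0)    # 10 ms of the halfway timbre
```

Replacing `interpolate` with a trained MLP is what lets a single slider traverse a learned space of timbres instead of a straight line between two presets.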

11:20: Celeste Betancur — (X)AI in Live Coding Environments: Pandora’s Dream

Pandora's Dream is a versatile live coding playground that opens up a world of possibilities by integrating ChAI (ChucK for AI), OpenGL, and the ChucK language. Pandora's Dream is a use case for integrating simple, explainable machine learning and AI models into live performances. With this in mind, it is possible to analyze and extract audio features such as chroma, MFCCs, and spectral centroid (among many others) and then train models such as KNN, HMM, SVM, and MLP (as well as the new Wekinator object). Here, emphasis is placed on how the system complements the performance rather than on the algorithms themselves: the features, algorithms, and data used are not the most advanced or sophisticated, and in the end the model depends on the performer's decisions. Pandora's Dream is centered on abstract musical data rather than on generating audio at the sample level. Finally, it is important to note that all the training stages can be done and redone during the live performance to adjust, limit, or expand the model.
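For illustration, here is a minimal k-nearest-neighbour classifier of the kind described above, sketched in plain Python rather than ChucK/ChAI (the feature vectors and labels are invented; in Pandora's Dream the features would come from live audio analysis, and the training set could be rebuilt mid-performance):

```python
# Toy KNN over made-up [centroid, rms] feature pairs. Simple and fully
# explainable: the prediction is just a majority vote among the k
# training examples closest to the query.

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label) pairs."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

train = [([0.10, 0.20], "pad"), ([0.15, 0.25], "pad"),
         ([0.80, 0.90], "perc"), ([0.85, 0.80], "perc"),
         ([0.12, 0.30], "pad")]
print(knn_predict(train, [0.82, 0.85]))  # -> perc
```

Because the whole decision procedure fits in a dozen lines, a performer can reason about (and retrain) it live, which is the point the talk makes about favouring explainable models over sophisticated ones.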

12:00: Benoît Carré — AI and Music Composition

Artificial intelligence is useful in tools dedicated to music production, such as sound processing, source separation of instruments, or virtual instruments like Inspired by Nature in Ableton Live. A.I. was in the news when musicians posted "fake Drake songs" online. Voice cloning technology is the latest, most visible, and most spectacular advance. But what are its limitations? Is A.I., already very powerful for voice generation, equally effective for composition? What does it lack to produce eight really interesting bars from start to finish? What is the state of the art in text-to-music A.I.? What does it need to attract musicians en masse? And what about database annotation? What about the results it offers in subsequent interaction? These are all questions I ask myself as I experiment with these tools. I'll share a few examples that illustrate my explorations, and we can discuss the limitations and potential they inspire.



14:00: David Braun — Faust and AI Workshop

Hands-on workshop following up on the tools presented during the morning session.

15:00: Miller Puckette — PureData and AI Workshop

Hands-on workshop following up on the tools presented during the morning session.

16:30: Celeste Betancur — ChucK and AI Workshop

Hands-on workshop following up on the tools presented during the morning session.


About the Speakers

David Braun

David Braun is a first-year Computer Science Ph.D. student at Princeton University. David has a prior background in software development of real-time computer graphics and audio. After graduating from Brown University in 2014, he used TouchDesigner to develop interactive art installations in his hometown of Chicago for Leviathan Design, which later became Envoy Inc. In 2020, David ventured into audio by entering the MST program at Stanford University in affiliation with the Center for Computer Research in Music & Acoustics. There he learned about machine learning and Faust. During his Ph.D. research, he hopes to remain an interdisciplinary researcher who uses and builds tools such as Faust that push the boundaries of creativity.

Homepage: https://dirt.design/portfolio/

Miller Puckette

Miller Puckette is known as the creator of the Max and Pure Data real-time computer music software environments. As an MIT student he won the Putnam mathematics competition in 1979. He received a PhD from Harvard University in 1986. He was a researcher at the MIT Media Lab from its inception until 1986, then at IRCAM (Paris, France), where he is now a visiting researcher, and is distinguished professor emeritus at the University of California, San Diego. He has been a visiting professor at Columbia University and the Technical University of Berlin, and has received two honorary degrees, the SEAMUS award, and the 2023 Silver Lion of the Venice Biennale Musica.

Puckette has performed widely in venues including Centre Acanthes, Carnegie Hall, the Pulitzer Arts Foundation, the Ojai Music Festival, Ars Electronica, and a cistern beneath Guanajuato, Mexico. His installation, Four Sound Portraits, was shown at the 2016 Kochi-Muziris Biennale.

Celeste Betancur

Celeste Betancur is a multi-instrumentalist musician with a professional degree in guitar from Berklee College of Music and a Master's in digital arts. She is currently working towards her PhD at the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford University. She develops human-machine interfaces, especially programming tools and coding environments for musical expression. With these tools she has performed live audiovisual sets in more than 20 countries.

Benoît Carré

Benoit Carré, also known as SKYGGE, stands at the forefront of musicians who are exploring the creative possibilities of artificial intelligence (A.I.). Since 2015, he has been collaborating with research teams (Sony CSL, CTRL Spotify), assisting them in the development of A.I. prototypes specifically designed for musicians. His work serves as a bridge between innovation and the realm of pop music creation. His musical experiments with the prototypes give rise to artistic projects and new concepts, such as orchestrating a cappella folk songs from the American repertoire using a tool that is currently in the development phase.

2016: "Daddy's Car," a song composed with Flow Machines in the style of The Beatles
2018: "Hello World," a collaborative album (including Stromae), making SKYGGE one of the first artists to create pop music entirely using A.I. technology.
2019: "American Folk Song," five a cappella folk songs orchestrated with A.I.
2022: Benoit Carré, along with his partner Céline Garcia, co-founded the label Puppet Master. Under this label, his album "Melancholia" was released, and a live performance titled "Interface Poetry" was produced in collaboration with digital artists OYE.
2023: Duo with Grimes A.I.
https://linktr.ee/skyggemusic

Bonus Event: Faust/FPGA Workshop
on Dec. 1, 2023 @ CITI Lab (Lyon)

Experimenting With the Faust to FPGA Compilation Flow Using Grid5000

The goal of this hands-on workshop is to learn the fundamentals of FPGA programming for real-time audio Digital Signal Processing (DSP) applications. It focuses on the Syfala toolchain: the first open-source audio DSP compiler targeting FPGAs using the Faust programming language.

Field-Programmable Gate Arrays (FPGAs) provide unmatched low audio latency and computational power. In many cases, they are a better fit for real-time audio processing applications than "traditional" CPUs. These embedded platforms can easily handle audio DSP programs with hundreds of channels while guaranteeing very low latency, allowing for the design of systems with unmatched performance and unique features for spatial audio, noise cancelling, active control of room acoustics, etc. However, programming FPGAs is very complex and out of reach of non-specialized engineers as well as most of the audio community. Syfala uses the Faust compiler together with the Xilinx HLS tool to solve this issue and program Digilent Zybo FPGA boards. Since installing the Xilinx tools is very complex, we propose to use the Grid5000 platform, which allows us to provide pre-configured remote machines with the Xilinx tools and Syfala installed on them.

This workshop is associated with PAW 2023 which takes place the following day.

PLEASE NOTE: This workshop involves non-trivial hands-on tutorials, including an SSH connection to Grid5000. Please make sure to read the technical instructions and prerequisites below before registering. For any information, please contact us at paw@grame.fr.

Practical Information

Program

Technical Instructions and Prerequisites

Although Syfala's goal is to facilitate FPGA programming for audio applications, using it requires technical skills in programming and UNIX systems.