About PAW 2021, December 18, Lyon [videos]

Programmable Audio Workshop on Procedural Audio: instead of playing recorded sounds in video games, procedural audio produces sounds algorithmically through real-time synthesis methods. These models can be controlled by various parameters, for example from the game engine in response to the player's actions. The six talks and four workshops of PAW 2021 offer a unique opportunity to discover procedural audio in relation to video game engines!

PAW is completely free, but the number of seats is limited. Register as soon as possible at PAW 2021 REGISTRATION.
Special thanks to Gabriel Malgouyard for his help in preparing this edition of PAW.

[All video recordings of the PAW 2021 talks and workshops are now available on YouTube]

Event Location

UCLy-Université Catholique de Lyon
Campus Saint-Paul
10 Place des Archives, 69002 Lyon


Program Overview

Morning: Talks

Amphithéâtre R. Mouterde
09:00
Registration
09:30
Welcome
Yann Orlarey
(Scientific Director, GRAME CNCM -- Lyon, France)
09:50
Audio in Unreal with Faust
Ethan Geller
(Audio Research Software Engineer, Meta/Facebook, USA)
10:10
Video Game Engine as Media Creation Swiss Army Knife
Michael Bolufer and Gabriel Malgouyard
(Technical Director at Plip! Animation and Studio Manette; Software Artisan, Realtime Multimedia Projects)
10:30
Coffee Break
11:00
Physics Based Sonic Interactions
Stefania Serafin
(Professor of Sonic Interaction Design, Aalborg University)
11:20
Virtual Musical Instrument Design for the 21st Century
Rob Hamilton
(Head of the Department of Arts, Rensselaer Polytechnic Institute, USA)
11:40
Mesh2faust: From 3D Meshes to Faust Physical Models
Romain Michon
(Faculty Researcher at INRIA (Emeraude team), Associate Researcher at GRAME)
12:00
Spatial audio in Unity for an interactive immersive space
David-Alexandre Chanel
(THEORIZ / Augmenta, Lyon, France)
12:20
Lunch Break

Afternoon: Workshops

Amphithéâtre R. Mouterde
14:00
Procedural Audio with Faust
Romain Michon
(Faculty researcher at INRIA, Associate Researcher at GRAME)
15:00
Physics Based Sonic Interactions in Practice
Stefania Serafin
(Professor of Sonic Interaction Design, Aalborg University)
16:00
Coffee Break
16:30
Procedurally Spawned Faust Synths in Unreal
Ethan Geller
(Audio Research Software Engineer, Meta/Facebook, USA)
17:30
Building Interactive Procedural Music Systems for Unreal Engine 4
Rob Hamilton
(Head of the Department of Arts, Rensselaer Polytechnic Institute, USA)
18:30
END

Program Details

09:00-09:30, Registration

Morning Talks (Amphithéâtre R. Mouterde)

09:30-09:50, Welcome

Yann Orlarey (Scientific Director, GRAME CNCM -- Lyon, France)

09:50-10:10, Audio in Unreal with Faust [video]

Ethan Geller (Audio Research Software Engineer, Meta/Facebook, USA)

An introduction to audio in Unreal and how it can interact with Faust.

10:10-10:30, Video Game Engine as Media Creation Swiss Army Knife [video]

Michael Bolufer (Technical Director at Plip! Animation and Studio Manette) and Gabriel Malgouyard (Software Artisan, Realtime Multimedia Projects)

From fully featured AAA game engines to open-source tool libraries, video game tools of all kinds now constitute a rich and varied ecosystem. Many of these tools can be diverted from their original purpose and used as a basis for real-time media creation, especially audio. The animation studio Studio Manette demonstrates this daily with a production pipeline built entirely on typical video game patterns and tools.

10:30-11:00, Coffee Break

11:00-11:20, Physics Based Sonic Interactions [video]

Stefania Serafin (Professor of Sonic Interaction Design, Aalborg University)

In this talk I will provide an overview of the physics-based simulations we have developed in recent years at the Multisensory Experience Lab at Aalborg University in Copenhagen, with applications in cultural heritage and technologies for people in need.

11:20-11:40, Virtual Musical Instrument Design for the 21st Century [video]

Rob Hamilton (Head of the Department of Arts, Rensselaer Polytechnic Institute, USA)

Over the past decade, consumer-based virtual reality hardware and software platforms have become significantly more affordable and accessible to the general public. At the same time, virtual and augmented reality development toolkits grounded in game and mobile application design paradigms have similarly become more user friendly, opening up opportunities for artists, musicians and creatives of all backgrounds to help shape the look, feel and, perhaps most importantly, the sound of the ‘metaverse’ writ large. This talk will explore the role of musicians and digital luthiers alike in the field of Musical XR, targeting the creation and composition of dynamic and procedural real-time musical instruments and systems.

11:40-12:00, Mesh2faust: From 3D Meshes to Faust Physical Models [video]

Romain Michon (Faculty Researcher at INRIA (Emeraude team), Associate Researcher at GRAME)

Tools like faust2unity greatly facilitate the use of Faust-written synthesizers and audio effects in Virtual Reality environments for procedural audio applications. Tighter connections can be established between what is seen and what is heard in these environments through the use of physical modeling. For instance, mesh2faust can convert a 3D mesh designed in any CAD software (e.g., SolidWorks, Blender, Rhino, OpenSCAD) or VR environment into a modal physical model implemented in Faust through finite element analysis. In this presentation, after giving some general background on the use of physical modeling in the context of procedural audio and VR, we demonstrate how 3D graphical objects can be turned into ready-to-use audio physical models using mesh2faust.
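To give a sense of what a modal physical model computes once the modes have been extracted, here is a minimal Python sketch (not the mesh2faust implementation itself, and the mode data below is hypothetical): a strike excites a bank of exponentially decaying sinusoids, one per mode, each defined by a frequency, a gain, and a T60 decay time.

```python
import math

def modal_strike(freqs, gains, t60s, sr=44100, dur=1.0):
    """Render a strike on a modal model: each mode is an
    exponentially decaying sinusoid (frequency, gain, T60)."""
    n = int(sr * dur)
    out = [0.0] * n
    for f, g, t60 in zip(freqs, gains, t60s):
        decay = math.exp(-6.91 / (t60 * sr))  # per-sample decay giving -60 dB after T60
        amp = g
        for i in range(n):
            out[i] += amp * math.sin(2 * math.pi * f * i / sr)
            amp *= decay
    return out

# Hypothetical mode data, standing in for what finite element
# analysis of a real mesh would produce:
samples = modal_strike([220.0, 455.0, 690.0], [1.0, 0.5, 0.25], [0.8, 0.5, 0.3])
```

In practice mesh2faust emits the mode list as Faust code; the sketch only illustrates the synthesis side of the idea.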

12:00-12:20, Spatial audio in Unity for an interactive immersive space [video]

David-Alexandre Chanel (THEORIZ / Augmenta, Lyon, France)

David-Alexandre will talk about the challenges and constraints of designing and implementing real-time spatialized sound in the Unity engine.

12:20-14:00, Lunch Break

Afternoon Workshops

14:00-15:00, Procedural Audio with Faust [video]

Romain Michon (Faculty Researcher at INRIA (Emeraude team), Associate Researcher at GRAME)

The workshop is a practical introduction to procedural audio and Faust programming. Different sound synthesis techniques will be explored, in particular using the new Faust physical modeling libraries. The workshop does not require installing any software other than a recent web browser such as Firefox or Chrome. All examples will be programmed directly in the browser using the Faust IDE (https://faustide.grame.fr).
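As a taste of the kind of program covered, here is a minimal Faust example that can be pasted directly into the Faust IDE: a sine oscillator whose frequency is exposed as a slider, the simplest form of a parameter-controlled procedural sound.

```faust
import("stdfaust.lib");
freq = hslider("freq", 440, 50, 2000, 0.01); // controllable parameter
process = os.osc(freq) * 0.1;
```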

15:00-16:00, Physics Based Sonic Interactions in Practice [video]

Stefania Serafin (Professor of Sonic Interaction Design, Aalborg University)

In the last decades, several physical modelling techniques have been developed, such as waveguide models, mass-spring simulations, modal synthesis, and finite difference schemes, to name a few. These techniques have already been implemented in software platforms such as Max, Faust, JUCE, and SuperCollider, as well as in commercial products such as SWAM by Audio Modeling. In this workshop we will look at recent developments in modelling musical instruments, discussing the advantages and disadvantages of the different techniques. We will examine available tools and choose one case study to examine in depth.
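As a hedged, minimal illustration of one of these families, the classic Karplus-Strong plucked string (a simple relative of the digital waveguide) can be sketched in a few lines of standard-library Python: a noise burst circulates in a delay line whose length sets the pitch, with a two-point averaging filter in the feedback loop damping the high modes.

```python
import random

def karplus_strong(freq, sr=44100, dur=1.0, seed=0):
    """Karplus-Strong plucked string: a noise burst fed into a
    delay line with a two-point averaging (lowpass) feedback loop."""
    random.seed(seed)
    n = max(2, int(sr / freq))  # delay-line length sets the pitch
    line = [random.uniform(-1, 1) for _ in range(n)]  # initial excitation
    out = []
    for _ in range(int(sr * dur)):
        s = line.pop(0)
        out.append(s)
        line.append(0.5 * (s + line[0]))  # averaging damps high modes
    return out

string = karplus_strong(220.0, dur=0.5)
```

The same idea, with better filters and excitation models, underlies the waveguide string models found in the platforms listed above.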

16:00-16:30, Coffee Break

16:30-17:30, Procedurally Spawned Faust Synths in Unreal [video]

Ethan Geller (Audio Research Software Engineer, Meta/Facebook, USA)

In this workshop, we will add a Faust `.dsp` object as an audio source in Unreal, procedurally spawn instances of it, and procedurally set parameters on those instances to create a diffuse, procedurally generated audiovisual experience. Before the workshop, it is recommended that you install Unreal Engine 5 and have some way to compile `faust2api` on your computer.

17:30-18:30, Building Interactive Procedural Music Systems for Unreal Engine 4 [video]

Rob Hamilton (Head of the Department of Arts, Rensselaer Polytechnic Institute, USA)

For artists and designers seeking to build software-based dynamic and procedural audio and music systems, Epic Games’ Unreal Engine 4 offers a robust and battle-tested codebase optimized for real-time multi-user interaction. Capable of rendering visual assets with astonishing clarity and realism, Unreal Engine 4 also boasts native Open Sound Control (OSC) support, as well as a suite of native procedural audio components for real-time audio synthesis. This workshop will explore the use of Unreal Engine 4 for building interactive musical systems using Unreal’s Blueprint visual programming environment, OSC, and Unreal’s own synthesis components.
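As a hedged illustration of the OSC wire format such a system exchanges (address and type-tag strings are null-terminated and padded to 4-byte boundaries, followed by big-endian arguments), here is a minimal encoder using only the Python standard library; the address `/synth/freq` is a made-up example, not an Unreal convention.

```python
import struct

def osc_pad(s: bytes) -> bytes:
    """Null-terminate and pad an OSC string to a 4-byte boundary."""
    s += b"\x00"
    return s + b"\x00" * (-len(s) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Encode an OSC message whose arguments are all float32."""
    type_tags = "," + "f" * len(floats)
    msg = osc_pad(address.encode()) + osc_pad(type_tags.encode())
    for x in floats:
        msg += struct.pack(">f", x)  # big-endian float32 argument
    return msg

# A packet a Blueprint-side OSC server could receive over UDP:
packet = osc_message("/synth/freq", 440.0)
```

In practice the Unreal OSC plugin handles this encoding for you; the sketch only shows what travels over the wire.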


Speakers

Michaël Bolufer

Michaël Bolufer is a director, writer, and technical director at Plip! Animation and Studio Manette. Michaël has been working for almost 20 years between the animation and video game industries. In this "industrial in-between", he created in 2016 "Mr. Carton", the first series made with a game engine. Since then, he has participated in many series and VR film projects as a "real-time" technical art director. In 2020, he co-founded with Thibault Noyer the production company Plip! Animation and Studio Manette (in association with Caribara Animation), a studio specialized in the production of real-time films. He also continues to write and direct.

David-Alexandre Chanel

David-Alexandre Chanel is the co-founder of the multi-award-winning art & technology studio THEORIZ and of the creative tracking technology company Augmenta. With a background in conservatoire music studies and signal processing research, David-Alexandre has been creative director and R&D director at THEORIZ for ten years, leading ambitious digital art projects involving advanced technology, generative visuals, and music.

Ethan Geller

Ethan Geller met Romain, Yann, and Stéphane in 2013 and has considered Faust his favorite audio programming language ever since. Currently, he is a software engineer in the Reality Labs Audio Research division of Meta. Previously, he worked on audio rendering systems for Fortnite, Unreal Engine, Dolby Laboratories, and PlayStation. Ethan has also written chapters on spatial audio topologies and multithreaded audio processing for Game Audio Programming 2 and Game Audio Programming 3.

Rob Hamilton

Composer and researcher Rob Hamilton explores the converging spaces between sound, music and interaction. His creative practice includes mixed and virtual-reality performance works built within fully rendered networked game environments, procedural music engines and mobile musical ecosystems. His research focuses on the cognitive implications of sonified musical gesture and motion and the role of perceived space in the creation and enjoyment of sound and music. Dr. Hamilton received his Ph.D. from Stanford University’s Center for Computer Research in Music and Acoustics (CCRMA) and currently serves as Associate Professor of Music and Media and Head of the Department of Arts at Rensselaer Polytechnic Institute in Troy, NY, USA.

Gabriel Malgouyard

Gabriel Malgouyard is a software artisan, contributing to real-time multimedia projects. From sound design tools crafters LeSound to AAA video games studio such as Arkane, he learned the trade of game engines - Unity, Unreal and others - by taking part in traditional video games and VR pieces development. His interest now lies in widening the scope of these tools as actual media creation platforms to reach new domains, from films production to interactive educational models and more.

Romain Michon

Romain Michon is a faculty researcher at INRIA in the Emeraude team, an associate researcher at GRAME -- Centre National de Création Musicale in Lyon (France), and a lecturer at the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford University (USA). He has been involved in the development of the Faust programming language since 2008 and is part of the core Faust development team at GRAME. Beyond that, Romain's research interests include embedded systems for real-time audio processing, Human-Computer Interaction (HCI), New Interfaces for Musical Expression (NIME), and physical modeling of musical instruments. He is currently leading the FAST project (https://fast.grame.fr), which aims to facilitate the programming of FPGA platforms for ultra-low-latency real-time audio signal processing.

Stefania Serafin

Stefania Serafin is Professor of Sonic Interaction Design at Aalborg University in Copenhagen. She is the President of the Sound and Music Computing Association, project leader of the Nordic Sound and Music Computing network, and leads the Sound and Music Computing master's programme at Aalborg University. Stefania received her PhD, entitled “The sound of friction: computer models, playability and musical applications”, from Stanford University in 2004, supervised by Professor Julius O. Smith III. Her research focuses on sonic interaction design and sound for virtual and augmented reality, with applications in health and culture.

Registration/Contact

Participants must register online: PAW 2021 REGISTRATION. Registration is free within the limit of available seats.

Feel free to send your questions to paw_at_grame_dot_fr.