Club Concert 1C
Immerse yourself in a 20.8-channel sound world: in the Production Lab of the Ligeti Center, neural synthesis, artificial intelligence, and interactive visuals merge into an immersive live experience. International artists present innovative prototypes—from AI-augmented string instruments to dynamic graphic scores.
Program Overview
Zwischenheit
Riccardo Ancona
Knitting
Brian Lindgren
Sonic Memories: A Live Coding Performance with Machine-Learned Sound Fragments
Riccardo Mazza
Gradient Noise: Animated Scores with Corresponding Data Streams
John C.S. Keston
Fluid Ontologies
Nicola Leonard Hein and Viola Yip
On The Edge
Kasey Pocius
Scarittera – Subterranean Eruptions of Sonic Memory
Danilo Randazzo
About the pieces & artists
Aaron Einbond: Cosmologies 3
Cosmologies 3 situates the listener inside a virtual grand piano to experience its secret inner life. The piano interior, recorded with a spherical microphone array, is complemented by three-dimensional (3-D) field recordings of Paris’s Place Igor Stravinsky. These recordings are highlighted and underlined with computer synthesis using artificial intelligence (AI) to reproduce the spatial presence of acoustic instruments, while the microcosm of the piano’s inner space expands larger-than-life.
Cosmologies 3 is part of a modular series of works that use AI to inform sound spatialization. The situated spatial presence of musical instruments has been well studied in the fields of acoustics and music perception research, but so far has not been the focus of research on AI and music. Cosmologies seeks to “re-embody” recorded sound using data derived from natural acoustic phenomena in an immersive sonic environment where real and virtual sources blend seamlessly. Cosmologies 3 for Ambisonic fixed media may be performed on its own or directly following Cosmologies for piano and 3-D electronics, with the fixed media work beginning as the live performer leaves the stage. Although the human–AI interaction in the fixed work is no longer live, it remains as a trace of the work’s creation process, refracting the human performer’s presence behind the spatial audio recordings.
Cosmologies is among the first works to connect audio descriptor analysis and corpus-based synthesis to 3-D spatialization using Higher-Order Ambisonics (HOA) and machine learning (ML). At the same time, it is the first project connecting the computer programs Max, Python, and OM# (Bresson et al. 2017) with the associated packages Spat (Carpentier 2018) and MuBu (Schnell et al. 2009). These software tools draw upon natural acoustic phenomena as source material for spatial sound from two sources. One is a 3-D microphone array, the 32-channel EM32 Eigenmike by mh acoustics (https://mhacoustics.com/products), used to capture 3-D piano samples as well as ambient field recordings. The other is generative spatial sound synthesis produced through ML trained on an existing large database of radiation measurements for acoustic instruments (Shabtai et al. 2017; Weinzierl et al. 2017). This database serves as a training set for ML models that control spatially rich 3-D patterns for electronic synthesis. These two sources of spatial sound are intentionally overlapped and fused so the listener cannot easily distinguish or segregate them. The aesthetic goal is to create a setting for curious and detailed listening, where one may not discern the “sleight of hand” between the superposed 3-D spaces of the sample recordings and computer synthesis.
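As a rough sketch of the descriptor-driven corpus stage, the following Python fragment uses toy data and first-order Ambisonics for brevity (the piece itself works with HOA via Spat in Max, Python, and OM#; all names and values here are illustrative, not the work’s actual patch). Each corpus unit is tagged with a descriptor vector and a recorded direction; playback selects the unit nearest a live descriptor target and encodes it at its stored direction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy corpus: 200 units, each tagged with a 2-D descriptor
# (spectral centroid in Hz, RMS) and a recorded direction
# (azimuth, elevation in radians) from the 3-D microphone array.
descriptors = rng.uniform([200.0, 0.0], [4000.0, 1.0], size=(200, 2))
directions = rng.uniform([-np.pi, -np.pi / 4], [np.pi, np.pi / 4], size=(200, 2))

def nearest_unit(target, corpus):
    """Index of the corpus unit whose descriptors best match the target."""
    mu, sd = corpus.mean(axis=0), corpus.std(axis=0)  # normalize features
    dist = np.linalg.norm((corpus - mu) / sd - (target - mu) / sd, axis=1)
    return int(np.argmin(dist))

def foa_encode(mono, azimuth, elevation):
    """Encode a mono unit at a direction as first-order Ambisonics (B-format)."""
    w = mono / np.sqrt(2.0)                           # omnidirectional component
    x = mono * np.cos(azimuth) * np.cos(elevation)
    y = mono * np.sin(azimuth) * np.cos(elevation)
    z = mono * np.sin(elevation)
    return np.stack([w, x, y, z])

# A live descriptor target selects the matching unit, played at its direction.
i = nearest_unit(np.array([1500.0, 0.3]), descriptors)
grain = rng.standard_normal(2048)                     # stand-in for unit audio
bformat = foa_encode(grain, *directions[i])
print(i, bformat.shape)                               # e.g. (4, 2048)
```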
About the artist
Aaron Einbond’s work explores the intersection of instrumental music, field recording, sound installation, and interactive technology. He has released the portrait albums Cosmologies with the Riot Ensemble, Without Words with Ensemble Dal Niente, and Cities with Yarn/Wire and Matilde Meireles. His awards include a Giga-Hertz Förderpreis, a Guggenheim Fellowship, and artistic-research residencies at IRCAM and ZKM. He teaches music composition and technology at City St George’s, University of London.
Brian Lindgren: Knitting
Knitting is a new work for the EV, an augmented bowed string instrument that integrates IRCAM’s RAVE (Realtime Audio Variational autoEncoder) neural synthesis model. The composition explores how machine learning can extend the timbral vocabulary of a traditional gestural practice—not by imposing external sonic material, but by folding the instrument’s own acoustic identity back through a neural lens.
The EV combines a 3D-printed body with four infrared optical pickups whose signals are processed by a Bela board and transmitted to a laptop running Pure Data. Each string controls an independent synthesis engine comprising convolution, physical modeling, granular processing, reverb, and ambisonic spatialization. The recent addition of RAVE introduces a self-referential pathway: the model was trained on four hours of the EV’s own recordings, creating a system that listens to itself through learned representations of its sonic history.
Central to this integration is a control strategy that maps performance descriptors—fundamental frequency, amplitude, and spectral centroid—to specific dimensions of the model’s eight-dimensional latent space. By constraining each modulation source to a single latent dimension, the relationship between gesture and neural response becomes legible: a shift in bow pressure or position translates into a navigable timbral trajectory rather than an opaque transformation. This approach distinguishes the EV from other RAVE-integrated instruments, which often emphasize loop-based or tabletop interfaces rather than continuous bowed-string control.
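Schematically, that one-descriptor-per-dimension strategy might look like the sketch below (assumed ranges and names; the commented decode() call stands in for the RAVE decoder rather than reproducing its actual interface):

```python
import numpy as np

LATENT_DIM = 8

# Assumed normalization ranges for each descriptor (hypothetical values).
RANGES = {"f0": (65.0, 880.0), "rms": (0.0, 1.0), "centroid": (200.0, 4000.0)}
# One latent dimension per modulation source keeps gesture-to-timbre legible.
TARGET_DIM = {"f0": 0, "rms": 1, "centroid": 2}

def descriptors_to_latent(f0, rms, centroid, depth=2.0):
    """Write each normalized descriptor into its own latent dimension.

    Unmapped dimensions stay at zero (the prior mean), so only the three
    assigned dimensions move with the bow.
    """
    z = np.zeros(LATENT_DIM)
    for name, value in (("f0", f0), ("rms", rms), ("centroid", centroid)):
        lo, hi = RANGES[name]
        unit = np.clip((value - lo) / (hi - lo), 0.0, 1.0)
        z[TARGET_DIM[name]] = depth * (2.0 * unit - 1.0)  # scale to [-depth, depth]
    return z

z = descriptors_to_latent(f0=220.0, rms=0.4, centroid=1800.0)
# audio = rave_model.decode(z)  # hypothetical call into the neural decoder
print(z)
```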
Knitting treats this latent space as a landscape of sonic possibility, each dimension a potential resonance between physical gesture and synthesized response. The compositional process is less one of arranging fixed materials than of cultivating emergent textures—drawing out sonic filaments, crossing and interlacing them, balancing tensions across the tapestry. The neural model functions as a meta-resonator: a parallel pathway that refracts the instrument’s timbral identity through an alternate causal route, revealing aspects of its sound that remain latent in conventional electroacoustic processing.
The work demonstrates how neural synthesis can be embedded within a hybrid instrument ecology, extending expression beyond pitch and amplitude to make performance descriptors direct agents of timbral transformation. By grounding latent navigation in the acoustic features of bowed-string technique, Knitting positions machine learning not as a replacement for embodied practice but as an expansion of its expressive range.
About the artist
Brian Lindgren (b. 1983) is a composer, researcher, violist, and instrument builder whose work explores the convergence of acoustic performance and digital synthesis through the EV, a hybrid string instrument integrating lutherie and embedded computing.
His compositions and research have been featured at the International Computer Music Conference (ICMC), the New Interfaces for Musical Expression (NIME) conference, the Conference on Neural Information Processing Systems (NeurIPS), the Society for Electro-Acoustic Music in the United States (SEAMUS), the IRCAM Forum, and the International Conference on Auditory Display (ICAD), and published in Organised Sound. His work has been performed by ensembles including HYPERCUBE, LINÜ, Popebama, and Tokyo Gen’on Project.
The EV was a finalist in the 2026 Guthman Musical Instrument Competition and was used to compose ‘two tales from the shadows of the grid’, which won first place at the 3rd Workshop on AI Music Generation Competition at IEEE Big Data 2025.
Lindgren holds an MFA in Sonic Arts from Brooklyn College (Subotnick, Geers, Gimbrone), a BA from the Eastman School of Music (Graham), and is pursuing a PhD at the University of Virginia (Burtner).
Riccardo Mazza: Sonic Memories: A Live Coding Performance with Machine-Learned Sound Fragments
Drawing from Henri Bergson’s concept of *durée* and Deleuze’s rhizomatic models, “Sonic Memories” reimagines memory not as a linear chronological archive, but as a stratified field of coexisting planes. In this live coding performance, autobiographical sound fragments—from mechanical gears to lagoon soundscapes and fragile voices—are liberated from their timeline and reorganized by an autoencoder into a non-hierarchical, navigable map.
The performance begins with the simple act of loading a personal audio file—a field recording from a journey, a voice memo, a musical fragment—into a computational system that immediately begins to analyze and reorganize these sonic memories according to its own logic.
On stage, the audience sees everything: the code acting in real-time, a visual map where memories become points in space, oscilloscopes showing the transformation of sound waves. This transparency is essential—there is no mystification of the technological process, but rather an invitation to witness the negotiation between human remembering and algorithmic interpretation.
The performer navigates this latent space using SuperCollider and FluCoMa, triggering both the original “concrete” traces and their AI-generated “distorted echoes.” The algorithm serves not as an autonomous agent, but as a refracting lens, forcing the performer to negotiate between faithful recall and neural hallucination. The result is a fragile dialogue between the fixity of the past and the malleability of the present, exploring how computational tools can actualize memory as a living, reconstructive act.
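In spirit (though not in the performance’s actual SuperCollider/FluCoMa code), the mapping stage can be pictured as a small autoencoder that compresses per-fragment feature vectors to two dimensions, producing the kind of navigable memory map the audience sees. A toy sketch with synthetic features and a tied-weights linear autoencoder:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for per-fragment audio features (e.g. averaged MFCCs):
# 300 sound fragments in three loose "memory" clusters, 13 features each.
centers = rng.standard_normal((3, 13)) * 3.0
features = np.vstack([c + rng.standard_normal((100, 13)) for c in centers])
X = (features - features.mean(axis=0)) / features.std(axis=0)

# Tied-weights linear autoencoder: encode z = X @ W, decode X_hat = z @ W.T.
W = rng.standard_normal((13, 2)) * 0.1
lr = 1e-3
for _ in range(2000):
    z = X @ W                      # 2-D latent coordinates: the "memory map"
    err = z @ W.T - X              # reconstruction error
    # Gradient of the squared reconstruction error w.r.t. the tied weights.
    grad = 2.0 / len(X) * (X.T @ err @ W + err.T @ X @ W)
    W -= lr * grad

map2d = X @ W                      # each fragment becomes a point on the map
print(map2d.shape, float((err ** 2).mean()))
```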
The work asks: How do we perform memory in an age of machine learning? Not by having machines remember for us, but by creating dialogues with computational systems that reorganize our experiences according to their own logic, forcing us to rediscover our own histories through unfamiliar maps.
About the artist
Riccardo Mazza (Turin, 1963) is a composer, multimedia artist, and faculty member at the Scuola di Alto Perfezionamento Musicale di Saluzzo. He collaborates with SMET (Electronic Music School) at the Conservatorio di Torino and with the Conservatorio Ghedini in Cuneo, and is internationally recognized for his research in psychoacoustics and spatial audio.
In 1997 he began a collaboration with Franco Battiato, focusing on new technologies for sound. Between 1999 and 2000 he created the Renaissance SFX library, the first Dolby Surround-encoded collection of spatial effects and field recordings for cinema and television. He later developed SoundBuilder, software for object-based surround design presented at AES 2003 in San Francisco, which anticipated Dolby Atmos.
In 2001 he founded Interactive Sound, a research studio dedicated to multimedia exhibitions and immersive installations, and in 2003 he patented a psychoacoustic model of “sleep waves.” With Laura Pol he co-founded Project-TO (2015), an electronic and visual project that has released four albums and appeared at major festivals including TFF, TJF, Robot, and Share Festival.
Since 2018 he has directed Experimental Studios in Turin, one of Europe’s leading Dolby Atmos recording facilities. His current project, Sonic Earth, explores environmental sonification and algorithmic composition and has been presented internationally, including at ICMC 2025 in Boston, FARM/SPLASH 2026 in Singapore, SBCM 2025 in Brazil, and IEEE 2025 in L’Aquila.
John C.S. Keston: Gradient Noise: Animated Scores with Corresponding Data Streams
Since 2019 I have been composing animated graphic scores for ensembles and soloists. These generative works are projected for both the performers and the audience to experience. Custom software runs during the performance to create the computer graphics and geometric forms. Rules are established for how the forms are read, but improvisation and the emotional response of the performer still play an integral part in each piece. A fixed-media version of this work would not suffice, because it would lack the real-time, generative, and participatory aspects that create surprise and challenge for the performers.
More recently I began composing scores that not only generate animated visuals but also stream corresponding MIDI data that affects the timbre and signal processing of the electronic instruments used by the performers. The instruments are either hardware-based synthesizers or virtual instruments within a DAW such as Ableton Live. One of my recent compositions applies these streams of data to four layers of FM synthesis engines running within the Dirtywave M8, a technically advanced modern hardware tracker.
My newest work in progress, Gradient Noise, translates values generated by the Perlin noise algorithm into independent layers of seamless loops repeating at variable intervals. These loops are visualised as geometric forms, abstract visualisations, and evolving structures. Although aleatoric, the generated values can be tuned to range between slowly moving gradients and rapid, angular forms. When the sound and visuals are synchronized, the performer responds not only to the animation but also to the changes in the timbre of their instrument.
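As a hedged illustration of that translation (toy values, not the piece’s actual engine), classic 1-D Perlin gradient noise can be sampled at per-layer rates, slow rates yielding smooth gradients and fast rates angular motion, with the output rescaled to MIDI controller ranges:

```python
import numpy as np

rng = np.random.default_rng(42)
gradients = rng.uniform(-1.0, 1.0, size=256)   # one gradient per lattice point

def fade(t):
    """Perlin's smoothstep 6t^5 - 15t^4 + 10t^3: flat at the lattice points."""
    return t * t * t * (t * (t * 6.0 - 15.0) + 10.0)

def perlin1d(x):
    """Classic 1-D gradient noise; output roughly in [-0.5, 0.5]."""
    i = np.floor(x).astype(int)
    t = x - i
    n0 = gradients[i % 256] * t                 # contribution of left point
    n1 = gradients[(i + 1) % 256] * (t - 1.0)   # contribution of right point
    return n0 + fade(t) * (n1 - n0)

# Three layers sampled at different rates: slow rates give smoothly moving
# gradients, fast rates give rapid, angular motion. Values feed MIDI CCs.
frames = np.arange(400)                         # animation frames
for rate, cc in [(0.01, 74), (0.05, 71), (0.2, 16)]:
    values = perlin1d(frames * rate + cc * 100.0)   # offset decorrelates layers
    midi = np.clip((values + 0.5) * 127.0, 0, 127).astype(int)
    print(f"CC {cc}: {midi[:10]}")
```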
The debut of Gradient Noise will address the themes of Innovation, Translation, and Participation by rethinking the relationships between musicians and machines. By translating the properties of n-dimensional Perlin noise into a musical language, the piece presents a unified ecosystem with coordinated timbres and geometric forms. The innovation lies in generating a living environment that requires active participation and improvisation in contrast to static notation. Ultimately, the work presents a contemporary model for computer music where the performer does not simply follow a score, but negotiates a path through a responsive, multi-sensory experience.
About the artist
John C.S. Keston is an award-winning transdisciplinary artist reimagining how music, video art, and computer science intersect. His work both questions and embraces his backgrounds in music technology, software development, and improvisation, leading him toward unconventional compositions that convey a spirit of discovery and exploration through graphic scores, chance and generative techniques, analog and digital synthesis, experimental sound design, signal processing, and acoustic piano. Performers are empowered to use their phonomnesis, or sonic imagination, while contributing to his collaborative work. Keston founded the sound design resource AudioCookbook.org, which documents his projects and research.
John has spoken, performed, or exhibited original work at SEAMUS (2025), Radical Futures (2024), New Interfaces for Musical Expression (NIME 2022), the International Computer Music Conference (ICMC 2022), the International Digital Media Arts Conference (iDMAa 2022), International Sound in Science Technology and the Arts (ISSTA 2017–2019), Northern Spark (2011–2017), the Weisman Art Museum, the Montreal Jazz Festival, the Walker Art Center, the Minneapolis Institute of Art, the Eyeo Festival, INST-INT, Echofluxx (Prague), and Moogfest. In 2017 he was commissioned by the Walker Art Center to compose music for former Merce Cunningham dancers. He has appeared on more than a dozen albums, including solo and collaborative works.
Nicola Leonard Hein and Viola Yip: Fluid Ontologies
In “Fluid Ontologies”, Transsonic (Nicola Leonard Hein and Viola Yip) continue to expand their intermedial performance practice. For this project they developed laser feedback instruments, using lasers as sound sources and solar panels as microphones. With the incorporation of multichannel spatialization, Transsonic extends the spatial dimensions, sonically and visually, creating a unique audiovisual experience. The project explores and defines new concepts of the instrumentality of light in audio circuits, bringing space, bodies, and instruments together into a dynamic feedback system.
About the artists
Dr. Nicola L. Hein is a sound artist, guitarist, composer, researcher, programmer, and professor of Sound Arts and Creative Music Technology at the University of Music Lübeck.
He works with AI-assisted human-machine interaction, postdigital lutherie, intermedia, sound installations, augmented reality, network music, and spatial audio. His works have been realised in more than 30 countries, at festivals such as MaerzMusik, Sonica, and Experimental Intermedia.
Dr. Viola Yip is an experimental performer, sound artist and instrument builder.
Her work has been presented and supported by institutions including Stanford University, UC Berkeley, Harvard University, Cycling ’74 Expo, Hong Kong Arts Center, Academy of Media Arts Cologne, Academy of the Arts Berlin, KTH Royal Institute of Technology Sweden, Elektronmusikstudion EMS Stockholm, NOTAM Oslo, Arter Museum Istanbul, Serralves Museum of Contemporary Arts Porto, and the Pinakothek der Moderne in Munich.
violayip.com
Kasey Pocius: On The Edge
On the Edge is an audiovisual work for video, T-Stick, and surround sound. It explores sounds and images of objects often at the edges of our perception, as well as processing and results drawn from edge cases in musical algorithms and technology.
The piece consists of four interlayered vignettes exploring the behaviour and textural qualities of various edge- and peak-detection algorithms, which were used to create the fixed media. These files then form the corpus for the granular synthesis controlled by the T-Stick. Gestural data from the T-Stick is sent from Max to Ossia Score, where it is used to manipulate the treatment of the video clips in real time.
The technical aspects of the work consist of a fixed-media ambisonic file, with real-time manipulation of video clips (in Ossia Score) and multichannel granular synthesis (in Max) controlled by the T-Stick.
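As a rough illustration of the fixed-media preparation (synthetic audio and illustrative thresholds, not the actual patch), a simple envelope-based peak detector can mark transients, and the regions between detected peaks become candidate grains for the corpus:

```python
import numpy as np

rng = np.random.default_rng(7)
sr = 48000

# Synthetic test signal: quiet noise with a few sharp transients.
audio = 0.02 * rng.standard_normal(sr * 2)
for onset in [0.3, 0.8, 1.1, 1.6]:
    i = int(onset * sr)
    audio[i:i + 2000] += np.hanning(2000) * np.sin(2 * np.pi * 440 * np.arange(2000) / sr)

def envelope(x, win=512):
    """RMS envelope, one value per non-overlapping window."""
    n = len(x) // win
    return np.sqrt((x[: n * win].reshape(n, win) ** 2).mean(axis=1))

def peaks(env, thresh):
    """Indices where the envelope crosses above thresh (a simple edge detector)."""
    above = env > thresh
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

env = envelope(audio)
idx = peaks(env, thresh=3.0 * np.median(env))
# Slice the signal between detected peaks: these segments form the grain corpus.
bounds = idx * 512
grains = [audio[a:b] for a, b in zip(bounds[:-1], bounds[1:])]
print(f"{len(idx)} peaks -> {len(grains)} grains")
```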
About the artist
Kasey Pocius is a gender-fluid intermedia artist and researcher based in Montreal, teaching at Concordia and active with CIRMMT, IDMIL, LePARC, and GRMS. They create electroacoustic and audiovisual works that explore interactive electronics, spatial sound and collaborative improvisation, with pieces programmed globally from DIY spaces to Harvard.
