Club Concert 2C
Club Concert 2C invites you to an extraordinary sonic experience in the state-of-the-art Production Lab of the ligeti center. On a specialized 20.8-channel system, international artists unfold immersive sound worlds ranging from physical gesture to complex AI analysis.
Experience the synergy of historical depth and futuristic technology—an evening in which the audience quite literally immerses itself in sound.
Program Overview
Dinosaur, Glitched!
Fernando Lopez-Lezcano
Fause, Fause
Jules Rawlinson
Live ‘Shō’ Coding – Algorithmic Improvisation of Aitake Harmonies
Atsushi Tadokoro
Quiet Catastrophe Unleashed
Nicola Casetta
Again
Julian Green
Percepts (excerpt)
Doron Klant Sadja
Cosmologies 3
Aaron Einbond
About the pieces & the artists
Fernando Lopez-Lezcano: Dinosaur, Glitched!
This is another ditty to add to the Dinosaur Songbook, a music composition and performance project that started when the COVID pandemic kick-started a round of modular synthesizer building. This was a return to my roots, as I started my discovery of electronic sound by designing and building modular synths from scratch in the late ’70s and early ’80s.
“Carlitos” is the small Eurorack synth filled with modular goodies that will be used in this performance. It will be helped, as has become the norm, by the miniature Kastle, probably the best birthday present ever, and the smallest dinosaur I have in my herd. Carlitos houses an eclectic mix of analog, digital and hybrid modules that has been evolving over several years and many concerts.
This round of noises comes courtesy of continued experiments coding in the Droid voltage-processor language. One addition has been an implementation of Rob Hordijk’s Rungler circuit. This is a “low-frequency” Rungler, as the Droid is not fast enough to process voltages at audio rates; while it will never sound like the original, it does provide a never-ending cornucopia of chaotic behaviors. As it is software, many additional features were added, in part to further confuse the performer, who has even more knobs and controls to handle with the same brain power as before. Many other sources of sound make up the piece, from complex oscillators with multiple feedback paths, to fingers scratching a built-in microphone, to an emulation of the Radio Music module with additional sampled voices. Various granular synthesis systems play a constant role in the sound universe of the piece.
As always, all sounds are piped through a Linux computer running SooperLoopy, a SuperCollider program written by the composer that spatializes sounds dynamically in real time using HOA (Higher-Order Ambisonics), and includes asynchronous loopers with a granular synthesis core that can sample, replay and process more screaming dinosaur layers than you can count.
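For readers curious how a Rungler behaves, here is a loose software sketch of Hordijk’s circuit idea: one oscillator clocks the other oscillator’s sign bit into a shift register, and a small DAC on the register’s last bits produces stepped control voltages. This is an illustration only, not the Droid patch described above, and it omits the feedback from the DAC back into the oscillator frequencies that gives the hardware its chaotic character; all frequencies and sizes are arbitrary.

```python
import math

def rungler(freq_a, freq_b, sr=1000, steps=2000, bits=8):
    """Shift-register 'Rungler' sketch: osc A clocks in osc B's sign bit."""
    reg = [0] * bits
    out, prev_clock = [], 0
    for n in range(steps):
        clock = 1 if math.sin(2 * math.pi * freq_a * n / sr) > 0 else 0
        data = 1 if math.sin(2 * math.pi * freq_b * n / sr) > 0 else 0
        if clock and not prev_clock:      # rising edge of oscillator A
            reg = [data] + reg[:-1]       # shift oscillator B's bit in
        prev_clock = clock
        # 3-bit DAC on the last three register bits: 8 stepped CV levels
        out.append((reg[-1] * 4 + reg[-2] * 2 + reg[-3]) / 7.0)
    return out

cv = rungler(7.0, 11.3)
print(sorted(set(round(v, 2) for v in cv)))  # the stepped levels reached
```

In the hardware, those stepped levels would be patched back to modulate both oscillators, closing the loop that makes the pattern semi-chaotic rather than merely pseudo-random.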
About the artist
Fernando Lopez-Lezcano was given a choice of instruments when he was a kid and liked the piano best. His dad was an engineer and philosopher and his mother loved biology, music and the arts. He studied both music and engineering, and in his creative artistic work he tries to keep art and science chaotically balanced. He has been working at CCRMA since 1993 and throws computers, software algorithms, engineering and sound into a blender, serving the result over many speakers. He can hack Linux for a living, and sometimes he likes to pretend he can still play the piano.
He built El Dinosaurio (an analog modular synth) from scratch more than 40 years ago, and it still sings its modular songs. He also loves to distill music from pure software and uses computer languages as scoring tools to carve music from text. He returned to real-time performances with an ever-growing modular synthesizer herd, including the original El Dinosaurio. He was the Edgard Varèse Guest Professor at TU Berlin in 2008 and has been teaching the “Sound in Space” course at CCRMA for quite a while. He also likes designing and building “things”, including Ambisonics microphones (the SpHEAR project) and 3-D sound diffusion spaces (the Listening Room and Stage systems at CCRMA, and our “portable” GRAIL concert speaker array).
He feels happiest when playing music and making weird noises, even better when playing with friends, and even better on stage.
Jules Rawlinson: Fause, Fause
Fause, Fause (c. 7 mins) is one scene from an interactive audiovisual work that brings together different strands of creative computing, sound design and composition. The work combines elements of game audio, computer music, traditional Scots folk song and highly detailed virtual landscapes to create an immersive songscape in which the player traces the deconstructed ghosts of a song: heavily processed fragments of the traditional ballad Fause, Fause, sung by Scottish music specialist Lori Watson. These fragments are dispersed throughout the virtual landscape using mixed approaches of fixed and indeterminate elements to create pathways of sound, sound pathways as desire lines (Bandt 2006), encouraging exploration and reflection. The result is a series of speculative sonic narratives that re-sound space and place through what Hernandez (2017) describes as “psycho-sonic cartography”. The work reconsiders electroacoustic soundscape in an interactive medium, bringing together compositional, cultural and environmental considerations, and makes use of creative applications of game-audio technologies for non-gaming purposes. It will be performed by the composer across a multichannel audio system to highlight its spatial character and timbral qualities.
About the artist
Jules Rawlinson (1969) is an audio-visual composer working in solo and collaborative settings, and Programme Director for Sound Design at The University of Edinburgh. Recent outputs make innovative use of archival material and corpus-based aesthetics of transformation across interactive, performance and fixed-media works.
Atsushi Tadokoro: Live ‘Shō’ Coding – Algorithmic Improvisation of Aitake Harmonies
“Live ‘Shō’ Coding” is an experimental performance that merges the ancient tradition of Japanese Gagaku with contemporary live coding. The title is a play on the homophone between the Japanese instrument “shō” (笙) and the English word “Show.” This pun encapsulates the work’s core intent: to reveal the internal logic of a millennium-old instrument through the transparent medium of real-time programming.
The shō is a mouth organ consisting of seventeen bamboo pipes. Unlike Western instruments that often prioritize melody, the shō is primarily harmonic, characterized by “aitake” (合竹)—six-note tone clusters that function as static blocks of timbre. Originating from the Chinese “sheng” of the Tang Dynasty, the Japanese shō has remained structurally unchanged for over 1,200 years. It serves as a rare instance of “frozen” historical sound, preserved by the rigid rituals of court music.
Technically, the performance is realized through TidalCycles and SuperCollider. The sound is not pre-recorded but generated via real-time synthesis. Crucially, the system employs Pythagorean tuning rather than modern equal temperament to replicate the instrument’s pure resonance and distinct intervals. Within this digital environment, “aitake” clusters are defined as algorithmic patterns, enabling the performer to improvise with ancient harmonies using computational precision.
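As an illustration of the tuning choice, Pythagorean ratios can be derived by stacking pure 3:2 fifths and folding the results back into a single octave. The sketch below is not the TidalCycles/SuperCollider code of the performance; the base frequency and the choice of fifth steps are arbitrary stand-ins for an aitake cluster.

```python
# Illustrative sketch: Pythagorean tuning by stacked pure fifths.
def pythagorean_ratio(fifths_up):
    """Ratio of a pitch reached by `fifths_up` pure 3:2 fifths,
    octave-reduced so the result lies in [1, 2)."""
    r = (3 / 2) ** fifths_up
    while r >= 2:
        r /= 2
    while r < 1:
        r *= 2
    return r

def cluster_freqs(base_hz, fifth_steps):
    """Frequencies of a tone cluster given steps along the circle of fifths."""
    return [round(base_hz * pythagorean_ratio(s), 2) for s in fifth_steps]

# A hypothetical six-note cluster: 0..5 fifths above an arbitrary base.
print(cluster_freqs(440.0, range(6)))
```

The audible difference from equal temperament lies in intervals like the Pythagorean major third (81:64), noticeably wider than its tempered counterpart, which is what gives the synthesized clusters their distinct resonance.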
The musical narrative follows an evolutionary arc from the archaic to the modern. The piece begins with a faithful algorithmic reconstruction of traditional Gagaku aesthetics—static, sustained, and serene. As the code evolves, the strict definitions of the “aitake” are deconstructed through stochastic functions, rhythmic displacements, and spectral shifts. Consequently, the organic textures of bamboo dissolve into digital artifacts, transforming sacred harmony into abstract soundscapes.
Ultimately, “Live ‘Shō’ Coding” challenges our perception of time. It juxtaposes the cyclic, non-linear time of Gagaku with the discrete, clock-based time of the CPU. By subjecting ancient sounds to modern syntax, the work fosters a dialogue where the “breath of the phoenix” is reimagined through the binary logic of the machine.
About the artist
Atsushi Tadokoro
He is a live coder and creative coder exploring the boundaries of sound and visual art. He serves as an associate professor at Maebashi Institute of Technology and a part-time lecturer at Tokyo University of the Arts and Keio University.
Born in 1972, he creates musical works through algorithmic sound synthesis and performs live improvisations with sound and visuals using a laptop. In recent years, he has also produced and internationally exhibited numerous audio-visual installation works.
His work has been selected for major international conferences, including the International Computer Music Conference (ICMC) in 2025, 2024, 2015, and 1996; the International Conference on Live Coding (ICLC) in 2025, 2024, 2020, 2019, 2016, and 2015; and New Interfaces for Musical Expression (NIME) in 2016.
He teaches various courses on creative coding at the university level. His lecture materials, publicly available on his website (https://yoppa.org/), serve as a valuable resource for numerous students and creators.
He is the author of several books, including Beyond Interaction: A Practical Guide to openFrameworks for Creative Coding (BNN, 2020), Performative Programming: The Art and Practice of Live Coding – Show Us Your Screens (BNN, 2018), and An Introduction to Creative Coding with Processing: Creative Expression Through Code (Gijutsu-Hyohron, 2017).
Nicola Casetta: Quiet Catastrophe Unleashed
Quiet Catastrophe Unleashed is a performance for solo live electronics based on an eight-channel dynamic feedback system. Informed by Stephen Wolfram’s notion that simple iterative rules can generate irreducible complexity, the work investigates how minimal operations—modulated delays, adaptive limiting, nonlinear distortion, and continuously evolving chaotic equations—produce sonic forms that cannot be predicted or reduced to their initial conditions. The system is activated by a single impulse and evolves through recursive transformations that amplify micro-instabilities into shifting textures and emergent structures. These processes resonate with Deleuze’s conception of becoming: sound as a field of continuous variation rather than a fixed object. The performer navigates this unstable environment in real time, engaging with a machine whose behavior unfolds at the intersection of determinism and contingency. Quiet Catastrophe Unleashed operates on the edge of chaos, where sonic order arises through the continual negotiation of instability.
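Wolfram’s point, that a trivially simple update rule seeded by a single impulse can produce behavior irreducible to its starting state, is easy to demonstrate outside the audio domain. The sketch below runs elementary cellular automaton Rule 30 from one live cell; it illustrates the cited principle only and has nothing to do with the piece’s actual feedback DSP.

```python
# Rule 30: each new cell = left XOR (center OR right).
def rule30_step(cells):
    n = len(cells)
    return [
        cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
        for i in range(n)
    ]

def run(width=31, steps=12):
    row = [0] * width
    row[width // 2] = 1          # the single impulse
    lines = []
    for _ in range(steps):
        lines.append("".join("#" if c else "." for c in row))
        row = rule30_step(row)
    return "\n".join(lines)

print(run())
```

Despite the one-line rule, the left edge of the resulting triangle settles into regularity while the interior remains effectively unpredictable, the same asymmetry between simple rule and complex outcome that the piece stages in sound.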
About the artist
Nicola Casetta is a computer musician, live electronics performer, and scholar. His work explores sound as a network of relationships—a complex, interconnected phenomenon that unfolds in an immersive and inclusive way. Through live electronics, he creates music that captures the essence of the here and now, embracing spontaneity and the vitality of the moment. He uses sound as a medium to investigate new ways of interacting with both the environment and society, creating spaces for reflection and transformation. His music has been performed at To listen To in Turin (IT), SAG in Leicester (UK), CNMAT in Berkeley (US), Angelica Festival in Bologna (IT), Festival di Nuova Consonanza in Rome (IT), Borealis in Bergen (NO), Festival DME in Lisbon (PT), Festival Zeit für Neue Musik in Rockenhausen (DE), Manifeste (IRCAM) in Paris (FR), Ma/In in Matera (IT), the 8th FKL Symposium (IT), NYCEMF, ICMC in Athens (GR), XX CIM in Rome (IT), SoundKitchen (UK), the Sweet Thunder Festival of Electro-Acoustic Music in San Francisco (US), UCSD Music / CPMC Theatre in San Diego (US) and Premio Phonologia in Milan (IT), among others.
Julian Green: Again
Again is a live electroacoustic performance structured as a stream of consciousness, in which repeated physical gestures function as both material and form. The performer cycles through a limited set of recurring actions intended to “cradle” a fleeting, beautiful moment; over time, this repetition shifts from preservation toward compulsion, foregrounding the tension between holding on and letting go. These gestural loops accumulate and cross thresholds that trigger new sonic layers, including processed vocal statements, musical textures, and environmental sound events. Rather than presenting discrete movements, the work unfolds through gradual intensification and release, emphasizing how replay can simultaneously comfort and erode, as memory morphs with each return.
In the latter portion of the performance, a recorded spoken message introduces an explicit reflective frame, calling for interpersonal awareness of desire and a move away from reliance on possessions in recognition of life’s ephemerality. Again uses repetition as a performative engine to examine attachment, impermanence, and the unstable fidelity of remembrance.
Program Notes:
past lives Again. Lost, but love lingers lackadaisically through lumbering leaps within another. Foregone are the chains that bind our sense of reason towards another hopeful realization into an unresolved calling. Gone are the worries of the mind that haunts our humanity to bind to desires towards our sense of self, compressed within a fragment of our lifespan. Only to one day meet the people we cherished deeply, degrading our memories, morphing in and out of consciousness within every trickle of sorrow that sheds our being before returning to our
About the artist
Julian Green is a U.S.-based electroacoustic composer and performer focused on data-driven instruments and live electronics. He has participated in Hypercube Ensemble’s Cubelab workshop, with works performed and recorded in the U.S. and internationally, including Sonic Apparitions (Duino, Italy). Notable works include Sound Waits, Cherish the Space, My Festering Synapses, An Indeterminate Schism, and We Don’t Unknow. His piece The Inconsistent Continuities was professionally recorded for Hypercube Ensemble and commissioned for the Kingler Electroacoustic Residency (KEAR) at Bowling Green State University. Recent projects include Breakthroughs (Wacom tablet), Again (GameTrak controller), and If We Could Forget It Gently Together: Vestige Series (custom 3D-printed gyro controller), realized at the University of Oregon. Green holds a BM in composition from Arkansas State University and an MM from Bowling Green State University, and is pursuing a doctorate at the University of Oregon. Influences include Denis Smalley, Michel Chion, Trevor Wishart, Hildegard Westerkamp, Ryuichi Sakamoto, and Elaine Lillios.
Aaron Einbond: Cosmologies 3
Cosmologies 3 situates the listener inside a virtual grand piano to experience its secret inner life. The piano interior, recorded with a spherical microphone array, is complemented by three-dimensional (3-D) field recordings of Paris’s Place Igor Stravinsky. These recordings are highlighted and underlined with computer synthesis using artificial intelligence (AI) to reproduce the spatial presence of acoustic instruments, while the microcosm of the piano’s inner space expands larger-than-life.
Cosmologies 3 is part of a modular series of works that use AI to inform sound spatialization. The situated spatial presence of musical instruments has been well studied in the fields of acoustics and music perception research, but so far has not been the focus of research on AI and music. Cosmologies seeks to “re-embody” recorded sound using data derived from natural acoustic phenomena in an immersive sonic environment where real and virtual sources blend seamlessly. Cosmologies 3 for Ambisonic fixed media may be performed on its own or directly following Cosmologies for piano and 3-D electronics, with the fixed media work beginning as the live performer leaves the stage. Although the human–AI interaction in the fixed work is no longer live, it remains as a trace of the work’s creation process, refracting the human performer’s presence behind the spatial audio recordings (see Fig. 1).
Cosmologies is among the first works to connect audio descriptor analysis and corpus-based synthesis to 3-D spatialization using Higher-Order Ambisonics (HOA) and machine learning (ML). At the same time, it is the first project connecting the computer programs Max, Python, and OM# (Bresson et al. 2017) with the associated packages Spat (Carpentier 2018) and Mubu (Schnell et al. 2009). These software tools are used to draw upon natural acoustic phenomena as source material for spatial sound derived from two sources: one is a 3-D microphone array, the EM32 Eigenmike by mh acoustics (https://mhacoustics.com/products), a 32-channel array used to capture 3-D piano samples as well as ambient field recordings. The other source is generative spatial sound synthesis produced through ML of an existing large database of radiation measurements for acoustic instruments (Shabtai et al. 2017; Weinzierl et al. 2017). This database serves as a training set for ML models to control spatially rich 3-D patterns for electronic synthesis. These two sources of spatial sound are intentionally overlapped and fused so the listener cannot easily distinguish or segregate the sources. The aesthetic goal is to create a setting for curious and detailed listening, where one may not discern the “sleight of hand” between the superposed 3-D spaces of the sample recordings and computer synthesis.
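For readers unfamiliar with corpus-based synthesis, its core selection step can be sketched in a few lines: every grain in the corpus carries descriptor values, and an incoming target frame selects the nearest grain for playback. The toy below, with made-up grain names and descriptor values, stands in for the work’s actual Max/MuBu machinery and does not reproduce it.

```python
import math

# Hypothetical corpus: (grain_id, spectral centroid in Hz, loudness in dB)
corpus = [
    ("piano_lo", 310.0, -18.0),
    ("piano_hi", 2400.0, -24.0),
    ("street",   900.0, -30.0),
]

def nearest_grain(target, corpus, weights=(1.0, 1.0)):
    """Return the grain whose (log-centroid, loudness) pair lies closest
    to the target descriptor frame (centroid_hz, loudness_db)."""
    lc_t, ld_t = math.log(target[0]), target[1]
    def dist(grain):
        _, c, l = grain
        return math.hypot(weights[0] * (math.log(c) - lc_t),
                          weights[1] * (l - ld_t))
    return min(corpus, key=dist)[0]

print(nearest_grain((1000.0, -28.0), corpus))  # prints: street
```

Centroid is compared on a log scale so that perceptually similar pitches match; in a real system the descriptor set is larger and the weights are tuned, but the nearest-neighbour selection principle is the same.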
About the artist
Aaron Einbond’s work explores the intersection of instrumental music, field recording, sound installation, and interactive technology. He released portrait albums Cosmologies with the Riot Ensemble, Without Words with Ensemble Dal Niente, and Cities with Yarn/Wire and Matilde Meireles. His awards include a Giga-Hertz Förderpreis, a Guggenheim Fellowship, and artistic-research residencies at IRCAM and ZKM. He teaches music composition and technology at City St George’s, University of London.
