BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//ICMC HAMBURG 2026 - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:ICMC HAMBURG 2026
X-ORIGINAL-URL:http://icmc2026.ligeti-zentrum.de
X-WR-CALDESC:Events for ICMC HAMBURG 2026
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Amsterdam
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20250330T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20251026T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20260329T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20261025T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20270328T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20271031T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T210000
DTEND;TZID=Europe/Amsterdam:20260511T230000
DTSTAMP:20260423T102500
CREATED:20260421T145800Z
LAST-MODIFIED:20260422T111604Z
UID:10000067-1778533200-1778540400@icmc2026.ligeti-zentrum.de
SUMMARY:Club Concert 1C
DESCRIPTION:Immerse yourself in a 20.8-channel sound world: in the Production Lab of the Ligeti Center\, neural synthesis\, artificial intelligence\, and interactive visuals merge into an immersive live experience. International artists present innovative prototypes—from AI-augmented string instruments to dynamic graphic scores. \n  \nProgram Overview\nZwischenheit \nRiccardo Ancona \nKnitting\nBrian Lindgren \nSonic Memories: A Live Coding Performance with Machine-Learned Sound Fragments\nRiccardo Mazza \nGradient Noise: Animated Scores with Corresponding Data Streams\nJohn C.S. Keston \nFluid Ontologies\nNicola Leonard Hein and Viola Yip \nOn The Edge\nKasey Pocius \nScarittera – Subterranean Eruptions of Sonic Memory\nDanilo Randazzo \n\n\n  \nAbout the pieces & artists\nRicardo Ancona: Zwischenzeit \nCosmologies 3 situates the listener inside a virtual grand piano to experience its secret inner life. The piano interior\, recorded with a spherical microphone array\, is complemented by three-dimensional (3-D) field recordings of Paris’s Place Igor Stravinsky. These recordings are highlighted and underlined with computer synthesis using artificial intelligence (AI) to reproduce the spatial presence of acoustic instruments\, while the microcosm of the piano’s inner space expands larger-than-life. \n\nCosmologies 3 is part of a modular series of works that use AI to inform sound spatialization. The situated spatial presence of musical instruments has been well studied in the fields of acoustics and music perception research\, but so far has not been the focus of research on AI and music. Cosmologies seeks to “re-embody” recorded sound using data derived from natural acoustic phenomena in an immersive sonic environment where real and virtual sources blend seamlessly. Cosmologies 3 for Ambisonic fixed media may be performed on its own or directly following Cosmologies for piano and 3-D electronics\, with the fixed media work beginning as the live performer leaves the stage. Although the human–AI interaction in the fixed work is no longer live\, it remains as a trace of the work’s creation process\, refracting the human performer’s presence behind the spatial audio recordings (see Fig. 1). \nCosmologies is among the first works to connect audio descriptor analysis and corpus-based syn- thesis to 3-D spatialization using Higher-Order Ambisonics (HOA) and machine learning (ML). At the same time\, it is the first project connecting the computer programs Max\, Python\, and OM# (Bresson et al. 2017) with the associated packages Spat (Carpentier 2018) and Mubu (Schnell et al. 2009). These software tools are used to draw upon natural acoustic phenomena as source material for spatial sound derived from two sources: one is a 3-D microphone array\, the EM32 Eigenmike by mh acoustics (https://mhacoustics.com/products)\, a 32-channel array used to capture 3-D piano samples as well as ambient field recordings. The other source is generative spatial sound synthesis produced through ML of an existing large database of radiation measurements for acoustic instruments (Shabtai et al. 2017; Weinzierl et al. 2017). This database serves as a training set for ML models to control spatially rich 3-D patterns for electronic synthesis. These two sources of spatial sound are intentionally overlapped and fused so the listener cannot easily distinguish or segregate the sources. The aesthetic goal is to create a setting for curious and detailed listening\, where one may not discern the “sleight of hand” between the superposed 3-D spaces of the sample recordings and computer synthesis. \nAbout the artist\nAaron Einbond’s work explores the intersection of instrumental music\, field recording\, sound installation\, and interactive technology. He released portrait albums Cosmologies with the Riot Ensemble\, Without Words with Ensemble Dal Niente\, and Cities with Yarn/Wire and Matilde Meireles. His awards include a Giga-Hertz Förderpreis\, a Guggenheim Fellowship\, and artistic-research residencies at IRCAM and ZKM. He teaches music composition and technology at City St George’s\, University of London. \n  \nBrian Lindgren: Knitting \nKnitting is a new work for the EV\, an augmented bowed string instrument that integrates IRCAM’s RAVE (Realtime Audio Variational autoEncoder) neural synthesis model. The composition explores how machine learning can extend the timbral vocabulary of a traditional gestural practice—not by imposing external sonic material\, but by folding the instrument’s own acoustic identity back through a neural lens. \nThe EV combines a 3D-printed body with four infrared optical pickups whose signals are processed by a Bela board and transmitted to a laptop running Pure Data. Each string controls an independent synthesis engine comprising convolution\, physical modeling\, granular processing\, reverb\, and ambisonic spatialization. The recent addition of RAVE introduces a self-referential pathway: the model was trained on four hours of the EV’s own recordings\, creating a system that listens to itself through learned representations of its sonic history. \nCentral to this integration is a control strategy that maps performance descriptors—fundamental frequency\, amplitude\, and spectral centroid—to specific dimensions of the model’s eight-dimensional latent space. By constraining each modulation source to a single latent dimension\, the relationship between gesture and neural response becomes legible: a shift in bow pressure or position translates into a navigable timbral trajectory rather than an opaque transformation. This approach distinguishes the EV from other RAVE-integrated instruments\, which often emphasize loop-based or tabletop interfaces rather than continuous bowed-string control. \nKnitting treats this latent space as a landscape of sonic possibility\, each dimension a potential resonance between physical gesture and synthesized response. The compositional process is less one of arranging fixed materials than of cultivating emergent textures—drawing out sonic filaments\, crossing and interlacing them\, balancing tensions across the tapestry. The neural model functions as a meta-resonator: a parallel pathway that refracts the instrument’s timbral identity through an alternate causal route\, revealing aspects of its sound that remain latent in conventional electroacoustic processing. \nThe work demonstrates how neural synthesis can be embedded within a hybrid instrument ecology\, extending expression beyond pitch and amplitude to make performance descriptors direct agents of timbral transformation. By grounding latent navigation in the acoustic features of bowed-string technique\, Knitting positions machine learning not as a replacement for embodied practice but as an expansion of its expressive range. \nAbout the artist\nBrian Lindgren (1983) is a composer\, researcher\, violist\, and instrument builder whose work explores the convergence of acoustic performance and digital synthesis through the EV\, a hybrid string instrument integrating lutherie and embedded computing. \nHis compositions and research have been featured at the International Computer Music Conference (ICMC)\, New Interfaces for Musical Expression (NIME) conference\, Conference on Neural Information Processing Systems (NeurIPS)\, Society for Electro-Acoustic Music in the United States (SEAMUS)\, IRCAM Forum\, and International Conference on Auditory Display (ICAD)\, as well as published in Organised Sound. His work has been performed by ensembles including HYPERCUBE\, LINÜ\, Popebama\, and Tokyo Gen’on Project. \nThe EV was a finalist in the 2026 Guthman Musical Instrument Competition and used to compose ‘two tales from the shadows of the grid’ which won first place at the IEEE Big Data 2025 3rd Workshop on AI Music Generation Competition. \nLindgren holds an MFA in Sonic Arts from Brooklyn College (Subotnick\, Geers\, Gimbrone)\, a BA from the Eastman School of Music (Graham)\, and is pursuing a PhD at the University of Virginia (Burtner). \n  \nRiccardo Mazza: Sonic Memories: A Live Coding Performance with Machine-Learned Sound Fragments \nDrawing from Henri Bergson’s concept of *durée* and Deleuze’s rhizomatic models\, “Sonic Memories” reimagines memory not as a linear chronological archive\, but as a stratified field of coexisting planes. In this live coding performance\, autobiographical sound fragments—from mechanical gears to lagoon soundscapes and fragile voices—are liberated from their timeline and reorganized by an autoencoder into a non-hierarchical\, navigable map. \nThe performance begins with the simple act of loading a personal audio file—a field recording from a journey\, a voice memo\, a musical fragment—into a computational system that immediately begins to analyze and reorganize these sonic memories according to its own logic. \nOn stage\, the audience sees everything: the code acting in real-time\, a visual map where memories become points in space\, oscilloscopes showing the transformation of sound waves. This transparency is essential—there is no mystification of the technological process\, but rather an invitation to witness the negotiation between human remembering and algorithmic interpretation. \nThe performer navigates this latent space using SuperCollider and FluCoMa\, triggering both the original “concrete” traces and their AI-generated “distorted echoes.” The algorithm serves not as an autonomous agent\, but as a refracting lens\, forcing the performer to negotiate between faithful recall and neural hallucination. The result is a fragile dialogue between the fixity of the past and the malleability of the present\, exploring how computational tools can actualize memory as a living\, reconstructive act. \nThe work asks: How do we perform memory in an age of machine learning? Not by having machines remember for us\, but by creating dialogues with computational systems that reorganize our experiences according to their own logic\, forcing us to rediscover our own histories through unfamiliar maps. \nAbout the artist\nRiccardo Mazza (Turin 1963). Composer\, multimedia artist\, and faculty member at the Scuola di Alto Perfezionamento Musicale di Saluzzo. He collaborates with SMET (Electronic Music School) at the Conservatorio di Torino and the Conservatorio Ghedini in Cuneo\, and is internationally recognized for his research in psychoacoustics and spatial audio.\nIn 1997 he began a collaboration with Franco Battiato\, focusing on new technologies for sound. Between 1999–2000 he created the Renaissance SFX library\, the first Dolby Surround encoded spatial effects and field recording collection for cinema and television. Later developed SoundBuilder\, software for object-based surround design presented at AES 2003 in San Francisco\, which anticipated Dolby Atmos.\nHe founded Interactive Sound in 2001\, a research studio dedicated to multimedia exhibitions and immersive installations\, and in 2003 patented a psychoacoustic model of “sleep waves.” With Laura Pol\, he co-founded Project-TO (2015)\, an electronic and visual project that has released four albums and appeared at major festivals including TFF\, TJF\, Robot\, Share Festival.\nSince 2018\, he directs Experimental Studios in Turin\, one of Europe’s leading Dolby Atmos recording facilities. His current project Sonic Earth explores environmental sonification and algorithmic composition\, and has been presented internationally at ICMC 2025 in Boston\, FARM/SPLASH 2026 in Singapore\, SBCM 2025 (Brazil)\, IEEE 2025 (L’Aquila). \n  \nJohn C.S. Keston Gradient Noise: Animated Scores with Corresponding Data Streams\nSince 2019 I have been composing animated graphic scores for ensembles and soloists. These generative works are projected for both the performers and audience to experience. Custom software runs during the performance to create the computer graphics and geometric forms. Rules are established on how the forms are read\, but improvisation and the emotional response of the performer still play an integral part in each piece. Fixed media of this work does not suffice because it lacks the realtime\, generative\, and participatory aspects that create surprise and challenges for the performers. \nMore recently I began composing scores that not only generate animated visuals\, but also stream corresponding MIDI data that impacts the timbre and signal processing of the electronic instruments used by the performers. The instruments are either hardware based synthesizers or virtual instruments within a DAW such as Ableton Live. One of my recent compositions applies these streams of data to four layers of FM synthesis engines running within the Dirtywave M8\, a technically advanced\, modern\, hardware tracker. \nMy newest work in progress\, Gradient Noise\, translates values generated by the Perlin noise algorithm into independent layers of seamless loops repeating at variable intervals. These loops are visualised as geometric forms\, abstract visualisations\, and evolving structures. The data generated is innovative because although aleatoric\, the values can be tuned to range between slowly moving gradients or rapid\, angular forms. When the sound and visuals are synchronized the performer responds not only to the animation but also to the changes in the timbre of their instruments. \nThe debut of Gradient Noise will address the themes of Innovation\, Translation\, and Participation by rethinking the relationships between musicians and machines. By translating the properties of n-dimensional Perlin noise into a musical language\, the piece presents a unified ecosystem with coordinated timbres and geometric forms. The innovation lies in generating a living environment that requires active participation and improvisation in contrast to static notation. Ultimately\, the work presents a contemporary model for computer music where the performer does not simply follow a score\, but negotiates a path through a responsive\, multi-sensory experience. \nAbout the artist\nJohn C.S. Keston is an award winning transdisciplinary artist reimagining how music\, video art\, and computer science intersect. His work both questions and embraces his backgrounds in music technology\, software development\, and improvisation leading him toward unconventional compositions that convey a spirit of discovery and exploration through the use of graphic scores\, chance and generative techniques\, analog and digital synthesis\, experimental sound design\, signal processing\, and acoustic piano. Performers are empowered to use their phonomnesis\, or sonic imaginations\, while contributing to his collaborative work. Keston founded the sound design resource\, AudioCookbook.org\, where you will find articles and documentation about his projects and research. \nJohn has spoken\, performed\, or exhibited original work at SEAMUS (2025)\, Radical Futures (2024)\, New Interfaces for Musical Expression (NIME 2022)\, the International Computer Music Conference (ICMC 2022)\, the International Digital Media Arts Conference (iDMAa 2022)\, International Sound in Science Technology and the Arts (ISSTA 2017-2019)\, Northern Spark (2011-2017)\, the Weisman Art Museum\, the Montreal Jazz Festival\, the Walker Art Center\, the Minnesota Institute of Art\, the Eyeo Festival\, INST-INT\, Echofluxx (Prague)\, and Moogfest. In 2017 he was commissioned by the Walker Art Center to compose music for former Merce Cunningham. He has appeared on more than a dozen albums\, solo albums\, and collaborative works. \nNicola Leonard Hein and Viola Yip: Fluid Ontologies\nIn “Fluid Ontologies”\, Transsonic (Nicola Leonard Hein and Viola Yip) continues to expand their intermedial artistic practice in performances. For this project\, they developed their laser feedback instruments\, using lasers as sound sources and solar panel microphones. With the incorporation of multichannel spatialization\, Transsonic extends the spatial dimensions\, sonically and visually\, creating a unique audiovisual experience. The project explores and defines new concepts of the instrumentality of light in audio circuits\, bringing together space\, bodies\, and instruments into a dynamic feedback system. \nAbout the artists\nDr. Nicola L. Hein is a sound artist\, guitarist\, composer\, researcher\, programmer\, and professor of Sound Arts and Creative Music Technology at the University of Music Lübeck.\nHe works with A.I.-assisted human-machine interaction\, postdigital lutherie\, intermedia\, sound installations\, augmented reality\, network music\,and spatial audio. His works have been realised in more than 30 countries\, at festivals such as MaerzMusik Festival\, Sonica Festival\, Experimental Intermedia etc. \nDr. Viola Yip is an experimental performer\, sound artist and instrument builder.\nHer work have been presented and supported by places such as Stanford University\, UC Berkeley\, Harvard University\, Cycling ‘74 Expo\, Hong Kong Arts Center\, Academy of Media Arts Cologne\, Academy of the Arts Berlin\, KTH Royal Institute of Technology Sweden\, Elektronmusikstudion EMS Stockholm\, NOTAM Oslo\, Arter Museum Istanbul\, Serralves Museum of Contemporary Arts Porto and Pinakothek der Moderne in Münich. \nviolayip.com \n  \nKasey Pocius: On The Edge \nOn the Edge is an audiovisual work for video\, T-Stick and surround sound. This audiovisual work explores sounds and images of objects often on the edges of perception our perceptions\, as well as processing and results from edge cases in musical algorithms and technology. \nThe piece consists of four interlayered vignettes\, exploring the behaviour and textural qualities of various edge and peak detection algorithms to create the fixed media. These files are then used for the corpus for the granular synthesis controlled by the T-Stick. The gestural data from the T-Stick is sent from Max to Ossia\, where it is used to manipulate the treatment of the video clips in real-time. \nThe technical aspects of the work consist of a fixed-media ambisonic file\, with real-time manipulation of video clips (in Ossia Score) and multichannel granular synthesis (in Max) controlled by the T-Stick. \nAbout the artist\nKasey Pocius is a gender-fluid intermedia artist and researcher based in Montreal\, teaching at Concordia and active with CIRMMT\, IDMIL\, LePARC\, and GRMS. They create electroacoustic and audiovisual works that explore interactive electronics\, spatial sound and collaborative improvisation\, with pieces programmed globally from DIY spaces to Harvard. \n  \n\n\n\n 
URL:http://icmc2026.ligeti-zentrum.de/event/club-concert-1c/
LOCATION:ligeti center\, Production Lab (10th floor)\, Veritaskai 1\, Hamburg\, 21079\, Germany
CATEGORIES:11-05,Club Concert,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260512T210000
DTEND;TZID=Europe/Amsterdam:20260512T230000
DTSTAMP:20260423T102500
CREATED:20260421T150351Z
LAST-MODIFIED:20260422T115005Z
UID:10000068-1778619600-1778626800@icmc2026.ligeti-zentrum.de
SUMMARY:Club Concert 2C
DESCRIPTION:Club Concert 2C invites you to an extraordinary sonic experience in the state-of-the-art Production Lab of the ligeti center. On a specialized 20.8-channel system\, international artists unfold immersive sound worlds ranging from physical gesture to complex AI analysis.\nExperience the synergy of historical depth and futuristic technology—an evening in which the audience quite literally immerses itself in sound. \n  \nProgram Overview\n\n\nDinosaur\, Glitched! \nFernando Lopez-Lezcano \nFause\, Fause\nJules Rawlinson \nLive ‘Shō’ Coding – Algorithmic Improvisation of Aitake Harmonies\nAtsushi Tadokoro \nQuiet Catastrophe Unleashed\nNicola Casetta \nAgain\nJulian Green \nPercepts (excerpt)\nDoron Klant Sadja \nCosmologies 3\nAaron Einbond \n\n\n  \nAbout the pieces & the artists\nFernando Lopez-Lezcano: Dinosaur\, Glitched!  \nThis is another ditty to add to the Dinosaur Songbook\, a music composition and performance project that started when the COVID pandemic kick-started a round of modular synthesizer building. This was a return to my roots\, as I started my discovery of electronic sound by designing and building modular synths from scratch in the late 70’s and early 80’s. \n“Carlitos” is the small Eurorack synth filled with modular goodies that will be used in this performance. It will be helped\, as has become the norm\, by the miniature Kastle\, probably the best birthday present ever\, and the smallest dinosaur I have in my herd. Carlitos houses an eclectic mix of analog\, digital and hybrid modules that has been evolving over several years and many concerts. \nThis round of noises comes courtesy of continued experiments coding in the Droid voltage processor computer language. One addition has been an implementation of Rob Hordijk’s Rungler circuit. This is a “low frequency” Rungler as the Droid is not fast enough to process voltages at audio rates\, and while it will never sound like the original\, it does provide a never-ending cornucopia of chaotic behaviors. As it is software\, many additional features were added\, in part to further confuse the performer who has even more knobs and controls to handle\, with the same brain power as before. Many other sources of sound make up the piece\, from complex oscillators with multiple feedback paths to fingers scratching a built-in microphone\, to an emulation of the Radio Music module with additional sampled voices. Various granular synthesis systems play a constant role in the sound universe of the piece. \nAs always all sounds are piped through a Linux computer running SooperLoopy\, a SuperCollider program written by the composer that spatializes sounds dynamically in realtime using HOA (High Order Ambisonics)\, and includes asynchronous loopers with a granular synthesis core that can sample\, replay and process more screaming dinosaur layers than you can count. \nAbout the artist\nFernando Lopez-Lezcano was given a choice of instruments when he was a kid and liked the piano best. His dad was an engineer and philosopher and his mother loved biology\, music and the arts. He studied both music and engineering\, and in his creative artistic work he tries to keep art and science chaotically balanced. He has been working at CCRMA since 1993 and throws computers\, software algorithms\, engineering and sound into a blender\, serving the result over many speakers. He can hack Linux for a living\, and sometimes he likes to pretend he can still play the piano. \nHe built El Dinosaurio (an analog modular synth) from scratch more than 40 years ago\, and it still sings its modular songs. He also loves to distill music from pure software and uses computer languages as scoring tools to carve music from text. He returned to realtime performances with an ever growing modular synthesizer herd\, including the original El Dinosaurio. He was the Edgard-Varèse Guest Professor at TU Berlin in 2008 and has been teaching the “Sound in Space” course at CCRMA for quite a while. He has also likes designing and building “things”\, including Ambisonics microphones (the SpHEAR project) and 3d sound diffusion spaces (the Listening Room and Stage systems at CCRMA\, and our “portable” GRAIL concert speaker array). \nHe feels happiest when playing music and making weird noises\, even better when playing with friends\, and even better on stage. \n  \n\nJules Rawlinson: Fause\, Fause\nFause\, Fause (c. 7mins) is one scene from an interactive audiovisual work that brings together different strands of creative computing\, sound design and composition. The work combines elements of game audio\, computer music\, traditional Scots folk song and highly detailed virtual landscapes to create an immersive songscape where the player traces the deconstructed ghosts of a song that features heavily processed fragments of the traditional ballad Fause\, Fause sung by Scottish music specialist Lori Watson. These fragments are dispersed throughout the virtual landscape using mixed approaches of fixed and indeterminate elements to create pathways of sound\, sound pathways as desire lines (Bandt 2006)\, encouraging exploration and reflection. The result is a series of speculative sonic narratives that re-sound space and place through what Hernandez (2017) describes as “psycho-sonic cartography”. The work reconsiders electroacoustic soundscape in an interactive medium\, bringing together compositional\, cultural and environmental considerations and makes use of creative applications of game-audio technologies for non-gaming purposes. The work will be performed by the composer across a multichannel audio system to highlight the spatial character and timbral qualities of the work. \nAbout the artist\nJules Rawlinson (1969) is an audio-visual composer and working in solo and collaborative settings\, and Programme Director for Sound Design at The University of Edinburgh Recent outputs make innovative use of archival material and corpus-based aesthetics of transformation across interactives\, performances and fixed media works. \n  \nAtsushi Tadokoro: Live ‘Shō’ Coding – Algorithmic Improvisation of Aitake Harmonies\n“Live ‘Shō’ Coding” is an experimental performance that merges the ancient tradition of Japanese Gagaku with contemporary live coding. The title is a play on the homophone between the Japanese instrument “shō” (笙) and the English word “Show.” This pun encapsulates the work’s core intent: to reveal the internal logic of a millennium-old instrument through the transparent medium of real-time programming. \nThe shō is a mouth organ consisting of seventeen bamboo pipes. Unlike Western instruments that often prioritize melody\, the shō is primarily harmonic\, characterized by “aitake” (合竹)—six-note tone clusters that function as static blocks of timbre. Originating from the Chinese “sheng” of the Tang Dynasty\, the Japanese shō has remained structurally unchanged for over 1\,200 years. It serves as a rare instance of “frozen” historical sound\, preserved by the rigid rituals of court music. \nTechnically\, the performance is realized through TidalCycles and SuperCollider. The sound is not pre-recorded but generated via real-time synthesis. Crucially\, the system employs Pythagorean tuning rather than modern equal temperament to replicate the instrument’s pure resonance and distinct intervals. Within this digital environment\, “aitake” clusters are defined as algorithmic patterns\, enabling the performer to improvise with ancient harmonies using computational precision. \nThe musical narrative follows an evolutionary arc from the archaic to the modern. The piece begins with a faithful algorithmic reconstruction of traditional Gagaku aesthetics—static\, sustained\, and serene. As the code evolves\, the strict definitions of the “aitake” are deconstructed through stochastic functions\, rhythmic displacements\, and spectral shifts. Consequently\, the organic textures of bamboo dissolve into digital artifacts\, transforming sacred harmony into abstract soundscapes. \nUltimately\, “Live ‘Shō’ Coding” challenges our perception of time. It juxtaposes the cyclic\, non-linear time of Gagaku with the discrete\, clock-based time of the CPU. By subjecting ancient sounds to modern syntax\, the work fosters a dialogue where the “breath of the phoenix” is reimagined through the binary logic of the machine. \nAbout the artist\nAtsushi Tadokoro\nHe is a live coder and creative coder exploring the boundaries of sound and visual art. He serves as an associate professor at Maebashi Institute of Technology and a part-time lecturer at Tokyo University of the Arts and Keio University. \nBorn in 1972\, he creates musical works through algorithmic sound synthesis and performs live improvisations with sound and visuals using a laptop. In recent years\, he has also produced and internationally exhibited numerous audio-visual installation works. \nHis work has been selected for major international conferences\, including the International Computer Music Conference (ICMC) in 2025\, 2024\, 2015\, and 1996; the International Conference on Live Coding (ICLC) in 2025\, 2024\, 2020\, 2019\, 2016\, and 2015; and New Interfaces for Musical Expression (NIME) in 2016. \nHe teaches various courses on creative coding at the university level. His lecture materials\, publicly available on his website (https://yoppa.org/)\, serve as a valuable resource for numerous students and creators. \nHe is the author of several books\, including Beyond Interaction: A Practical Guide to openFrameworks for Creative Coding (BNN\, 2020)\, Performative Programming: The Art and Practice of Live Coding – Show Us Your Screens (BNN\, 2018)\, and An Introduction to Creative Coding with Processing: Creative Expression Through Code (Gijutsu-Hyohron\, 2017). \n  \nNicola Casetta: Quiet Catastrophe Unleashed\nQuiet Catastrophe Unleashed is a performance for solo live electronics based on an eight- channel dynamic feedback system. Informed by Stephen Wolfram’s notion that simple iterative rules can generate irreducible complexity\, the work investigates how minimal operations— modulated delays\, adaptive limiting\, nonlinear distortion\, and continuously evolving chaotic equations—produce sonic forms that cannot be predicted or reduced to their initial conditions. The system is activated by a single impulse and evolves through recursive transformations that amplify micro-instabilities into shifting textures and emergent structures. These processes resonate with Deleuze’s conception of becoming: sound as a field of continuous variation rather than a fixed object. The performer navigates this unstable environment in real time\, engaging with a machine whose behavior unfolds at the intersection of determinism and contingency. Quiet Catastrophe Unleashed operates on the edge of chaos\, where sonic order arises through the continual negotiation of instability. \nAbout the artist\nNicola Casetta is a computer musician\, live electronics performer\, and scholar. His work explores sound as a network of relationships—a complex\, interconnected phenomenon that unfolds in an immersive and inclusive way. Through live electronics\, he creates music that captures the essence of the here and now\, embracing spontaneity and the vitality of the moment. He uses sound as a medium to investigate new ways of interacting with both the environment and society\, creating spaces for reflection and transformation. His music has been perfomed at To listen To in Tourin (IT)\, SAG in Leicester (UK)\, CNMAT (Berkeley)\, Angelica Festival Bologna\, Festiva di Nuova Consonanza Roma (IT)\, Borealis in Bergen (NO)\, Festival DME in Lisbon (PT)\, Festival Zeit fur Neue Musik in Rockhenhausen (DE)\, Manifeste Ircam in Paris\, Ma/In in Matera (IT)\, 8th FKL Symposium(IT) \, NYCEMF\, ICMC in Athens (GR)\, XX CIM in Rome (IT)\, SoundKitchen (UK)\, Sweet Thunder Festival of Electro-Acoustic Music in San Francisco (US)\, UCSD Music – CPMC Theathre in San Diego (US) and Premio Phonologia in Milan among others. \n  \n\nJulian Green: Again\nAgain is a live electroacoustic performance structured as a stream of consciousness\, in which repeated physical gestures function as both material and form. The performer cycles through a limited set of recurring actions intended to “cradle” a fleeting\, beautiful moment; over time\, this repetition shifts from preservation toward compulsion\, foregrounding the tension between holding on and letting go. These gestural loops accumulate and cross thresholds that trigger new sonic layers\, including processed vocal statements\, musical textures\, and environmental sound events. Rather than presenting discrete movements\, the work unfolds through gradual intensification and release\, emphasizing how replay can simultaneously comfort and erode\, as memory morphs with each return. \nIn the latter portion of the performance\, a recorded spoken message introduces an explicit reflective frame\, calling for interpersonal awareness of desire and a move away from reliance on possessions in recognition of life’s ephemerality. Again uses repetition as a performative engine to examine attachment\, impermanence\, and the unstable fidelity of remembrance. \nProgram Notes: \npast lives Again. Lost\, but love lingers lackadaisically through lumbering leaps within another. Foregone are the chains that bind our sense of reason towards another hopeful realization into an unresolved calling. Gone are the worries of the mind that haunts our humanity to bind to desires towards our sense of self\, compressed within a fragment of our lifespan. Only to one day meet the people we cherished deeply\, degrading our memories\, morphing in and out of consciousness within every trickle of sorrow that sheds our being before returning to our \nAbout the artist\nJulian Green is a U.S.-based electroacoustic composer and performer focused on data-driven instruments and live electronics. He has participated in Hypercube Ensemble’s Cubelab workshop\, with works performed and recorded in the U.S. and internationally\, including Sonic Apparitions (Duino\, Italy). Notable works include Sound Waits\, Cherish the Space\, My Festering Synapses\, An Indeterminate Schism\, and We Don’t Unknow. His piece The Inconsistent Continuities was professionally recorded for Hypercube Ensemble and commissioned for the Kingler Electroacoustic Residency (KEAR) at Bowling Green State University. Recent projects include Breakthroughs (Wacom tablet)\, Again (GameTrak controller)\, and If We Could Forget It Gently Together: Vestige Series (custom 3D-printed gyro controller)\, realized at the University of Oregon. Green holds a BM in composition from Arkansas State University and an MM from Bowling Green State University\, and is pursuing a doctorate at the University of Oregon. Influences include Denis Smalley\, Michel Chion\, Trevor Wishart\, Hildegard Westerkamp\, Ryuichi Sakamoto\, and Elaine Lillios. \n  \nAaron einbond: Cosmologies 3\nCosmologies 3 situates the listener inside a virtual grand piano to experience its secret inner life. The piano interior\, recorded with a spherical microphone array\, is complemented by three-dimensional (3-D) field recordings of Paris’s Place Igor Stravinsky. These recordings are highlighted and underlined with computer synthesis using artificial intelligence (AI) to reproduce the spatial presence of acoustic instruments\, while the microcosm of the piano’s inner space expands larger-than-life. \nCosmologies 3 is part of a modular series of works that use AI to inform sound spatialization. The situated spatial presence of musical instruments has been well studied in the fields of acoustics and music perception research\, but so far has not been the focus of research on AI and music. Cosmologies seeks to “re-embody” recorded sound using data derived from natural acoustic phenomena in an immersive sonic environment where real and virtual sources blend seamlessly. Cosmologies 3 for Ambisonic fixed media may be performed on its own or directly following Cosmologies for piano and 3-D electronics\, with the fixed media work beginning as the live performer leaves the stage. Although the human–AI interaction in the fixed work is no longer live\, it remains as a trace of the work’s creation process\, refracting the human performer’s presence behind the spatial audio recordings (see Fig. 1). \nCosmologies is among the first works to connect audio descriptor analysis and corpus-based syn- thesis to 3-D spatialization using Higher-Order Ambisonics (HOA) and machine learning (ML). At the same time\, it is the first project connecting the computer programs Max\, Python\, and OM# (Bresson et al. 2017) with the associated packages Spat (Carpentier 2018) and Mubu (Schnell et al. 2009). These software tools are used to draw upon natural acoustic phenomena as source material for spatial sound derived from two sources: one is a 3-D microphone array\, the EM32 Eigenmike by mh acoustics (https://mhacoustics.com/products)\, a 32-channel array used to capture 3-D piano samples as well as ambient field recordings. The other source is generative spatial sound synthesis produced through ML of an existing large database of radiation measurements for acoustic instruments (Shabtai et al. 2017; Weinzierl et al. 2017). This database serves as a training set for ML models to control spatially rich 3-D patterns for electronic synthesis. These two sources of spatial sound are intentionally overlapped and fused so the listener cannot easily distinguish or segregate the sources. The aesthetic goal is to create a setting for curious and detailed listening\, where one may not discern the “sleight of hand” between the superposed 3-D spaces of the sample recordings and computer synthesis. \nAbout the artist\nAaron Einbond’s work explores the intersection of instrumental music\, field recording\, sound installation\, and interactive technology. He released portrait albums Cosmologies with the Riot Ensemble\, Without Words with Ensemble Dal Niente\, and Cities with Yarn/Wire and Matilde Meireles. His awards include a Giga-Hertz Förderpreis\, a Guggenheim Fellowship\, and artistic-research residencies at IRCAM and ZKM. He teaches music composition and technology at City St George’s\, University of London. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/club-concert-2c/
LOCATION:ligeti center\, Production Lab (10th floor)\, Veritaskai 1\, Hamburg\, 21079\, Germany
CATEGORIES:12-05,Club Concert,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260513T213000
DTEND;TZID=Europe/Amsterdam:20260513T233000
DTSTAMP:20260423T102500
CREATED:20260421T162148Z
LAST-MODIFIED:20260422T121025Z
UID:10000088-1778707800-1778715000@icmc2026.ligeti-zentrum.de
SUMMARY:Club Concert 3C
DESCRIPTION:Concert 3C is an exploration of the boundaries of collective improvisation and creative technology. The SPIIC Ensemble of the HfMT Hamburg presents a program in which the audience has a say\, algorithms extend historical works\, and artificial intelligence reinterprets human movement as a “hallucination.”\nIn the industrial atmosphere of the Speicher am Kaufhauskanal\, acoustic instruments merge with live coding\, neural synthesis\, and interactive notation. \n  \nProgram Overview\nLiquid tensioning\nFernando Egido \nSinophony for Clarence\nJuan Arturo Parra Cancino \nChimerique\nJonathan Wilson \nNEBULA\nEnrique Tomás and Moisés Horta Valenzuela \nplastique\nSe-Lien Chuang and Andreas Weixler \nShamanic Protocol\nOscar Corpo \nA Walk in Polygon Field\nRob Canning \nDEPRECATED\nDenis Polec Vocal \n  \nAbout the pieces & artists\nFernando Egido: Liquid tensioning\nLiquid Tensioning is a work for violin and double clarinet\, live notation\, live generative system\, live electronics\, and attendees’ participation (category: Improvised work for ensemble and electronics (SPIIC+ Ensemble)). Liquid tensioning is a Collaborative and interactive work in which the work is real time created by the self-evaluation of the work. The attendees will evaluate the work via a web app\, and the musical generative system will change according to the evaluation in real time. The Musicians will receive notes via a live notation system on their mobile phones. The title of the works refers to the model of tensioning provided by the generative system based on a musical tensioning that is not related to the properties of the musical material. This work belongs to a series of works in which the composer creates a self-referential musical generative system based on the real-time evaluation of the work. The main musical material of this work is its evaluation. The work duration is about 10 minutes. \nAbout the artist\nHe studied composition with José Luis de Delás at the School of Music of the University of Alcalá de Henares and received musical training in workshops with composers\, analysts\, and interpreters around the LIEM or the GCAC. He studied Computer Music with Emiliano del Cerro.\nHe has published several papers at international conferences.\nHis works have been performed at festivals such as ICMC 2025-2024-2023\, Bled international festival\, SMC Conference Graz\, Convergence Festival\, Ars electronica Linz\, Atemporánea Festival\, AIMC 2022 conference\, EVO 2021\, OUA Electroacoustic Music Festival 2020\, ISMIR 2020 in Montreal. The Seoul International Electroacoustic Music Festival 2019\, the ACMC 2019 conference in Melbourne\, SID 2015 conference in New York\, Venice Vending Machine III\, the New York City Electroacoustic Music Festival\, JIEN in the Auditory 400\, La hora acúsmatica\, SMASH Festival\, Encontres Festival in Palma of Majorca\, and ACA. \n  \nJuan Arturo Parra Cancino: Sinophony for Clarence\nSynophonie for Clarence is an ensemble and live electronics work inspired by the formal and sonic principles of Clarence Barlow’s Sinophony I (1970)\, his first electronic composition. Rather than functioning as an arrangement or transcription\, this piece operates as an instrumental extension of Barlow’s electronic sound world\, translating and reactivating its core materials through acoustic performance and real-time electronic processes. \nThe work seeks to bring into the physical space of performance elements that\, in Sinophony I\, exist only in fixed media: continuous tones\, slow harmonic transformations\, beating frequencies\, and the perceptual tension between purity and instability. These characteristics are reimagined here as a living\, performative situation\, where instrumental sound and electronics merge into a single\, evolving spectral body. \nSynophonie for Clarence builds on methods developed by Juan Parra Cancino to extract performative salients from early electronic works—elements that can be embodied\, negotiated\, and reshaped by performers in real time. Through this approach\, the piece revisits historical electronic material not as an object to be preserved unchanged\, but as a dynamic field for exploration\, experimentation\, and renewed artistic engagement. The aim is not reconstruction\, but continuation: to recover underlying processes and extend their implications into contemporary performance practice. \nBy situating acoustic instruments\, live electronics\, and spatialized sound within a shared listening ecology\, the work foregrounds collective tuning\, timbral fusion\, and emergent beating phenomena as central musical forces. The ensemble functions less as a group of independent voices than as a composite oscillator\, shaped by subtle interactions and shared attention. \nThis piece is conceived as a tribute to Clarence Barlow—composer\, educator\, and friend—honoring both his pioneering contributions to electronic music and his enduring influence on ways of thinking about sound\, structure\, and musical intelligence. \nAbout the artist\nJuan Parra Cancino studied Composition at the Catholic University of Chile and Sonology at the Royal Conservatoire The Hague\, where he completed a Master’s degree in electronic music. He received a PhD from Leiden University in 2014 on performance practice in computer music. A guitarist trained in Robert Fripp’s Guitar Craft\, he has worked extensively in live electronics. He is a researcher at the Orpheus Institute and Regional Director for Europe of the International Computer Music Association (2022–26). \n  \nJonathan Wilson: Chimerique\n“Chimerique” is about the interaction of music and language. Written and premiered in 2017\, this composition is for an ensemble featuring improvisation\, narration\, and electronics. It was realized in a collaboration with poet and translator Patricia Hartland by incorporating her English translation of “Ravines of Early Morning” by Raphael Confiant into a musical setting. The title is taken from a word in this text. It is French for “chimerical\,” and it can be defined as 1: something that takes delight in illusions\, or 2: something that is utopian\, or unreal. The narrator forms associations with this word through various phrases and passages that relate to the part of the story in which the description of “chimerique” is elaborated. Throughout this performance\, the performers listen and react to the text spoken by the narrator (and electronics). They are accompanied by electronics that consist of fixed media and live electronics from two different patches in Max/MSP using additive synthesis and granular synthesis. The musical instruments are the source material for granular synthesis. The score for this composition uses hybrid musical notation with some traditional notation for pitch and some graphic notation that leads performers subsequently to interpret not only the spoken phrases\, but also the graphic notation in their parts to determine volume\, pitch\, rhythm\, articulation\, and contour\, thereby making improvisation a necessity. The narrator and performers work together to generate a spontaneously formed through-composed work that marries text and music. The form can be described as through-composed in six sections. In the first section the performers respond only to a single phrase. In sections 2-6 the performers respond not only to phrases that delineate each section but also respond to extended narration shifting from descriptions of dreams\, the night\, madness\, illusions\, and at the end the act of dreaming itself. \nAbout the artist\nDr. Jonathan Wilson’s works have been performed at the Ann Arbor Film Festival\, European Media Art Festival\, ICMC\, SICMF\, SEAMUS\, NYCEMF\, MUSELAB\, NSEME\, Napoleon Electronic Music Festival\, Iowa Music Teachers Association State Conference\, and Midwest Composers Symposium. He is the winner of the 2014 Iowa Music Teachers Association Composition Competition. Jonathan has studied composition with Lawrence Fritts\, Josh Levine\, David Gompper\, James Romig\, James Caldwell\, Paul Paccione\, and John Cooper. In addition\, studies in conducting have been taken under Richard Hughey and Mike Fansler. Jonathan is a member of Society of Composers\, Inc.\, SEAMUS\, ICMA\, and the Iowa Composers Forum. \n  \nEnrique Tomás and Moisés Horta Valenzuela: NEBULA\nArtists working with deep-learning audio models often find that exploring their high-dimensional latent spaces requires chance-based\, combinatorial\, or technically complex machine-learning techniques. While these approaches can reveal unexpected possibilities\, they also make it more difficult to deliberately guide the models toward outcomes that are musically meaningful or aligned with specific creative intentions. \nIn this improvisation for solo instrument and two performers on live electronics\, we present an alternative approach to create a more interpretable and musically guided latent space exploration. This approach leverages Principal Component Analysis (PCA) applied to pre-encoded RAVE (Realtime Audio Variational Autoencoder) representations to reorganize the latent data into clusters that can be navigated more deliberately in performance. PCA reorganizes the encoded data into clusters based on shared timbral characteristics\, producing data clouds directly connected to the sonic properties of the source material. By structuring access to the latent space in this way\, our method bridges the gap between open-ended exploration and purposeful control\, offering performers a clearer and more intuitive means of shaping sound. \nTo prepare the improvisation\, and prior to the concert\, the solo instrumentalist provides an eight-minute recording that defines the sonic domain of the performance. This recording is encoded and analyzed\, restricting exploration to regions of the latent space shaped by the performer’s own material and giving the electronic musicians a more focused and musically coherent landscape to navigate. During the live performance\, the solo instrumentalist and the two electronic performers interact within this PCA-organized timbral map. Their trajectories through the latent space—along with the evolving clusters and sonic transformations—are projected in real time\, allowing the audience to see how latent-space navigation corresponds to audible change. \nThe musical materials resulting from this setup combine structured instrumental improvisation with electronically generated textures derived from latent-space navigation. While the overall form is left to real-time decisions between the soloist and the live performers\, the resulting sound world often alternates between rhythmically driven motifs—loosely recalling the interactive dynamics of small jazz ensembles—and more abstract electronic layers shaped through PCA-guided trajectories. These electronic textures\, produced by traversing clustered regions of the latent space\, serve as harmonically and timbrally evolving fields against which the soloist can articulate phrasing\, gesture\, and dynamic contour. The custom-built performance interfaces allow the electronic performers to shape these materials with precision\, enabling a responsive interplay in which acoustic action and machine-learned transformations continually inform one another. \nAbout the artists\nEnrique Tomás (*1981) is a sound artist\, researcher and assistant professor at the Tangible Music Lab who dedicates his time to finding new ways of expression and play with sound\, art and technology. His work explores the intersection between sound art\, computer music\, locative media and human-machine interaction.\nAs an individual artist\, Tomás’ activity is centered around ultranoise.es and focuses on performances and installations with extreme and immersive sounds and environments. He has exhibited and performed in spaces of Ars Electronica\, Sonar\, CTM\, IRCAM\, IEM\, KUMU\, SMAK\, NOVARS\, STEIM\, Steirischer Herbst\, Alte Schmiede\, etc.\, and in galleries and institutions throughout Europe and Latin America. \nMoisés Horta Valenzuela is a self-taught sound artist\, technologist\, musician\, and researcher from Tijuana\, Mexico\, based in Berlin. His work spans computer music\, neural audio synthesis\, conversational AI\, and the politics of emerging technologies\, approached through a critical lens that connects ancestral knowledge with contemporary digital culture. He has presented work internationally at Ars Electronica\, NeurIPS ML for Creativity & Design\, MUTEK México\, MUTEK AI Art Lab Montréal\, Transart Festival\, CTM Festival\, Elektron Musik Studion\, and the Sound and Music Computing Conference\, among others. \n  \nSe-Lien Chuang and Andreas Weixler: plastique\ninteractive audiovisual comprovisation for e-quitar\, green leaves & i-hands – GLISS – Green Leaves Imaginary Scenic Score\nDuration: ca. 8 min \nAbout the artists\nAndreas Weixler\, born 1963 in Graz\, Austria\, is a composer for computer music with an emphasis in\nintermedia realtime processing. He is teaching at the mdw Vienna\, InterfaceCulture in Linz and serves associate university professor at the CMS – computer music studio of Anton Bruckner\nUniversity in Linz where he initiated the intermedia concert hall the Sonic Lab.\nStudies of contemporary composition at KUG in Graz\, Austria with diploma by\nBeat Furrer\, completed by international projects and residencies. \nSe-Lien Chuang is a composer born in Taiwan in 1965 and based in Austria since 1991. Her work focuses on contemporary instrumental composition and improvisation\, computer music\, and audiovisual interactivity. She has presented works and lectures internationally in Europe\, Asia\, and the Americas at events such as ICMC\, ISEA\, and NIME. From 2016 to 2019\, she taught for the Computer Music Studio at Bruckner University Linz. Since 1996\, she has co-run Atelier Avant Austria\, specializing in audiovisual interactive systems\, real-time processing and computer music. \n  \nOscar Corpo: Shamanic Protocol\nShamanic Protocol is an online sound ritual performed by a partially damaged virtual entity. Its memory is an incomplete and corrupted archive\, composed of residual sonic materials related to shamanic rituals\, music therapy\, sound-based healing practices\, and data derived from musical epigenetics. Reshaped by the available data and the presence of connected users\, these fragments are reprocessed and reorganised each time the system is accessed\, generating a sonic ritual that follows a recognisable structure yet never manifests in the same way twice. The sound ritual has no declared purpose: it remains unclear whether the entity performs the rite as an attempt to repair itself\, an act of archive restoration\, a process meant to affect human listeners\, or simply because this process constitutes its way of operating. The variability of the outcome may suggest either a gradual recovery or a progressive deterioration of the system. The resulting sonic output exists in a space between therapeutic effect\, system malfunction\, and autonomous algorithmic process. The shifts between fragile calm\, overload\, interruption\, and recovery reveal the instability of the system that generates it. No clear boundary is drawn between healing\, malfunction\, or expression: these states coexist and remain indistinguishable within the process. The rite can be experienced as a purely electronic process\, or human performers\, in any instrumental or vocal configuration\, may take part in its enactment. Musicians are invited to participate in the ritual rather than interpret a fixed musical text. Guided by an open\, interpretative score\, performers do not execute predefined material but engage in the ritual itself\, interacting with the electronic layer by listening\, responding\, and aligning their gestures with the evolving sonic environment. The notation offers indications of behaviour\, density\, register\, and gesture rather than prescribed material; in this way\, performers take part in the rite by freely amplifying\, refracting\, and destabilising the entity’s activity. The score prescribes no precise instrumentation or techniques; in this instance\, the ritual is performed with a string ensemble alongside soprano saxophone\, bass clarinet\, piano\, and percussion. Performers do not guide the system\, nor do they follow it; instead\, they remain in a state of attentive coexistence with its unfolding behaviour. Each performance is therefore situated\, shaped by specific conditions\, configurations\, and presences.\nThe process does not call for interpretation: repair and damage are no longer separable; function and meaning no longer distinguishable. \nAbout the artist\nOscar Corpo (born 8 April 1997\, Naples\, Italy) is an Italian composer based in Hamburg. He studied Composition and Multimedia Composition in Naples\, and is now a PhD candidate at the HfMT Hamburg\, focusing on AI and collective improvisation with Ensemble 404. His work spans electronic\, instrumental\, vocal\, improvisation\, and music theatre. He has collaborated with Alexander Schubert\, Berliner Philharmoniker\, La Biennale di Venezia\, and Lux Nova Duo\, among others. \n  \nRob Canning: A Walk in Polygon Field\nA Walk in Polygon Field is a graphic score environment for controlled improvisation\, composed for 1–4 instrumentalists with electronics and surround diffusion. Three polygons—pentagon\, hexagon\, heptagon—rotate at different rates\, producing polymetric phase relationships (5-against-6-against-7). Performers activate objects orbiting these shapes\, interpreting compound visual motion as sonic material. An outer ring generates OSC data driving spatial processing.\nThe score defines states\, behaviours\, and constraints; performers negotiate what these structures sound like. Each polygon side represents a discrete performance state—pitch region\, articulation\, texture—but specific mappings remain open. Musicians enter and withdraw from a shared texture whose density and pacing emerge from collective decision-making.\nAuthored entirely in SVG\, the work embeds performance semantics directly into visual element identifiers\, executed by a browser-based runtime on networked tablets. This approach\, detailed in the accompanying paper “Scores That Run: Graphic Notation with Embedded Performance Semantics\,” demonstrates how open web standards support animated notation without specialised infrastructure. Each performance traces a different route—music negotiated through shared encounter with a moving score. \nFull Guide to Interpretation\, Programme Notes and supporting materials including Supercollider live electronics patch are available online: \nhttps://robcanning.github.io/oscilla/compositions/polygonfield2026/ \nAbout the artist\nRob Canning (Dublin\, 1974) is a composer\, improviser\, and creative technologist whose work explores animated notation\, improvisation\, and the dynamics of networked musical systems. He holds a PhD in composition from Goldsmiths\, University of London\, where his research examined distributed authorship in computer-assisted music. A long-time advocate of Free and Open Source Software\, he develops Oscilla\, an open-source platform for animated graphic notation and networked performance. \n  \nDenis Polec: DEPRECATED  \nDEPRECATED establishes a recursive feedback loop between a biological subject and a cluster of interpretative algorithms. The work investigates the friction between human indeterminacy and machine determinism. \nThe Setup A lone performer occupies the center of the stage\, stripped of traditional instrumentation. Facing them is a “panopticon” of sensors: computer vision cameras and open microphones. The human subject oscillates between legible behavior and “abnormal” states—engaging in erratic gestures\, non-semantic vocalizations\, and visceral spasms designed to evade learned pattern recognition. \nThe Process Simultaneously\, three isolated AI instances dissect this input in real-time. Unable to process the chaotic reality of the “Now\,” the systems hallucinate: Computer Vision misinterprets trauma as choreography; a Large Language Model forces these errors into a coherent narrative; and Neural Audio Synthesis re-synthesizes the fabrication into sterilized perfection. \nAbout the artist\nDenis Polec operates at the intersection of sound art and algorithmic criticism. His practice rejects the notion of human-machine collaboration\, focusing instead on the friction\, latency\, and inherent violence of predictive systems. Polec constructs adversarial performance systems that expose the limitations of neural networks when confronted with the chaotic reality of the biological body. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/club-concert-3c/
LOCATION:Speicher am Kaufhauskanal\, Blohmstraße 22\, Hamburg\, 21079\, Germany
CATEGORIES:13-05,Club Concert,Music,Special Event
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260514T210000
DTEND;TZID=Europe/Amsterdam:20260514T230000
DTSTAMP:20260423T102500
CREATED:20260421T163434Z
LAST-MODIFIED:20260422T124748Z
UID:10000069-1778792400-1778799600@icmc2026.ligeti-zentrum.de
SUMMARY:Club Concert 4C
DESCRIPTION:Program Overview\nMerzmania\nGintas Kraptavicius \nImprovisation for Spheres \nCalvin McCormack \nMarsia 3\nJonathan Impett \noscheat\nMoritz Wesp\, Eric Haupt and Victor Gelling \nThe Skin of the Earth: Fragments\nPaulo C. Chagas \nThe Long Now III \nCat Hope and Juan Parra Cancino \nTape Speed and Feedback\nAndrew Loveless \n  \nAbout the pieces & artists\nGintas Kraptavicius: Merzmania\nElectroacoustic live electronics performance made using my own created instrument made from computer\, Plogue Bidule software & midi controller assigned to VST plugins. All software parameters controlled\, altered live in a real time during performance using knobs & sliders of midi controller attached to VST plugins parameters. Performance made from synthesized sounds\, no samples or before recorded sounds as fields’ recordings are used. Merzmania it is piece connecting classical music skills with today noise music (slight allusion to noise icon – Merzbow). Merzmania main playing method is real time interaction with computer which i am using on all my live compositions. I am using Computer as Music Instrument just like any other acoustic music instrument. Like a guitar. Onstage i get the same emotional feeling playing with computer as playing with any other acoustic/electric instrument. Main thing in a live performance it is energy and emotion to the pot like to rock’n’roll concerts. Merzmania featuring the motif of the Lithuanian folk song “Teka\, teka šviesi saulė” (“The sun is rising\, the bright sun is rising”). \nAbout the artist\nGintas K (Gintas Kraptavičius) a Lithuanian sound artist\, composer living and working in Lithuania.\nNowadays Gintas is working in the field of digital experimental and electroacoustic music\, making music for films\, sound installations. His compositions are based on granular synthesis\, live electronic\, hard digital computer music\, small melodies. Collaborations with sound artists @c\, Paulo Raposo\, Kouhei Matsunaga\, David Ellis and many others. He has released numerous of records on labels such as Cronica\, Baskaru\, Con-v\, Copy for Your Records\, Bolt\, Creative Sources\, Sub Rosa and others.\nSince 2011 member of Lithuanian Composers Union. He has presented his works\, performed at various international festivals\, conferences\, symposiums as Transmediale.05\, Transmediale.07\, ISEA2015\, ISSTA2016\, IRCAM forum workshop 2017 \, xCoAx 2018\, ICMC2018\,ICMC2022 ICMC2025 ICMC-NYCEMF 2019\, NYCEMF 2020 \, NYCEMF 2021\, NYCEMF 2022\, NYCEMF 2023\, NYCEMF 2024\, NYCEMF 2025\, Ars Electronica Festival 2020\,. Ars Electronica Festival 2023 Ars Electronica Festival 2024 . IRCAM forum workshop 2025 Paris Ars Electronica Forum Wallis 2025\, FARM 2025\nArtist in residency at DAR 2016\, DAR 2011 \, MoKS 2016\, KKKC 2023\nWinner of the II International Sound-Art Contest Broadcasting Art 2010 \, Spain.\nWinner of The University of South Florida New-Music Consortium 2019 International Call for Scores in electronic composition category. \n  \nCalvin McCormack: Improvisation for Spheres\nImprovisation for Spheres is a live electronic work for two custom spherical controllers with reactive visuals. Each sphere combines surface-embedded capacitive touch pads with an inertial measurement unit\, wirelessly transmitting sphere orientation and touch sensing. Each sphere sits in a chalice cradle\, with a ring of touch sensors embedded around the rim. The spherical form factor affords intuitive spatialization\, the sphere’s rotation corresponds to the sound’s position in ambisonics\, making spatial movement as immediate and embodied as pitch selection. Touch pads support expressive melodic and harmonic performance\, and skin-touchpad contact area allowing dynamic and timbral expression. The work explores the sphere as both instrument and spatializer\, where single gestures unite melodic\, timbral\, and spatial control. This audiovisual improvisation demonstrates how spatialization can be performed artistically rather than mixed\, elevated from post-production to real-time expression. \nAbout the artist\nCalvin McCormack is an MST student at the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford University. His research focuses on accessible HCI and inclusive design for musical applications. He also conducts research in auditory neuroscience and plays jazz guitar. \n  \nJonathan Impett: Marsia 3\nThis is the final piece of a series written for the installation Apollo e Marsia in 2024. This work expands the moment in time represented by Tintoretto in his painting La gara tra Apollo e Marsia (c.1545). Apollo\, playing a bowed instrument with sympathetic strings\, has been challenged by the satyr Marsia\, playing a woodwind instrument\, to see who is the greater musician. Ovid’s retelling of the story describes a terrible end for Marsia\, but in the moment depicted by Tintoretto both musicians are waiting for the judgement of Midas\, both trying to remember and assess what they and their competitor have just played. \nThe piece is therefore a play on the nonlinearity of memory under stress as both try to replay the performances in their mind. Moments are recalled\, replayed or intrude\, but are always changing in their reconstruction. Memories of themselves and of the other constantly modulate each other. New constructs emerge in memory through this process\, and obsessive recall generates attractors and mirrors; we know from recent neuroscience that remembering and imagining are essentially the same reconstructive process. \nAt its root\, the material all derives from two hymns to Apollo inscribed in stone at Delphi\, arguably the earliest remaining instances of music notation\, and likewise fragmented by erasures. Across time\, musicians have attempted to reconstruct this partially-lost memory in different ways\, creating new formations in the process. \nHere\, the Delphic material is subject to layers of nonlinear memory process\, implemented in Open Music as forward- and backward-moving wave phenomena\, sweeping up emergent patterns as they develop. This produces a score that often requires the performer to assimilate a polyphony of musical materials and physical behaviours as layers of memory. Analogous processes are used in the recorded and live sound processing\, largely through physical modelling\, cross-resynthesis and filtering – digital and analogue. This is in turn heard through a model of the stringed instrument of Marsia’s opponent\, Apollo. An AI brings the live performance into relation with the behaviours\, memory and projection of both competitors. \nAbout the artists\nJonathan Impett (1956) is a composer\, trumpet player and writer. His work is concerned with the discourses and practices of contemporary musical creativity\, particularly the nature of the technologically-situated musical artefact. Activity in the space between composition and improvisation has led to continuous research in the areas of interactive systems\, interfaces and modes of collaborative performance. Recent works combine installation\, live electronics and computational models with notated and improvised performance\, using fluid dynamics as a unifying behavioural model. A new project Anamnesis takes a radical approach to AI\, identifying creative paths implied but unnoticed. He leads the research group “Music\, Thought and Technology” at the Orpheus Institute\, Ghent. \nRichard Craig (alto flute) was born in Glasgow. He studied at the Royal Conservatoire of Scotland and the Conservatoire de Strasbourg. He performs with groups such as Musikfabrik\, Klangforum Wien\, ELISION and in Scandinavia with CAPUT\, Kammarensemblen. He has released two solo discs of contemporary works\, Vale and Inward\, and recorded for Another Timbre\, Wergo\, FHR\, Métier\, as well as SWR\, BBC and Finnish Radio. Not only a celebrated advocate of contemporary music\, his recent album of the Telemann Fantasias and his improvisations was lauded as “bold\, beautiful and clever” (Gramophone). He is also an improviser\, composer and teacher\, currently Director of Performance at the University of Edinburgh. \n  \nMoritz Wesp\, Eric Haupt and Victor Gelling: oscheat\nThis contribution presents oscheat\, a work-in-progress OSC-based interface\, designed to extend ensemble communication beyond conventional musical gestures. By providing a modular and user-friendly environment\, oscheat allows performers to directly control each other’s digital instruments\, enabling novel forms of interaction\, role-sharing\, and emergent musical structures in real time.\nOur instrumental system is structured into three functional sections reflecting core musical building blocks: synthesizers for melodic and harmonic material\, sequencers for rhythmic organization\, and samplers for vocal and sound-based material.\nAdditional functionality includes real-time MIDI recording and looping\, pitch mapping with support for alternative tunings\, spatialization\, and global macro controls for large-scale structural manipulation. Each performer manages their instruments individually while making the controls accessible through oscheat.\nMoritz Wesp\, Eric Haupt and Victor Gelling are playing an eight-minute improvisation\, demonstrating oscheat’s potential for rapid musical exchange\, shared authorship\, and collective decision-making. By exposing critical control parameters to all participants\, the interface encourages social negotiation and flexible role allocation\, making it relevant for both creative research and educational contexts. \nAbout the artists\nMoritz Wesp lives in Cologne (GER) and plays trombone\, virtual trombone and other instruments that he designs\, programs and builds. As an improviser he is working with different ensembles like Mariá Portugal Erosao\, Matthias Muche’s Bonecrusher or the Simon Rummel Ensemble. Besides this he composes music and is part of the Audio-VR project SONA. \nEric Haupt is a guitarist and composer working in experimental music and punk. He completed his Bachelor of Music at the HfMT Cologne in 2018. He is a founding member of the ensembles Now My Life Is Sweet Like Cinnamon and Lawn Chair\, as well as the initiator of the experimental game-show performance Sport1. His music has been presented at festivals throughout Europe and collaborations include internationally renowned producers Olaf O.P.A.L. and Chris Coady. His punk compositions have been broadcast on international radio stations such as BBC Radio 6 Music. \nVictor Gelling is an improviser and composer who uses stringed instruments including but not limited to upright bass\, tenor banjo\, Pedalsteel- and Nonpedalsteel-Guitars in addition to pedals\, synthesizers and barely working self-coded computer programs to create sounds. Their work spans genres from jazz to noise to electric cowboy songs to complex music\, which culminates in their large ensemble works with Trash & Post-Chaotic Music\, their alt-country/post-punk alias Slowklahoma\, solo works or their playing in the Jorik Bergman Trio. \n  \nPaulo C. Chagas: The Skin of the Earth: Fragments\n\nAbout the artists\nPaulo C. Chagas is a Brazilian-American composer and Professor of Composition at the University of California\, Riverside. With over 220 works across orchestral\, chamber\, electroacoustic\, audiovisual\, and multimedia formats\, his work integrates advanced technology and expressive depth. He studied in Brazil\, Belgium\, and Germany\, earning a Ph.D. from the Université de Liège\, and was composer-in-residence at the WDR Electronic Studio. A Fulbright Scholar (Berlin\, 2022–23) and ICMA board member\, his work is widely performed and published.\nhttps://solo.to/paulocchagas \nBrazilian soprano Adriane Queiroz trained in Pará\, Missouri\, and Vienna. Since 2002/03 she has been a member of the Staatsoper Unter den Linden\, performing roles such as Pamina\, Micaëla\, Susanna\, and Liù. She has appeared at major venues including the Hamburg State Opera\, Semperoper Dresden\, and Wiener Festwochen\, and in concerts at the Musikverein and Konzerthaus Vienna. Her repertoire spans Mozart to contemporary works\, including Schönberg’s Erwartung and Nono’s La fabbrica illuminata\, with recent premieres under Sir Simon Rattle.\nwww.adrianequeiroz.com \n  \nCat Hope and Juan Parra Cancino: The Long Now III  \nThis a scored work for live modular synthesiser performance\, with a backing track. It explores the potential of digital notation for modern electronic instruments\, in this case\, the contemporary modular synthesiser. It is named after the Long Now Foundation\, that aims to provide counterpoint to today’s accelerating culture by encouraging long-term thinking\, fostering responsibility in the framework of the next 10\,000 years. Music provides complex answers to the question of “How Long is Now?”\, and in this work\, a slow descent into very low sound by the performer\, where pitch is either uncontrollable or almost inaudible\, reflects the limits of human action in and perception of sound as it passes through time\, highlighting that there may be other ways to listen\, and other ways to experience our passing through time.\nThe fixed media part of this piece was created at EMS in Sweden\, using the Buchla 200’s 4 x 259 waveform generators and the score is read on the Decibel ScorePlayer\, which also produces the fixed media part. \nAbout the artists\nJuan Parra Cancino studied Composition at the Catholic University of Chile and Sonology at the Royal Conservatoire The Hague\, earning a Master’s degree focused on electronic music composition and performance. In 2014\, he completed his PhD at Leiden University with his thesis “Multiple Paths: Towards a\nPerformance Practice in Computer Music. Parra has been a research fellow at the Orpheus Institute since 2009. \nCat Hope is a award winning Australian composer who focuses on the extremes of sound – from extreme noise to barely audible delicacy. Her works have been performed world wide by ensembles such as Yarn Wire (US)\, the BBC Scottish Symphony (UK) and her works are published internationally on labels such as Hat (Hut) Art\, with her monograph CD Ephemeral Rivers winning the German Critics Prize in 2017. Cat is a represented composer with the Australian Music Centre\, and her music is published by Material Press. Her first opera\, Speechless\, won the Best New Dramatic work in the 2020 Art Music Awards. \n  \nAndrew Loveless: Tape Speed and Feedback\nThis performance presents a live realization of a dual-transport digital tape instrument designed for exploratory composition using playback speed manipulation and controlled feedback. It is performed using a custom-designed system which includes a live visualization that displays the spinning reels to indicate the playback speed of each transport. This provides an engaging visual element that helps the audience follow the sounds as they unfold.\nThe source of the sound material is the distinct\, high-pitched whine of a CRT television’s flyback\ntransformer\, which was chosen for its nearly inaudible high-frequency energy and analog character. One transport initially auditions the sound at normal speed before being dramatically slowed to reveal its hidden textures. The second transport is then introduced at a carefully tuned speed ratio\, allowing the two sources to harmonize and phase against one another. These relationships produce beating patterns and periodic pulses that arise solely from speed interactions rather than from discrete sequencing or event-based control.\nAs the piece develops\, the output of one transport is routed into the input of the other\, introducing overdubbing and pitch-shifted layering. This process generates additional sound material while maintaining continuity with the original material. The performance is further extended by the routing configuration and playback speed chosen during the performance\, rather than fixed delay parameters. Throughout the performance\, changes are gradual and continuous\, allowing structure to emerge organically from simple operational constraints.\nThe performance concludes with a slow attenuation of the feedback\, allowing layers to dissipate organically. Instead of presenting a fixed composition\, the work is shaped through live interaction with the instrument. In doing so\, the performance situates historical tape music techniques within a contemporary digital context. \nAbout the artist\nAndrew Loveless is a graduate student in Music Technology at the Georgia Institute of Technology. Their work focuses on performance-centered instrument design and improvisation\, with an emphasis on preserving tape music techniques and making them more accessible through hands-on\, educational tools. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/club-concert-4c/
LOCATION:ligeti center\, Production Lab (10th floor)\, Veritaskai 1\, Hamburg\, 21079\, Germany
CATEGORIES:14-05,Club Concert,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260516T220000
DTEND;TZID=Europe/Amsterdam:20260516T233000
DTSTAMP:20260423T102500
CREATED:20260422T110714Z
LAST-MODIFIED:20260422T132147Z
UID:10000218-1778968800-1778974200@icmc2026.ligeti-zentrum.de
SUMMARY:Club Club Concert 6C (After Party)
DESCRIPTION:Program Overview\nRushpusher\nEric Honour \nSyzygys \nFiona Xue Ju and Drew Farrar \nUnforeseen Metamorphic\nJoshua Rodenberg and Fumiaki Odajima \nCapture Un-Capturable\nYue Zhang \nCross talk: distributed feedback \nDennis Scheiba \nthese particles we immersed \nAnqi Liu and Han Zhang \n  \nAbout the pieces & artists\nEric Honour: Rushpusher\nRushed onto a Push\, Rushpusher features a rush of buttons pushed rushedly\, to push a sense of rushing\, pushy music\, pressing close like pushing through a dense bed of rushes. Also\, a bass may be dropped. \nAbout the artist\nDevoted to exploring and furthering the intersections of music and technology\, Eric Honour’s work as a composer and saxophonist has been featured around the world in numerous international conferences and festivals like ICMC\, SEAMUS\, MUSLAB\, Sonorities\, EMM\, NYCEMF\, and others. A member of the Athens Saxophone Quartet\, he performs regularly in Europe and the United States\, and has presented lectures and masterclasses at many leading institutions.\nHonour is Chair of the School of Visual and Performing Arts\, Professor of music\, and founder of the Center for Music Technology at the University of Central Missouri\, teaching courses in acoustics\, music technology\, and composition. His work as an audio engineer and producer appears on the Innova\, Centaur\, Ravello\, and Irritable Hedgehog labels\, among others\, as well as on numerous independent releases and he has served as an acoustics consultant and designer on projects ranging from recording studios to classrooms to auditoriums and performance spaces\, most recently serving as the principal designer of UCM’s cutting-edge music technology studios\, which opened in 2022. \n  \nFiona Xue Ju and Drew Farrar: Syzygys \nSyzygys is an electroacoustic improvisation for electric guitar\, pedals\, analog and digital synthesizers\, and live electronics. The performance is based on a real-time interaction between two performers whose sound worlds are continuously shaped\, transformed\, and interwoven through electronic mediation. One performer operates a hybrid setup combining analog and digital synthesizers with custom Max/MSP patches and Ableton Live\, controlled via MIDI to enable responsive sound generation\, processing\, and structural modulation. The other performer plays electric guitar through an extended chain of pedals\, exploring experimental sound production\, noise-based textures\, and timbral instability. \nRather than treating the electronic systems as fixed signal processors\, the performance emphasizes electronics as active agents within an improvisational ecology. Sound materials circulate between guitar\, synthesizers\, and live processing\, creating feedback loops of influence in which gesture\, listening\, and system behavior mutually inform musical decisions. The resulting form emerges through moment-to-moment negotiation\, highlighting fragility\, risk\, and unpredictability as core aesthetic values. \nThe performance explores the tension between control and indeterminacy in live electronic improvisation\, examining how analog and digital systems can coexist and interact within a shared sonic space. By foregrounding performer–performer and performer–system interaction\, the work contributes to contemporary discourse on electroacoustic improvisation\, hybrid performance practices\, and the role of real-time electronic mediation in collaborative music-making. \nAbout the artists\nFiona Xue Ju is a Ph.D. candidate in Experimental Music and Digital Media at Louisiana State University. A composer and media artist originally from China\, she works across sound\, performance\, and visual design. She holds a Bachelor’s degree in composition from Oberlin Conservatory and a Master’s degree in CoPeCo (Contemporary Performance and Composition) from CNSMD Lyon. Her work blends electronic music with multimedia\, exploring immersive\, politically engaged experiences across digital and physical spaces. \nDrew Farrar is a composer\, guitarist\, and educator from St. Louis\, Missouri\, based in Baton Rouge\, Louisiana. His music explores agency and otherness through physical movement\, quotation\, and spectral techniques. His works have been performed by ensembles including RE:duo and the Illinois Modern Ensemble. He received M.M. degrees in Composition and Guitar Performance from the University of Illinois at Urbana-Champaign and is currently pursuing a Ph.D. in Composition at Louisiana State University. \n  \nJoshua Rodenberg and Fumiaki Odajima: Unforeseen Metamorphic\nA seven minute acousmatic performance explores perception as a field where sound becomes a medium of transformation. The work begins with pure sine waves tuned in just intonation\, forming a low intensity sonic layer that permeates the space rather than occupying the foreground. Slow modulation and close interval relationships generate micro beating and phase drift\, unfolding at the threshold of audibility and drawing attention to subtle shifts in listening.\nWithin this continuous membrane\, a second live system of modular synthesis enters as an autonomous partner. Instead of accompanying the sine field\, it negotiates with it\, introducing pulses\, harmonics\, and timbral pressure that can align\, destabilize\, or dissolve. The piece is shaped by interference\, emergent resonance\, and the physical behavior of sound in the room\, producing a shared acoustic field that changes moment to moment. \nAbout the artists\nJoshua Rodenberg is a sound and video artist based in Doha\, Qatar\, where he is Head of the Innovative Media Studios and Assistant Professor at Virginia Commonwealth University School of the Arts in Qatar. His practice connects art\, technology\, and environmental research\, translating natural oscillations and field data into live sonic and visual performance. In 2024 he received the VCU Quest Research Grant and participated in the Arctic Circle Artist Residency in Svalbard. His work has been presented internationally\, including the International Computer Music Conference in Boston\, Haus 1 in Berlin\, and EAI ArtsIT 2025 in Dubai. \nFumiaki Odajima is a Tokyo and Amami based artist working with multichannel pure sine waves\, just intonation\, and long timescale transformations to shape perceptual listening environments. He holds a BFA from The Ohio State University and an MFA from Virginia Commonwealth University. Recent projects focus on large scale sine wave diffusion\, exploring interference\, micro beating\, and sound as material at sensory thresholds. Selected performances include Synthesis at ART SPACE BAR BUENA in 2024 and Re:Synthesis at Safi Heimlichkeit Nikai in 2024\, and he released Icecream Daydreaming in 2020 with the improvisational unit kani kani club. \n  \nYue Zhang: Capture Un-Capturable\nCapture Un-Capturable is an interactive performance that integrates sign language with Mediapipe gesture recognition technology. Grounded in the notion that “sound is formless and sign language is silent\,” the work reimagines translation by placing sign language at the center of artistic expression. Drawing upon the metaphor of the “strobe camera” in sign language\, the piece captures and translates natural phenomena beyond the limits of human perception — from the surging magma within the Earth to the subtle sounds of water\, forests\, and rain in the outer spheres. By centering people with disabilities as both the creative core and source of inspiration\, the work transforms all audience members into equal participants\, enabling them to “listen” through gestures and “see” through sound — a cross-sensory experience where technology\, nature\, and human compassion converge. \nAbout the artist\nZhang Yue (b. 2002) is a member of the International Computer Music Association (ICMA) and the Electroacoustic Music Society of the Chinese Musicians’ Association. She is currently pursuing a master’s degree at the Wuhan Conservatory of Music.\nHer works have been selected for the International Computer Music Conference (ICMC) in 2023\, 2024\, and 2025. Among them\, Flying with the Starlings received the Best Student Work Award at ICMC 2023. She has twice been awarded the Phil Winsor Young Composer Award at WOCMAT (2023\, 2024). Her works The Butterfly Revelation and The Lament of Plants won first prize in the electroacoustic category at the International Electroacoustic Music Competition (IEMC) in 2024 and 2025\, respectively. Her thesis received the Outstanding Bachelor’s Thesis Award at the Wuhan Conservatory of Music and was selected for the National Conservatory Graduate Academic Symposium. \n  \nDennis Scheiba: Cross talk: distributed feedback for mobile devices \nRecent developments in spatial audio have largely focused on fixed loudspeaker arrays or object-based rendering systems\, often implying a privileged listening position and reducing sonic space to a localized perspective of a sweet spot. This work instead questions whether object-based thinking can be redirected from optimizing a sweet spot toward adapting sound spatialization to the room and the bodies within it by using bi-directional audio streaming.\nUsing Stecker\, a custom-built streaming framework\, the microphones and loudspeakers of audience smartphones are accessed via WebRTC to form a distributed\, wireless feedback network. In this setup\, each participant becomes an active acoustic node\, and spatialization emerges from the physical arrangement\, proximity\, and interaction of devices rather than from predefined speaker layouts. The resulting feedback grid produces an embodied and continuously reconfiguring spatial field that blurs the boundaries between performer\, audience\, and sound diffusion. \nAbout the artist\nDennis Scheiba is an artistic and research associate at the Robert Schumann Hochschule Düsseldorf. He works as a composer\, live coder\, and audio-visual artist with a special interest in multi-spatiality and streaming technologies. He has performed at MIT\, Johns Hopkins University\, ZKM\, KUG\, and IRCAM.\nScheiba has a background in mathematics and machine learning and currently researches on audio-only VR environments\, JIT-compilation in DSP environments\, WebRTC streaming\, and packaging of audio-projects. \n  \nAnqi Liu and Han Zhang: these particles we immersed \nthese particles we immersed (2025) is a 50-minute multimedia live set that treats performance as an evolving ecology of touch\, signal\, and shared attention. Built around a DIY sensor instrument\, live electronics\, and real-time visual processing\, the work uses yarn as both material and method\, a soft architecture that binds bodies\, devices\, and projected image into a single\, unstable circuit. Rather than presenting sound and image as parallel layers\, the piece stages their continuous co-production\, where tactile tension\, proximity\, and micro-gestures become the conditions from which sonic and visual events emerge. \nAt the center of the work is translation\, understood not as a neutral bridge but as a set of thresholds that determine what becomes legible. Physical relations are translated into control and transformation\, then translated again into audible and visible behavior. Each translation clarifies and cuts at once; it amplifies certain forces\, pressure\, friction\, breath\, strain\, while compressing others that resist capture. The DIY sensor instrument foregrounds this politics of conversion by making mediation visible. It asks what is gained when embodied experience becomes data\, and what is lost when lived continuity is segmented into events that can be routed\, processed\, and displayed. \nThe system is designed to remain sensitive to failure modes\, noise\, drift\, latency\, and feedback\, not as problems to be corrected but as evidence of an environment acting back. The live electronics operate less as “effects” and more as a responsive habitat\, shaping the performers’ pacing and risk\, while being reshaped by their touch. The visual processing functions as another listening surface\, a reactive field that materializes tension and release\, accumulation and rupture\, making the translation chain perceptible as a changing image ecology. \nParticipation is embedded in the work’s method. Yarn creates a shared infrastructure that requires negotiation\, it constrains and enables simultaneously\, producing a relational dramaturgy of binding and unbinding. Decisions are distributed across bodies\, sensors\, algorithms\, and the room itself\, including its light\, resonance\, and attention economy. The piece treats the performance space as an active participant\, where the smallest shifts in gesture or position can tilt the system from stability into turbulence\, or from turbulence into fragile coherence. \nDeveloped during one of our Visiting Artist Scholar Designer Residencies\, these particles we immersed proposes a way of composing with thresholds\, where form is discovered through real-time negotiation among material\, technology\, and care.\nIn ICMC 2026\, we are flexible to perform this piece in any length as needed. \nAbout the artist\nāññā is an interdisciplinary performative duo formed by multimedia artists Anqi Liu and Han Zhang\, devoted to fluid\, cross-sensory\, and interrelational experiences. We play\, dream\, and create together—expanding the boundaries of perception and space. As lifelong collaborators\, we weave our diverse journeys into a shared artistic language: ski partners carving through mountains and rivers\, practitioners of occult metaphysics immersed in the I Ching and star charts.\nOur work is not merely a collaboration\, but a continuous merging of lives\, thoughts\, and psyches—an evolving dreamscape where creative boundaries dissolve and reassemble in perpetual transformation.\nHaving completed their BROILER Artistic Residency with Oracle Egg and the Visiting Artist Scholar Designer Residency at Rocky Mountain College of Art + Design\, āññā is currently releasing an experimental film with Music For Your Inbox\, Los Angeles\, and preparing for their upcoming show with Dog Star Orchestra in Los Angeles this June. Their debut album is also in progress. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/club-club-concert-6c-after-party/
LOCATION:Stellwerk Hamburg\, Hannoversche Straße 85\, Hamburg\, 21079\, Germany
CATEGORIES:16-05,Club Concert,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
END:VCALENDAR