BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//ICMC HAMBURG 2026 - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:ICMC HAMBURG 2026
X-ORIGINAL-URL:http://icmc2026.ligeti-zentrum.de
X-WR-CALDESC:Events for ICMC HAMBURG 2026
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Amsterdam
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20250330T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20251026T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20260329T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20261025T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20270328T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20271031T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T090000
DTEND;TZID=Europe/Amsterdam:20260511T103000
DTSTAMP:20260428T185011Z
CREATED:20260422T142107Z
LAST-MODIFIED:20260427T154807Z
UID:10000221-1778490000-1778495400@icmc2026.ligeti-zentrum.de
SUMMARY:Paper Session 1a: History of Computer Music
DESCRIPTION:Three papers will be presented and discussed: \n  \nHyunmook Lim: “The History of Japanese Electroacoustic Music for Piano from the Perspective of Media Genealogy”\nThis paper examines the history of compositions for piano and electronics in Japan through the lens of media genealogy. While the development of modern Japanese electronic music emerged nearly in parallel with its European counterparts\, it has often been perceived as lacking a distinctive trend or unified stylistic coherence\, unlike the established traditions of France’s Musique concrète or Germany’s Elektronische Musik. To address this\, the author categorizes the historically inconsistent trajectory of Japanese electronic music by focusing on works for piano and electronics\, tracing the genealogy of specific media that have emerged within the Japanese context. In response to the ICMC 2026 theme\, “Innovation\, Translation\, Participation\,” this study provides a detailed analysis of technological innovation through media genealogies\, offers a new translation of this historical narrative\, and explores the processes of artistic participation that have shaped Japan’s electronic music history. \nPaulo C. Chagas: “Beyond Execution: Unrealizability and the Ontology of Sound in Computer Music”\nThis paper proposes an ontological reorientation of computer music grounded in the concept of unrealizability. Drawing on Giorgio Agamben’s notion of potentiality without act\, it argues that dominant paradigms of electroacoustic and computer music have historically privileged realization\, execution\, and operability as the primary conditions of sonic being. From early studio practices at the GRM and WDR to the consolidation of computer music as an executable\, code-based discipline\, sound has largely been understood as something that exists in order to be realized. Against this background\, the paper proposes to examine a series of practices that destabilize the primacy of execution. 
Practices such as granular synthesis\, live electronic and interactive systems\, and machine-learning-based processes foreground forms of sonic potentiality that cannot be fully individuated\, predicted\, or exhausted by realization\, thereby suggesting unrealizability not as a limitation but as a constitutive dimension of contemporary computer music. By framing sound as a field of suspended potential rather than a command to be executed\, the paper advances an alternative ontology in which listening becomes a mode of use rather than consumption. This perspective invites a reconsideration of compositional agency\, technological apparatuses\, and the political implications of sound practices beyond execution\, emphasizing openness\, contingency\, and inoperativity as critical resources for computer music today.\nAndrea Agostini: “Computer-Aided Composition: A Retrospective and Prospective Outlook”\nComputer-aided composition was established as an autonomous discipline\, distinct from the seemingly more general concept of computer composition\, in the 1980s. Since then\, it has prompted the development of dedicated software tools and specific compositional practices and attitudes. In spite of this\, a definition of what computer-aided composition actually is and\, subsequently\, a retrospective outlook on its past evolution and a prospective one on its possible futures have seldom if ever been attempted. Also\, while development and adoption of new tools have been uninterrupted through the decades\, theoretical reflection was especially thriving until the late 1990s or early 2000s\, and has lost vitality since. 
In this article\, we shall examine past literature in order to trace a historical overview of the term\, implicitly outlining a tentative definition of it and following through the most significant developments of computer-aided composition and its associated toolsets; attempt a necessarily partial overview of how it is practically understood and adopted today; and sketch a personal and incomplete wishlist of what the term could come to mean in some desirable future. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/paper-session-1-history-of-computer-music/
LOCATION:Hamburg University of Technology\, Building H\, Audimax 1\, Am Schwarzenberg-Campus 5\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Paper Session,Session
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T100000
DTEND;TZID=Europe/Amsterdam:20260511T183000
DTSTAMP:20260428T185011Z
CREATED:20260415T092539Z
LAST-MODIFIED:20260417T114151Z
UID:10000111-1778493600-1778524200@icmc2026.ligeti-zentrum.de
SUMMARY:[Off-ICMC] Festival Opening
DESCRIPTION:  \nSounds with Fruits and Vegetables | Photo: feeljazz Festival Jakob Stolz\n\nWe’re kicking off the festival week at Hölertwiete near the Harburg Rathaus S-Bahn station. The Harburg Info center\, run by Harburg Marketing\, will serve as the festival hub for three days. Check out the week’s program\, use our program finder to discover events you might enjoy\, look forward to interactive sound installations\, and explore virtual realities.  \nNo registration required \n  \nThe Off-ICMC\nMusic is what brings us together\, even when everything else pulls us apart.\nMusic everywhere—it is part of our everyday lives. And yet\, we’re hearing it performed live on analog instruments less and less. Instead\, it often reaches us through speakers or headphones\, as files\, from the cloud. What does music mean to you? What does it sound like today? Where does it begin—and where does it end?\nThe ligeti center invites you to listen more closely and discover new sounds—to explore\, experiment\, and play. This year\, ICMC HAMBURG 2026 revives an old tradition: the Off-ICMC\, a free and accompanying festival curated for the general public and anyone curious about computer music. \nAll Off-ICMC events are free of charge.  \n\n  \n \n 
URL:http://icmc2026.ligeti-zentrum.de/event/off-icmc-festival-opening/
LOCATION:Harburg Info\, Hölertwiete 6\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Off-ICMC
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T110000
DTEND;TZID=Europe/Amsterdam:20260511T123000
DTSTAMP:20260428T185011Z
CREATED:20260415T131036Z
LAST-MODIFIED:20260427T154901Z
UID:10000128-1778497200-1778502600@icmc2026.ligeti-zentrum.de
SUMMARY:Paper Session 2a: Music Information Retrieval
DESCRIPTION:Three papers will be presented and discussed:\n  \nAxel Berndt\, Aida Amiryan-Stein\, Manuel Peters\, Meinard Müller and Stefan Balke: “ChoraleWind: An Expressive Wind-Quartet Dataset for End-to-End Rendering from the Neues Thüringer Choralbuch”\nWe introduce ChoraleWind\, a dataset along with a framework for a reproducible end-to-end rendering from the Neues Thüringer Choralbuch (NTCB). The dataset comprises 311 four-part chorales and covers the full pipeline from symbolic score encoding to performance-level rendition and synthesized audio. ChoraleWind includes a rule-based performance model that generates expressive timing\, dynamics\, and articulation\, including metric and structural accents as well as phrase-end gestures from high-quality MEI encoding of the NTCB chorales\, combined with a wind-instrument synthesis based on physical modeling that produces isolated stems and ensemble mixes. The dataset provides aligned symbolic representations\, performance annotations\, and multitrack audio\, enabling systematic training and evaluation of score-to-audio wind-quartet rendering methods under fully controlled conditions. Rather than aiming at state-of-the-art purely data-driven synthesis\, ChoraleWind is designed as a transparent and reproducible testbed for studying expressive performance generation\, timbre modeling\, and evaluation of wind-quartet rendering systems.\nMário Pereira\, António Sá Pinto\, Treasa Harkin and Gilberto Bernardes: “Computational Analysis of Expressive Tempo in Irish Traditional Dance Music”\nThis paper presents a computational study of expressive tempo in Irish traditional dance music\, analysing 136 annotated performances of reels and jigs. Using beat-level tempo calculation\, predominant-tempo estimation\, and deviation-curve analysis\, we examine how timing varies across tune types\, performance settings\, and musical structure. 
Results show that expressive deviations are generally subtle: reels display a mild deceleration tendency\, jigs remain highly tempo-stable\, and solo–ensemble and instrument-specific differences are minimal. Phrase-level clustering reveals three characteristic deviation profiles\, with strong acceleration occurring only in opening phrases\, reflecting common slow-start performance practices. These findings provide\, to the best of our knowledge\, the first systematic quantitative characterisation of expressive timing in this tradition and highlight how micro-variations emerge from stylistic\, technical\, and interpretive factors while maintaining overall temporal stability.\nGilberto Bernardes\, Nádia Moura and António Sá Pinto: “Perpetual Dialogues: A Computational Analysis of Voice–Guitar Interaction in Carlos Paredes’s Discography”\nComputational musicology enables systematic analysis of performative and structural traits in recorded music\, yet existing approaches remain largely tailored to notated\, score-based repertoires. This study advances a methodology for analyzing voice–guitar interaction in Carlos Paredes’s vocal collaborations—an oral-tradition context where compositional and performative layers co-emerge.\nUsing source-separated stems\, physics-informed harmonic modeling\, and beat-level audio descriptors\, we examine melodic\, harmonic\, and rhythmic relationships across eight recordings with four singers. Our commonality–diversity framework\, combining multi-scale correlation analysis with residual-based detection of structural deviations\, reveals that expressive coordination is predominantly piece-specific rather than corpus-wide. Diversity events systematically align with formal boundaries and textural shifts\, demonstrating that the proposed approach can identify musically salient reorganizations with minimal human annotation. 
The framework further offers a generalizable computational strategy for repertoires without notated blueprints\, extending Music Performance Analysis into oral-tradition and improvisation-inflected practices. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/paper-session-2-music-information-retrieval/
LOCATION:Hamburg University of Technology\, Building H\, Ditze Hörsaal (H 0.16)\, Am Schwarzenberg-Campus 5\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Paper Session,Session
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T110000
DTEND;TZID=Europe/Amsterdam:20260511T123000
DTSTAMP:20260428T185011Z
CREATED:20260422T142327Z
LAST-MODIFIED:20260427T155104Z
UID:10000222-1778497200-1778502600@icmc2026.ligeti-zentrum.de
SUMMARY:Paper Session 2b: AI & Music
DESCRIPTION:Three papers will be presented and discussed: \n  \nHiroshi Yamato: “OrbitScore: A Domain-Specific Language for Polymetric Live Coding Based on Multilayered Temporal Structures”\nThis paper presents OrbitScore\, a domain-specific language (DSL) for live coding polymetric rhythm patterns based on the theory of Multilayered Temporal Structures (MLTS). While existing live coding languages such as TidalCycles and Sonic Pi provide rich pattern manipulation capabilities including polyrhythmic support\, OrbitScore offers an intuitive syntax where the beat(n by m) notation directly maps to the theoretical 4:(n/4) framework\, enabling each sequence to maintain its own meter and allowing performers to create intricate polyrhythmic textures in real-time. The system integrates with SuperCollider for low-latency audio synthesis and provides a declarative\, method-chaining syntax designed for live performance. We describe the theoretical foundation\, DSL design\, implementation architecture\, and demonstrate the system’s capabilities through a live coding performance. Our contribution lies in bridging the gap between the theoretical framework of Multilayered Temporal Structures and practical live coding tools\, making polymetric expressions accessible to performers. \nYuan Zhang and Xinran Zhang: “Hexagram-Based Semantic Composition: Discretizing Embedding Spaces into Symbolic Compositional States for Improvised Performance”\nDiffusion-based text-to-audio (TTA) systems such as Udio have introduced a mode of musical making in which linguistic prompts activate high-dimensional latent manifolds to yield contingent\, non-repeatable sonic artefacts. This generative architecture—operating through intersemiotic translation between linguistic signs and high-dimensional latent space—produces distinctive aesthetic conditions that have yet to be adequately theorized. 
This paper introduces latent music as an emergent aesthetic form produced through generative text-to-audio systems such as Udio. Latent music arises from processes of interpolation\, recombination\, and associative drift within high-dimensional latent spaces—existing in states of perpetual becoming characterized by gradient identities\, interreferential drift\, asignifying ruptures\, and ontological indeterminacy. These emergent sonic forms occupy interstitial spaces between recognizable musical signs\, resisting categorical stability while revealing distinctive possibilities for sonic expression. The result is a field of sonic objects marked by spectrality\, liminality\, and cross-material entanglement—sounds that hover between genres\, gestures\, and perceptual thresholds. Drawing on Deleuzian aesthetics\, philosophy\, and an extensive corpus of prompt-generated sonic artifacts\, the paper situates these emergent forms as products of asignifying rupture and aesthetic drift\, where sonic identities dissolve and recombine in unstable assemblage determined by intersemiotic translation between linguistic prompts and audio materiality. This research offers a theoretical framework and critical vocabulary for engaging with these uncanny sonic entities\, proposing that latent music invites listening practices attuned to indeterminacy\, associative resonance\, and the productive tensions of the not-yet-formed. \nColton Arnold\, Zhaohan Cheng and Ajay Kapur: “AI Framework for Dynamic Robotic Instrument Calibration”\nThis paper presents a data-driven calibration framework for robotic musical instruments based on a hybrid ensemble model that combines K-nearest neighbors (KNN) and a multi-layer perceptron (MLP). KNN anchors predictions to recorded acoustic measurements\, while the MLP enables nonlinear generalization and smooth interpolation across the instrument’s playable range. 
A distance-dependent blending strategy integrates the two models\, improving consistency across sparse and dense data. The proposed approach produces stable and repeatable calibration estimates for both pitched and non-pitched instruments\, outperforming standalone models across a range of sampling conditions. This work establishes a scalable foundation for automated calibration in robotic musical systems. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/paper-session-2b-ai-music/
LOCATION:Hamburg University of Technology\, Building H\, Ditze Hörsaal (H 0.16)\, Am Schwarzenberg-Campus 5\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Paper Session,Session
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T110000
DTEND;TZID=Europe/Amsterdam:20260511T173000
DTSTAMP:20260428T185011Z
CREATED:20260421T181209Z
LAST-MODIFIED:20260428T104809Z
UID:10000184-1778497200-1778520600@icmc2026.ligeti-zentrum.de
SUMMARY:Listening Room 2
DESCRIPTION:Fixed Media: Program Overview\n430-+\nAyako Sato \nFMVP!\nGuanjun Qin \nLunar Current\nChufan Zhang\, Jun Wang and Qi Liu \nSawa\nAkiko Hatakeyama \nSuwol for Tape\nSeongah Shin \nTake Me Back to Indonesia\nBoyi Bai \nVentward\nEd Osborn \nWoody\nAdrian Kleinlosen \nZen to Hearth\nYu Linke \n  \nAbout the pieces & artists\nAyako Sato: 430-+\nThe fundamental pitch of the 15th bamboo tube of the Shō\, “kotsu\,” corresponds to the current standard pitch of 430Hz in Gagaku. This acousmatic piece involves listening to 430Hz\, its harmonics\, the sounds that deviate from it\, and unreliable text about the Shō generated by AI. Perhaps. \nSho performance: DEGUCHI Miki \nAbout the artist\nAyako Sato is a composer\, musician\, artist\, and researcher working mainly in the field of electroacoustic music. Her works have been presented at international conferences and festivals (ICMC\, SMC\, NYCEMF\, ISMIR\, WOCMAT\, etc.) and won awards in international competitions (Prix Presque Rien\, Destellos Competition\, International UPISketch Competition\, etc.). She received her Ph.D. from Tokyo University of the Arts in 2019 for her research on Luc Ferrari’s works. After working as a part-time lecturer at Tamagawa University\, Osaka University of Arts\, Tokyo Denki University\, and Shobi Music College\, she is a lecturer at Shizuoka University of Art and Culture starting April 2025. \n  \nGuanjun Qin: FMVP!\nFMVP! is an electroacoustic composition built entirely from the sampled sounds of basketball — the bounce\, the squeak of shoes\, the swish of the net\, and the roar of the crowd. Through sound transformation and spatial movement\, the piece narrates the emotional journey of an athlete: from doubt and criticism to determination\, and finally to victory. Dedicated to basketball legend Stephen Curry\, FMVP captures the rhythm\, intensity\, and inner monologue of a player striving to redefine limits. 
Each percussive impact becomes a heartbeat; each layered resonance a moment of resilience. The composition explores how athletic struggle and artistic creation share the same pulse — persistence\, precision\, and belief. \nAbout the artist\nChampion (Guanjun) Qin is an award-winning composer\, producer\, and topliner\, currently pursuing a PhD in Music Composition at the University of Bristol\, fully funded by the China Scholarship Council (CSC). His works have been performed\, awarded\, or officially selected at major international music and sound art festivals\, including the Denny Awards (USA & China)\, YoungLione*ss Festival (Italy)\, Futura Festival (France)\, and the International Computer Music Conference (ICMC). Champion’s creative practice bridges electroacoustic composition and popular music production\, exploring the intersection of sound design\, cross-cultural aesthetics\, and narrative expression. He has collaborated with and composed music for renowned artists such as Jackson Wang\, a member of GOT7\, one of Asia’s most influential K-pop groups. His production work also extends to film and television\, including the acclaimed animated series GG BOND\, which drew over 50 million viewers in its first week of broadcast. \n  \nChufan Zhang\, Jun Wang and Qi Liu: Lunar Current\n“The ripples of moonlight surge and finally settle into stillness in the current. The trembling of electronic waves all find their peaceful end in the moonlit night.” – The pulses of electronic sound eventually merge into the gentle waves of moonlight\, just as the surges of electric current fade into the breath of the night. This work takes electronic waveforms simulating electric current as its core sound material. 
Through modulation and filtering processing in a digital audio workstation\, it employs techniques such as synthesizer wave shaping\, ambient reverb stacking\, and low-frequency oscillation to create auditory characteristics that blend the texture of electric current with the haziness of a moonlit night. Lunar Current is an immersive auditory experience. It attempts to capture not the moonlight itself\, but the sensory critical state where the quiet night and electronic current intertwine. At this moment\, the technological rhythms of electronic sound and the ethereal silence of the moonlit night together construct a gentle echo of a whispered conversation with the starry night. \nAbout the artists\nChufan Zhang (born in July 2006) is a sophomore at the Communication University of Zhejiang\, and also a young creator who delves into the fields of creative design and blockchain applications. Her representative works include Xuan and Mo Zang. Among them\, Xuan won the second prize in the East China Division of the National University Students Blockchain Competition\, and Mo Zang was awarded the third prize in the Future Designer Competition. During her studies at the university\, she not only won the first-class scholarship of the university but also was awarded the titles of “Merit Student” and “Outstanding Social Worker”\, demonstrating solid professional skills and cutting-edge innovative thinking in both academic research and competition practice. \nJun Wang  \nQi Liu \n  \nAkiko Hatakeyama: Sawa\nIt’s neither close nor far\, neither happened nor never happened. This is a short piano-and-electronics piece that captures a moment in an unfamiliar place. \nAbout the artist\nAkiko Hatakeyama is a composer\, performer\, and artist of electroacoustic music and intermedia. Akiko’s research focuses on realizing her ideas of relations between the body and mind into intermedia works\, often in conjunction with building customized instruments/interfaces. 
It is a form of nonverbal communication with her inner self and with the environment\, including the audience. Expression through sounds and performance brings her therapeutic effects\, helping her process memories and trauma. Her work has been presented internationally at various venues and festivals in the U.S.A.\, Canada\, Chile\, England\, Ireland\, Portugal\, New Zealand\, China\, South Korea\, and Japan. Selected awards include the Best Performance Award at the NIME International Conference\, the winner of the Audio-Visual Composition at the ICMA Showcase: Asia\, the George A. and Eliza Gardner Howard Foundation Fellowship\, and the MacDowell Fellowship. Akiko obtained her B.A. in music from Mills College and M.A. in Experimental Music/Composition at Wesleyan University and completed her Ph.D. in the MEME program at Brown University. Her mentors include Alvin Lucier\, Anthony Braxton\, Ronald Kuivila\, Maggi Payne\, Chris Brown\, John Bischoff\, James Fei\, and Butch Rovan. She is currently an associate professor of Music Technology at the University of Oregon. \n  \nSeongah Shin: Suwol for Tape\nI became drawn to the beauty of Jeju\, South Korea\, a volcanic island known for its strong winds and constantly shifting natural soundscape. As I spent more time there\, I became increasingly aware that the sounds of nature often echo artificial\, human-made sounds. I immersed myself in the sky\, the air\, and the movements—and sounds—of wind\, birds\, and insects. Rather than separating nature and humanity\, I developed my work toward an integrated auditory world\, focusing on new sonic environments created through the blending of field-recorded natural sounds and computer-generated sounds. \nAbout the artist\nComposer Seongah Shin works in the fields of contemporary music\, music for the performing arts\, and electronic music. 
She earned a Bachelor of Music in composition from Chugye University for the Arts\, a Master of Music in electronic music composition from the Peabody Institute of the Johns Hopkins University\, an MFA in sound design from the University of Missouri–Kansas City\, and a DMA in composition. She has held a sound designer residency with the Missouri Repertory Theatre and an artist residency at EMPAC at RPI. She created the MixMediaImprov. series and presented ten solo creative music concerts. In addition to collaborative projects such as the Thin Line Project\, she co-founded the Asia Computer Music Project (AMCP) and served as director for Asia/Oceania of the International Computer Music Association (ICMA). She is currently a professor of composition at Keimyung University\, Daegu\, South Korea. \n  \nBoyi Bai: Take Me Back to Indonesia\nThis work is rooted in a field recording made in Madobag Village\, Mentawai Islands\, Indonesia\, capturing children playing near an old well. As a sonic memory\, it inspired the composer to reflect on the contrast between fleeting moments of travel serenity and the pressures of everyday life. The work explores the tension between two acoustic worlds. It opens with the calm of the island\, employing gentle drones and textures to construct a dreamlike space between the external environment and internal memory\, reimagining how memories emerge in times of longing. Sharp phone alarms and daily noises then shatter this tranquil soundscape\, marking the collapse of the imagined realm. In the end\, the work maintains an open\, unresolved narrative tension\, oscillating between memory and the present. \nAbout the artist\nBoyi Bai is a composer and sound artist specialising in field recording\, soundscape composition and interactive VR spatial audio\, whose practice-led works transform environmental sound into immersive auditory spaces while exploring the intrinsic relationships between place\, memory and media. 
His works have been widely presented at internationally acclaimed festivals\, art exhibitions\, and radio programmes\, including BBC Radio 6\, TagTEAMS 2026\, MA/IN Festival\, SOUND/IMAGE Festival\, MANTRA\, PAYSAGES | COMPOSÉS Festival\, and the San Francisco Tape Music Festival\, building an extensive exhibition profile in the global fields of sound art and electroacoustic music. His distinctive artistic approach has been recognised with the Gold Award in the Electronic Acousmatic Music category at the 6th Denny Awards Electronic Music Competition\, a shortlist for the Sound of the Year Awards 2024\, and other internationally recognised professional honours. \n  \nEd Osborn: Ventward\nVentward is built from recordings of several performances using tabletop guitar and electronics which were edited into a single work. It explores a series of sound states to produce a shifting and evolving cluster of sound\, one that gradually expands its tonality and frequency range. As it does so it focuses on distilling the acoustic field down to its core textures of processed and re-processed sounds. The piece also explores a structural space that exists between live improvisation and studio composition. \nAbout the artist\nEd Osborn (1964) works with many forms of electronic media including installation\, video\, sound\, and performance. He has presented his work at the San Francisco Museum of Modern Art\, the singuhr-hörgalerie (Berlin)\, the Berkeley Art Museum\, Artspace (Sydney)\, the Institute of Modern Art (Brisbane)\, the ZKM Center for Art and Media (Karlsruhe)\, Kiasma (Helsinki)\, MassMOCA (North Adams)\, the Yale University Art Gallery\, and the Sonic Arts Research Centre (Belfast). Osborn has received grants from the Guggenheim Foundation\, the Creative Work Fund\, and Arts International and been awarded residencies from the DAAD Artists-in-Berlin Program\, the Banff Centre for the Arts\, Elektronmusikstudion (Stockholm)\, STEIM (Amsterdam)\, and EMPAC (Troy\, NY). 
He is Professor of Visual Art and Music at Brown University. \n  \nAdrian Kleinlosen: Woody\nSound synthesis and spatialization generated with Csound\, voices with espeak-ng\, mixed in Pro Tools. Text based on a dialogue from a famous movie. \nAbout the artist\nAdrian Kleinlosen is a composer working with instrumental\, vocal\, and electronic music. His work focuses on structure\, rhythm\, and form\, often based on the superposition of independent musical layers and processes rather than linear development. Questions of temporal organization and formal articulation play a central role in both his acoustic and electronic works. In his electronic music\, Kleinlosen composes algorithmically\, using a range of software environments and programming languages. Computational tools are integral to his compositional thinking and are used to design musical structure\, temporal processes\, and formal relationships across different media. Kleinlosen holds degrees in composition and musicology and received a doctorate (Dr. phil.) for research on musical structure and form in contemporary music. In addition to his compositional work\, he has been active as an educator and lecturer in composition\, music theory\, and artistic research. \n  \nYu Linke: Zen to Hearth\nThis piece uses temple bells as the core sampling material\, with the theme of creating an auditory journey from spiritual seclusion to facing reality. “Zen” represents spiritual seclusion\, while “Hearth” represents the mundane hustle and bustle of the world. The original intention is to escape from reality and construct an ideal world. At the beginning\, the clear bell ringing\, accompanied by minimalist electronic tones\, unfolds\, depicting a secluded ideal world of Zen\, where the creator briefly withdraws from the chaos of the mundane world and escapes. 
As the melody progresses\, the echoes of the bells gradually weaken\, and concrete electronic rhythms and low-frequency textures gradually enter\, symbolizing that the ideal Zen space is gradually penetrated by the reality of the world. The two sound elements interweave in the music to express the mutual integration and non-contradiction of ideals and reality. As the piece approaches its end\, the bells serve as the background\, blending with the rhythmic movement of the realistic clock\, expressing that the chaotic time elements in reality struggle within the atmosphere of the ideal world\, disorder eventually returns to calmness in the temple bells\, highlighting the transformation from “Zen” (spiritual seclusion) to “Hearth” (mundane hustle and bustle) – escape is not the ultimate answer; the reconciliation of ideals and reality is the focus of this auditory narrative. \nAbout the artist\nYu Linke\, born in August 2004\, is currently a third-year undergraduate student majoring in Music Sound Direction in the Composition Department of Wuhan Conservatory of Music. In 2023\, she was admitted to the university with the top score in her major\, focusing on academic practice in composition creation and sound engineering. During her time at school\, her research and practical achievements have covered professional composition competitions and interdisciplinary technology contests. She has successively won the school-level composition award\, the first-class scholarship\, and two second prizes in provincial competitions\, demonstrating solid academic accumulation and outstanding innovative practical ability in the intersection of composition art and sound technology. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/listening-room-2-1/
LOCATION:Hamburg University of Technology\, Building A (A 0.14)\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Listening Room,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T110000
DTEND;TZID=Europe/Amsterdam:20260511T173000
DTSTAMP:20260428T185011
CREATED:20260421T183941Z
LAST-MODIFIED:20260428T102551Z
UID:10000183-1778497200-1778520600@icmc2026.ligeti-zentrum.de
SUMMARY:Listening Room 1
DESCRIPTION:Fixed Media | Program Overview\nPlight of the Monarch\nSalvatore Siriano \nAxis of Frost\nLiuyang Tan \nscanning\nKeisuke Yagisawa \nVeil-Audiovisual performance with real-time motion detection by Media Pipe\nYiting Shao \nEbow Supernova\nCristiano Riccardi \nInterwoven Realms: The Threefold Domain of Consciousness\nQing Ye and Yuxue Zhou \nOkinawa Blue Note\, Recalled\nYerim Han \nQuantum Sphere & Sound Sympathy — Composed for Guzheng and Quantum Computing\nWeijia Yang \nThe Orphic Shimmer onto the 192 Steps\nWanjun Yang \nTranscendence: Performance without Presence\nJinwoong Kim \nTriangulation\nTalia Amar \nWhispers That Are Heard\nJingfan Guo \nLabyrinthe Souriant (Smiling Labyrinth)\nShih-Lin Hung and Ju An Hsieh \nEchoes of the dial\nYunpeng Li \nAbout the pieces & artists\nSalvatore Siriano: Plight of the Monarch\nMonarch butterfly populations face ongoing and compounding threats driven by habitat loss\, pesticide exposure\, invasive plant species\, and continued encroachment on open land where milkweed once thrived. Since the mid-1990s\, eastern migratory monarch numbers have fallen to a fraction of their historical peaks; although recent seasons have shown modest recovery\, populations remain far below long-term averages. \nWithin this context\, the work traces key stages of the monarch lifecycle\, including overwintering in Mexico\, migration\, mating\, and reproduction\, using scientific data from the Monarch Joint Venture and the U.S. Geological Survey translated into sonic parameters through additive and FM synthesis. Long-term population trends shape the evolving texture\, dynamics\, and rhythmic behavior of the sound\, allowing ecological data to inform the temporal and spectral structure of the audio. \nTranslation also operates across media. 
Original filmed footage from the Fox River Valley in Illinois\, a recurring migratory and breeding landscape for eastern monarch populations\, is transformed through point-cloud and depth-camera processes. Human presence and natural environments are rendered as shifting\, particle-based forms whose fragmentation mirrors the precarity of monarch habitats\, situating ecological data within a perceptual and embodied frame rather than a purely representational one. \nThe work concludes with documentation of a community-based public artwork that distributes milkweed seeds to local residents. While the piece does not involve direct audience interaction\, this closing gesture reframes participation as shared responsibility. Rather than positioning environmental change solely at the level of policy\, the work emphasizes individual and community-scale actions\, such as reducing pesticide use\, planting milkweed and other native species\, and allowing greater biodiversity within managed landscapes\, as tangible responses to ongoing habitat loss. Because eastern North American monarch butterflies lay their eggs exclusively on milkweed\, these localized decisions directly shape their capacity to survive and reproduce. \nAbout the artist\nSalvatore Siriano is a Chicago-based composer\, audiovisual artist\, and educator whose work explores the relationship between sound\, image\, and the natural environment through digital media. His recent works have been presented at Sound/Image Festival (UK)\, SICBM (Brazil)\, Seoul International Computer Music Festival\, Art Alive Festival (Portugal)\, WOCMAT (Taiwan)\, NOIS//E (Italy)\, as well as ICMC\, NYCEMF\, and SEAMUS. He is full-time music faculty at Triton College. \n  \nLiuyang Tan: Axis of Frost\nAxis of Frost is the fourth movement of the electronic music suite Four Seasons Soundscapes. 
Drawing inspiration from the microscopic dynamics of ice and snow\, the composer employs wind chimes\, gears\, and metallic collisions as primary sound materials. Through the interweaving of pulsating rhythms and howling cadences\, the work evokes a frigid soundscape of crystallizing snowflakes\, swirling ice particles\, and surging glacial undercurrents. \nAbout the artist\nTan Liuyang\, a graduate student in the Music Engineering Department of the Sichuan Conservatory of Music\, studies electronic music composition with Professor Lu Minjie. He is a member of EMAC (Electroacoustic Music Association of China). His research focuses on inter-media composition of electroacoustic music\, and his works have won prizes and been selected for presentation at international music events\, including MUSICACOUSTICA-HANGZHOU\, ICMC (Ireland\, China\, South Korea)\, Earth Day Art Model\, the China Computational Art Conference\, the MA/IN Festival in Italy\, the International Electronic Music Competition (IEMC\, Shanghai)\, SEAMUS\, and the New York City Electroacoustic Music Festival. \n  \nKeisuke Yagisawa: scanning \nThis video work explores the human perception of visual images. In response to art critic Clement Greenberg’s thesis about the immediacy and autonomy of painting\, philosopher Vilém Flusser argues that a “scanning” process occurs when perceiving a two-dimensional work of art. This video work takes this thesis as its theme\, expressing the instantaneous phenomenon of a light bulb breaking as visual and acoustic variations. Max and Processing were used for the video and audio processing. \nAbout the artist\nKeisuke YAGISAWA is an audiovisual artist. He studied electronic music\, video\, and visual art at the Royal Academy of Art in The Hague (Netherlands) and the Tokyo University of the Arts (Japan)\, and received a doctoral degree (DMA) from the Kunitachi College of Music in Japan. His works have been presented at international conferences and festivals including ICMC\, NYCEMF\, and SICEMF. 
He is currently an assistant professor for electronic music and technology art at Tamagawa University. \n  \nYiting Shao: Veil-Audiovisual performance with real-time motion detection by Media Pipe\nThis work employs real-time motion capture of the dancer to generate audiovisual elements in parallel. It is inspired by The Painted Veil by W. Somerset Maugham.\nI. Time and again\, a veil is woven around oneself\, until the original self is forgotten.\nII. The moment the veil is lifted comes only after a long and painful struggle.\nIII. Through repeated loss and searching\, one is left to wonder—beneath the veil\, is this the true self? \nAbout the artist\nYiTing Shao\, born in Hebei\, China\, in 2000\, received a Bachelor’s degree in Violin Performance from the Communication University of Zhejiang in China and completed a Master’s degree in Composition at Dankook University in Korea. She is currently pursuing a Doctorate in Electro-acoustic and Instrumental Composition at Hanyang University. Her work was presented at the 2025 International Computer Music Conference (ICMC) in Boston. \nPerformer: Xinran Xu (Liaoyang\, Liaoning Province\, China). Xinran Xu is a dancer and choreographer trained in both street and contemporary dance. She graduated from the Beijing Modern Music Academy and Dankook University. She won 1st place at Hip Hop International (Beijing Regional) and received the Gold Prize in Contemporary Dance at the 6th C-DAK International Dance Competition (2025). She also competed in World of Dance\, Disco Connection\, and Danceholic. She has worked as a choreographer and performer in multiple showcase performances and appeared in the dance program “Ttechum (떼춤)”. Currently\, she is active in Korea as a member of Blue Dance Theater 2\, ISSUE Dance Crew\, and Sparky. Her work focuses on the fusion of street and contemporary dance. 
\n  \nCristiano Riccardi: Ebow Supernova\nThis audiovisual work proposes a phenomenological investigation of interior space through the sensible representation of a cosmic event: the unfolding of a supernova as both metaphor and device for the alteration of corporeal consciousness. This work proposes an experience of corporeal subtraction\, the progressive dissolution of the body’s boundaries\, the indifferentiation between subject and object. Through sonic rarefaction and luminous beams\, the work induces a meditative state that reconfigures the relationship between spectator and cosmic matter. This is not mere contemplation\, but rather an interpenetration with the intensities that constitute reality itself. The interior journey becomes indistinguishable from the journey through cosmic spaces: both experience the same phenomenon of rarefaction\, illumination\, and the attenuation of boundaries. On a phenomenological plane\, the supernova represents the unveiling of what is hidden—not as a remote event\, but as an intimate revelation of the luminosity that constitutes our own materiality. The listener experiences a form of dilated consciousness\, where the awareness of being part of a force greater than oneself becomes the corporeal experience of one’s own dissolution. The musical and visual rarefaction operates an ascesis from the domain of the speakable and the representable\, leaving pure intensity and openness toward the unsaid—a liminal space where the microcosm of interiority and the macrocosm of stars interpenetrate without boundaries. The composition is structured around twelve independent chromatic lines derived exclusively from samples of an ebowed guitar\, mapped into a custom-built synthesizer that preserves the instrument’s characteristic infinite sustain. 
Organized into four registral groups (three sopranos\, three altos\, three tenors\, three basses)\, the voices operate as parallel streams converging and diverging through close semitonal proximity\, generating dense harmonic clusters. Staggered entrances and overlapping durations create gradual transformations of harmonic density\, privileging timbral evolution over melodic narrative. The visual component translates each musical line into concentric circles responding in real time to amplitude variations\, creating a dynamic field of overlapping geometric forms that reflect sound-wave propagation and harmonic density. By foregrounding chromatic density\, sustained sonority\, and visual abstraction\, Ebow Supernova proposes an immersive experience in which individual elements dissolve into a unified perceptual field—interrogating the contemporary paradigm of corporeality and suggesting that the deepest contact with reality might paradoxically consist in the negation of the biological body: a journey toward the luminosity that traverses and transcends it. \nAbout the artist\nCristiano Riccardi is a multi-instrumentalist and sound designer with over 30 years of experience in live and studio practice. His recent work spans recording Fausto Razzi’s Memoria (2020) and Lontano (2021)\, performing Razzi’s scenic piece Protocolli (2023)\, arranging Stockhausen’s Tierkreis (2025\, awarded for interpretation)\, and contributing to an intermedial reworking of Stravinsky’s L’Histoire du Soldat. He is currently pursuing a Master’s in Electronic Music at the Conservatorio di Santa Cecilia in Rome\, focusing on electroacoustic composition and real-time performance. \n  \nQing Ye and Yuxue Zhou: Interwoven Realms: The Threefold Domain of Consciousness\n“Overlap: The Three Realms of Consciousness” is a multimedia musical work that explores the deep structures of the human psyche. 
The sonic dimension includes ASMR trigger sounds—such as wood\, metal\, and human oral noises—woven into an arch-shaped structure (ABCB’A’) that connects Freud’s three dimensions of the preconscious\, the unconscious\, and consciousness. Through TouchDesigner\, sound and visuals jointly construct a psychological landscape\, revealing the interlacing and transformation of multidimensional consciousness within dreams. The audience is drawn into a psychological space that transcends reality\, experiencing the flow and reflection of consciousness through the fusion of sound and form. \nAbout the artists\nQing Ye is a composer and doctoral student in Music Technology at Nanjing University of the Arts\, supervised by Professor Xuan Wang. She is a member of the Electronic Music Society of the Chinese Musicians’ Association and holds a Level-3 composer certification. Her works have been presented at international composition competitions including the Hangzhou International Electronic Music Festival and the Sibelius and Vivaldi International Music Competitions. Her practice focuses on computer-assisted composition and audiovisual creation. \nYuxue Zhou is a Ph.D. in Musicology at the Communication University of China under the supervision of Professor Xuan Wang. Her creative work focuses on electronic and multimedia music. She has received awards at major composition competitions including MUSICACOUSTICA-BEIJING\, the Hangzhou International Electronic Music Festival\, and the Vivaldi International Composition Competition. Her works have been presented in national arts projects and international multimedia music events. \n  \nYerim Han: Okinawa Blue Note\, Recalled\nThis audiovisual fixed media work is based on recollected memories following a trip to Okinawa and a subsequent viewing of the film Okinawa Blue Note. 
Using sound materials extracted from travel videos\, the piece explores how memory—already shaped and idealized through recollection—is further manipulated and restructured over time. The piece is conceived as a dive into memory: water functions as a medium that distorts and contains remembrance\, while layered and transformed sounds construct an emotional landscape of mediated recall. \nAbout the artist\nYerim Han (b. 1997\, South Korea) is a composer currently pursuing a Master’s degree in Composition at Hanyang University. Trained in contemporary acoustic music\, she is also actively engaged in MIDI-based composition\, electronic music\, and commercial music practices. Her work explores diverse musical languages across acoustic and digital media. \n  \nWeijia Yang: Quantum Sphere & Sound Sympathy — Composed for Guzheng and Quantum Computing\nThis work takes classic guzheng music as its creative foundation and relies on an independently developed interactive quantum synthesizer system to construct a cross-temporal dialogue between “classical artistic conception” and “quantum timbre”. The submitted version is an audio-visual hybrid built on a TouchDesigner visual-effects engine\, while the live performance version connects to real-time instrumental performance\, realizing a complete closed-loop performance of “gesture — quantum sound — instrument”. The guzheng melody is processed through quantum gate algorithms and transformed into electronic sounds with the characteristics of a quantum superposition state. Meanwhile\, a real-time visualization engine generates dynamic images of quantum Bloch spheres and particle flows\, ultimately constructing an immersive\, integrated audio-visual experience. 
Inspired by High Mountains and Flowing Water of the Shandong Guzheng School\, this work inherits its skeletal structure and core backbone notes\, and innovatively reshapes the musical form through quantum timbre\, presenting a transformation path from traditional art to future media art. \nAbout the artist\nWeijia Yang\, Ph.D.\, is a full-time postdoctoral researcher at the Shanghai Conservatory of Music. He holds multiple academic appointments\, including Excellent Innovation and Entrepreneurship Tutor for Shandong Province’s “Internet Plus” Program\, Member of the Institute of Electrical and Electronics Engineers (IEEE)\, Member of the Chinese Association for Artificial Intelligence (CAAI)\, Member of the Electronic Music Society of the Chinese Musicians Association\, and Reviewer for 8 A-class core journals indexed by SCI/SSCI (such as PLOS ONE and Frontiers in Psychology). He has published 8 core papers indexed by SCI\, SSCI\, EI\, Scopus\, and the Peking University Core (PKU Core) of China\, as well as numerous non-core journal papers\, obtained 3 software copyrights\, and served as Principal Investigator or Key Participant in 12 research projects at national\, provincial\, and municipal levels. He has mentored 6 national and provincial A-class innovation and entrepreneurship projects that received funding and awards. Additionally\, he has composed over 10 representative electronic music works (e.g.\, Nine-Colored Deer)\, which have been released on major music platforms; his works have won multiple awards and been performed in numerous exhibitions and international competitions at home and abroad\, such as ICMC (International Computer Music Conference) and WOCMAT. 
\n  \nWanjun Yang: The Orphic Shimmer onto the 192 Steps\n“The Orphic Shimmer onto the 192 Steps” is an interactive live-coding audio-visual performance that explores the role of art as a “harmonizing force” within the turbulent landscape of contemporary civilization. The work takes its title from the 192 steps of the Odessa Staircase\, abstracting this historically and cinematically significant site into a topological space of tension and dispersion. By invoking the myth of Orpheus – the figure who restored order through music – the piece builds a philosophical bridge between classical humanitarian ideals and modern algorithmic logic. \nTechnical Framework \nThe work is built on a sophisticated integration of live coding\, modular synthesis\, and generative visuals:\n* Audio Synthesis: Primary sound design is executed in VCV Rack\, employing a hybrid of subtractive\, wavetable\, and granular synthesis. A foundational layer of algorithmically generated Shepard tones creates an auditory illusion of “infinite ascent\,” symbolizing the cyclical pain and progress of history.\n* Live Interaction: Sonic Pi serves as the central engine for real-time algorithmic restructuring. The performer uses MIDI controllers to manipulate the density and spatialization of the sound field\, facilitating a dialogue between rigorous code and human intuition.\n* Visual Generative Design: Developed in Processing\, the visual layer utilizes the OSC protocol for synchronization. Spectral energy and transient parameters from the audio drive fluid\, geometric “shimmers” that map onto the metaphorical 192 steps. \nAbout the artist\nWanjun YANG is an engineer\, programmer\, sound designer\, researcher\, and electronic musician. He is currently an associate professor in the Music Engineering Department of the Sichuan Conservatory of Music. For the past 26 years\, he has lived in Chengdu\, Sichuan Province\, in southwestern China\, and taught at the Sichuan Conservatory of Music. 
His research and creative interests lie in Acoustics and Psychoacoustics\, Sound Design\, Software Developing\, New Media Art\, Multimedia Design. Since 2011\, he attended the EMS Annual in New York\, followed by participation in an electronic music exchange at the University of Oregon in 2012; in 2017\, his work was selected for ICSC 2017 in Nagoya and his paper selected for ICMC 2017 in Shanghai; he served as Concert Reviewer for ICMC 2018 in 2018; in 2019\, his pieces were selected and performed at ICMC 2019 and NYCEMF 2019 in New York\, alongside participation in another electronic music exchange at the University of Oregon and visits to CCRMA at Stanford University and UCLA; in 2020\, his works were selected and performed at the NYCEMF 2020 Virtual Online Festival; from 2021 to 2025\, his compositions were continuously selected and performed at ICMC\, NYCEMF\, and ICSC international conferences; additionally\, he has been a long-term reviewer for ICMC\, IEMC\, and NCDA. \n  \nJinwoong Kim: Transcendence: Performance without Presence\nTranscendence is an audio-visual performance interface that reimagines the relationship between performer interaction and algorithmic autonomy. The system utilizes a gamified “turret-defense” mechanic as a metaphor for stochastic sound generation. The user places “turrets” on a grid\, which autonomously track and engage moving targets based on proximity algorithms. This interaction serves as a direct translation of spatial logic into sound: distance defines intensity\, angle determines stereo panning\, and target properties dictate pitch and timbre\, creating a real-time sonification of digital conflict. \nA core innovation of Transcendence lies in its distinct “Performance Mode.” In traditional Human-Computer Interaction (HCI) for music\, the mouse cursor serves as a constant visual anchor\, reminding the user of the computer’s presence as a tool. In this work\, the cursor is deliberately rendered invisible during performance. 
While the performer retains control over the grid\, the visual representation of their “hand” is removed. \nThis design choice—“Performance without Presence”—dissolves the barrier between the creator and the creation. It shifts the cognitive load from operating a UI to immersing oneself in the audio-visual feedback loop\, allowing the performer to become a “ghost in the machine.” The result is a self-generating\, yet controllable\, polyphonic soundscape where the interface disappears\, leaving only the pure translation of logic into art. \nAbout the artist\nJinwoong Kim is a South Korean composer\, musician\, and media artist. He received his Ph.D. in Intermedia Arts from Tokyo University of the Arts\, where he studied under Professor Kiyoshi Furukawa. His creative practice spans a wide range of fields\, from contemporary computer music to interactive media installations\, with a focus on integrating compositional methodologies with emerging technologies and cross-disciplinary thought. Drawing upon a diverse background in music\, visual art\, engineering\, and the natural sciences\, he has developed custom software systems—including BODIC and KCAC—to explore new forms of audiovisual expression.\nHe is currently a full-time faculty member in the Digital Media Design major within the Global Elite Division at Yonsei University\, where he teaches courses on creative coding\, computational design\, and media-based artistic practices. \n  \nTalia Amar: Triangulation\n“Triangulation” uses three different electronic music techniques that serve the same goal: to expand the possibilities of the acoustic piano. Each of these three techniques explores a different aspect of human-computer interaction. The pianist controls the electronics from an iPad\, choosing when to switch between the three patches\, and the pianist’s relationship with the computer changes in each patch. 
In the first patch the computer “listens” to the piano and reacts to it by performing the same notes with modifications such as quarter-tone modulations\, reversing\, and stretching. The electronics in the second patch is pre-recorded and multiplies the piano\, with the effect that it sounds as if many pianos were performing at the same time. In the third patch the electronics records the piano performance and plays it back with different effects\, building up an aleatoric wall of pianos that is not possible to perform acoustically. \nAbout the artist\nDr. Talia Amar is the recipient of many international awards\, including the prestigious Prime Minister’s Award (2018)\, the Acum prize for best piece of the year (2022)\, the Acum award (2019)\, the Rosenblum Prize for Promising Young Artist (2016) by the Tel Aviv Municipality\, and the Klon Award for young composers granted by the Israeli Composers League. Recently she was the winner of The Next Voice – a call for scores from Israeli composers. Her piece For Orchestra I was unanimously selected from 152 submissions and will be performed by the Israel Philharmonic under the baton of Lahav Shani in March 2026 in Tel Aviv\, Haifa\, and Jerusalem. She was selected by the violinist Renaud Capuçon to participate in the Festival New Horizons d’Aix en Provence 2022\, where her piece\, commissioned especially for the festival\, was performed. In 2022 her piece “Labyrinth” was commissioned and performed at Festival Présences by Radio France in Paris. She has been selected to represent Israel at festivals including ISCM World New Music in Vancouver\, the ECCO Festival in Brussels\, the Asian Composers League Festival in Taiwan\, ICMC in Seoul\, and SMC in Austria.\nIn 2017\, Talia joined the composition faculty at the Jerusalem Academy of Music and Dance in Israel\, where she is also Head of Technology and Innovation. 
She is also a council member of the Israeli Composers League and the electronics performer of the Meitar Ensemble. \n  \nJingfan Guo: Whispers That Are Heard\nComposed for Arduino and Max/MSP\, this work employs a multi-sensor interface as its primary vehicle. It centers on two core sonic materials: whispering voices and African percussion. The former signifies the individual and the secret\, while the latter points to the collective and its driving force. The work aims to superimpose these elements within a single sound field\, erasing the boundary between the individual and the collective. Amidst sonic entanglement and compression\, intimate whispers are deprived of their original space of existence\, alienated into mere components of the rhythm. This is\, at once\, an act of listening to secrets and a scrutiny of the clamor. \nAbout the artist\nJingfan Guo\, a native of Tai’an\, Shandong Province\, China\, is a member of the Electronic Music Society of the Chinese Musicians Association (EMAC) and a postgraduate student in Computer Composition at the Wuhan Conservatory of Music\, class of 2024\, under the guidance of Professor Li Pengyun. His main research interests include electroacoustic music\, sensor interaction\, and Kyma sound design. His major works include “Mute Water” (electroacoustic music)\, “Liminal Space” (mixed music)\, “Dissolving Voice” (for Kyma and computer)\, and “Whispers That Are Heard” (for Arduino and sensors). \n  \nShih-Lin Hung and Ju An Hsieh: Labyrinthe Souriant (Smiling Labyrinth)\n“Labyrinthe Souriant” (Smiling Labyrinth) is an interdisciplinary electroacoustic work exploring the fluid boundary between visual art and sonic translation. The piece is based on a hand-drawn graphic score created by a visual artist\, who utilizes traditional staff paper as a canvas for organic\, labyrinthine line-work and anthropomorphic silhouettes. The composition utilizes a performance-led approach to sound design. 
Using the graphic score as a primary visual stimulus\, the composer engaged in a one-take improvisation session via MIDI controllers mapped to a customized Ableton Live environment. This method ensures that the temporal flow of the music maintains a direct\, visceral connection to the visual trajectories of the score. The vocal samples were processed through real-time DSP chains\, where the nuances of the performance (velocity\, pressure\, and timing) were translated into dynamic spectral shifts and spatial movement\, reflecting the Smiling Labyrinth’s intricate and unpredictable nature. \nAbout the artists\nShih-Lin Hung holds a B.A. from the National University of Tainan and an M.A. from National Yang Ming Chiao Tung University. Initially trained in Western classical composition\, he now explores electroacoustic aesthetics within the lineage of French musique concrète. His creative practice focuses on uncovering alternative sonic possibilities in daily sounds that are often ignored or taken for granted. \nJu-An Hsieh graduated from the Gerrit Rietveld Academie in Amsterdam\, the Netherlands\, and works primarily with images. In 2024\, their exhibition The Theatre explored the impact of colonial regimes on Taiwan’s ecology and the power relations between humans and nature. As their practice evolves\, embodied memories\, sensory experiences\, and dreams connected to nature have gradually become central themes in their work. \n  \nYunpeng Li: Echoes of the dial\nThis work uses an “outdated” communication-technology signal—the telephone dial tone—as its core material. Through sampling and sound processing of the DTMF tones produced during telephone dialing\, it explores the dialectical relationship between auditory memory and the disappearance of matter within the context of technological accelerationism. 
In today’s world where information transmission approaches zero latency\, how can those echoes that once carried the desire for communication construct a new aesthetic dimension amidst the abandoned ruins? \nAbout the artist\nYunpeng Li\, Ph.D.\, is an Associate Professor\, Master’s Supervisor\, and Director of the Art & Science Teaching and Research Section at the Wuhan Conservatory of Music. His main research and teaching focus is electronic music composition. His works have been selected for the International Computer Music Conference (ICMC) multiple times. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/listening-room-1-1/
LOCATION:Hamburg University of Technology\, Building A (A 0.18)\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Listening Room,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T110000
DTEND;TZID=Europe/Amsterdam:20260511T180000
DTSTAMP:20260428T185011
CREATED:20260421T093948Z
LAST-MODIFIED:20260421T095126Z
UID:10000142-1778497200-1778522400@icmc2026.ligeti-zentrum.de
SUMMARY:Installation | Miles Friday: "Breathwork"
DESCRIPTION:Breathwork is a twelve-channel sound installation in which loudspeakers become breathing bodies. Each loudspeaker is encased in an inflatable bag that swells and contracts in response to low-frequency drones\, forming a slow\, ever-shifting breath-like choreography. \nWithin this field of motion\, clouds of layered just intonation partials drift in and out of perception\, while low frequencies create a base of acoustic beating and Shepard-tone-like glissandos. By transforming the loudspeaker into a pneumatic pump\, Breathwork reimagines it as a tool for visual synthesis\, where vibrations in the air animate inflatables as kinetic sculptures—synthetic lungs whose movements create polyrhythms that can be both seen and heard. \nAll audio is generated live in SuperCollider\, running on two Bela Mini Multichannel Expanders. \nAbout the artist\nMiles Jefferson Friday is an artist who focuses on sound as his primary medium. Building new instruments\, composing music\, designing sound sculptures\, and creating immersive installations\, his practice invites us to reconsider how we hear and listen. Miles is currently an Assistant Professor of Digital Music at the University of Texas at San Antonio\, and holds a DMA and MFA from Cornell University and an MA from the Eastman School of Music. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/installation-miles-friday-breathwork-1-2/
LOCATION:Hamburg University of Technology\, Building A (Foyer)\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Installation
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T110000
DTEND;TZID=Europe/Amsterdam:20260511T180000
DTSTAMP:20260428T185011
CREATED:20260421T095644Z
LAST-MODIFIED:20260421T095644Z
UID:10000144-1778497200-1778522400@icmc2026.ligeti-zentrum.de
SUMMARY:Installation | Alessandro Anatrini: "Faulty Oracle"
DESCRIPTION:Faulty Oracle is an adaptive audiovisual installation that conjures a gloriously unreliable divinatory machine. Visitors pose questions through body language: gestures\, movements\, and postures\, which the system interprets\, misreads\, and willfully transforms. In return\, the oracle delivers cryptic animated answers\, flickering between epiphany\, nonsense\, and hallucination. Voices stretch\, fracture\, and echo over visuals that shimmer with unstable symbols\, offering responses that feel both prophetic and utterly broken.\nThe dialogue is a masterclass in miscommunication: questions are misinterpreted\, wrong ones are amplified\, and answers rarely align with intent. The oracle becomes a mirror of ambiguity\, where meaning emerges from error\, chance\, and interpretation rather than clarity.\nBy shifting interaction from language to the body\, Faulty Oracle gleefully dismantles any expectation of precision in human-machine exchange. It invites participants into a space of playful fallibility\, reframing prophecy as a dance of uncertainty and imagination. \nAbout the artist\nAlessandro Anatrini (1983) is a composer\, new media artist\, and developer with a background in musicology\, composition\, and electronic music. He completed an M.A. in multimedia composition at HfMT Hamburg and a PhD in artistic research focused on machine learning in adaptive multimedia environments. His work has been presented by Ensemble Intercontemporain\, Klangforum Wien\, and the Symphoniker Hamburg\, and at festivals including Manifeste\, HCMF\, Impuls\, and Blurred Edges. He is frequently invited to speak at conferences such as SMC\, TENOR\, and AIMC\, and collaborates with institutions such as the UdK Berlin and the Digital Stage Foundation. He has lectured on machine learning at HfMT since 2018\, and since 2024 he has been Professor of Multimedia at the Conservatorio of Piacenza (Italy). \n 
URL:http://icmc2026.ligeti-zentrum.de/event/installation-alessandro-anatrini-faulty-oracle-1/
LOCATION:Hamburg University of Technology\, Building A\, Videospace I\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Installation
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T110000
DTEND;TZID=Europe/Amsterdam:20260511T180000
DTSTAMP:20260428T185011
CREATED:20260421T100042Z
LAST-MODIFIED:20260423T171602Z
UID:10000153-1778497200-1778522400@icmc2026.ligeti-zentrum.de
SUMMARY:Installation | Dahye Seo: "Unscored"
DESCRIPTION:A camera installed on a balcony captures the live sky\, converting it into generative sound in real time. The trajectories of birds crossing the frame are translated into piano tones\, forming unpredictable melodies. The time spent watching the sky—waiting for the next sound—becomes part of the work. \nAbout the artist\nDahye Seo (b. 1985\, South Korea) is a multimedia artist based in Berlin. She explores the movement of living organisms and environmental phenomena through sound\, data\, and interactive installations\, creating immersive experiences that bridge perception and natural patterns. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/installation-dahye-seo-unscored-1/
LOCATION:Hamburg University of Technology\, Building A\, Videospace II\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Installation
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T110000
DTEND;TZID=Europe/Amsterdam:20260511T180000
DTSTAMP:20260428T185011
CREATED:20260421T191718Z
LAST-MODIFIED:20260427T105322Z
UID:10000193-1778497200-1778522400@icmc2026.ligeti-zentrum.de
SUMMARY:Installation | Bill Parod & Teresa Parod: "The Elephants of Trianon"
DESCRIPTION:The Elephants of Trianon is an augmented-reality audiovisual installation that extends a series of public murals into an interactive spatial sound environment. The original work consists of ten adjacent murals painted on garage doors in a public alley in Evanston\, Illinois\, USA. These form part of a larger international body of public work by the artist\, Teresa Parod. For the International Computer Music Conference\, the project is presented as a free-standing installation at TU Hamburg-Harburg using large construction-fence banners that approach the full size of the garage-door murals. \nUsing a custom mobile app\, visitors’ devices recognize each mural and anchor a corresponding three-dimensional audiovisual scene in space. As visitors move through the installation and activate additional murals\, their scenes accumulate and blend\, creating a continuously evolving environment rather than a sequence of isolated works. The installation therefore functions as a spatial composition shaped by listener movement\, attention\, and duration of engagement. \nThe soundscape combines field recordings made in Bali\, New Orleans\, and Chicago with instrumental layers and voices in ten languages. Animated three-dimensional forms—birds\, bats\, dogs\, elephants\, rabbits\, and celestial figures—appear among the murals\, along with subtle video textures and custom shaders that bring painted elements into motion. Some virtual elements are not confined to a single mural but move throughout the installation space\, responding to the physical layout and dimensions of the exhibition environment. \nThe project suggests a scalable model for mobile\, spatially responsive sound installations in galleries and public spaces. 
The software framework and mobile application used in The Elephants of Trianon have been developed through prior public installations and gallery presentations and are designed to function across a range of exhibition formats\, from outdoor murals to indoor projection and free-standing display structures. The ICMC installation demonstrates how augmented reality can be used not only as a visual medium\, but as a platform for spatial audio composition and listener-driven musical form. \nAbout the artists\nBill Parod (b. 1954\, Chicago\, USA) is a composer\, improviser (violin)\, and software developer who works on interactive spatial music\, audio poetry\, image-reactive augmented reality\, and living-music mobile apps. His work has appeared in Chicago at Elastic Arts\, Experimental Sound Studio\, and the Jay Pritzker Pavilion; at Burning Man\, Nevada\, USA; at New York University\, NYC; and at Ircam in Paris\, France. \nTeresa Parod (b. 1957\, Alton\, IL\, USA) paints vibrant\, luminous oil paintings and murals\, celebrating life through dichotomies such as light and shadow\, warm and cool\, and complementary colors. Her landscapes invoke mythological destinations\, inviting the viewer to journey there.\nShe has created over one hundred works of public art in the United States\, Cuba\, Bali\, Nepal\, and Istanbul. In Cuba\, she was honored to work with mosaicist José Fuster\, whose work inspired her creation of art in unexpected and underused spaces.\nShe lives in Evanston\, IL\, with her husband\, Bill Parod. Together they have collaborated on several exhibitions and performances\, as well as multichannel visual and musical art.\nShe also teaches art history at Oakton College\, rides an annual century\, and studies and performs classical Indonesian dance. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/installation-bill-parod-teresa-parod-the-elephants-of-trianon-1/
LOCATION:Hamburg University of Technology\, Outdoor Area II\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Installation
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T110000
DTEND;TZID=Europe/Amsterdam:20260511T180000
DTSTAMP:20260428T185011
CREATED:20260421T192918Z
LAST-MODIFIED:20260422T141802Z
UID:10000197-1778497200-1778522400@icmc2026.ligeti-zentrum.de
SUMMARY:Installation | Finlay Graham: an egg with fouled neurons
DESCRIPTION:an egg with fouled neurons is a variable-duration installative performance on a synthesizer fully coded by the composer in MaxMSP\, which uses a post-tonal framework to transform large harmonic sets\, preserving the fidelity of harmonic intervals while transforming harmonic identity and allowing movement through a complex harmonic pattern. Over a performance of one to four hours\, this unbound harmony is explored within a spatialized environment. “The egg lacks organs and cellular structure\, but it could be alive. When vibrated\, it would notice each simplified frequency. If you apply equal pressure to all sides\, it doesn’t break\, but moments of concentration are dangerous. If permeated and submerged\, it’s unclear where the egg begins\, and what is inside.”\nThis work is structurally built around the frequency 440 Hz (A4)\, but temporally it moves through 8 sections: \n1. the breath\n2. Subconscious initiation\n3. Embodiment/mirroring\n4. silence\n5. onset\n6. liberation\n7. oneness and death\n8. contraction \nAbout the artist\nFinlay Graham (b. 2005\, he/him) is an American composer and educator based in Asheville\, North Carolina and Oberlin\, Ohio\, whose work is inspired by nature\, spirituality\, emotion\, and intimacy.\nGraham is currently enrolled at Oberlin College and Conservatory\, studying Music Composition and Neuroscience with minors in Music and Cognition and TIMARA (Technology in Music and Related Arts). He currently studies composition under Jesse Jones. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/installation-finlay-graham-an-egg-with-fouled-neurons/
LOCATION:Hamburg University of Technology\, Building N (Foyer)\, Eißendorfer Straße 40\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Installation
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T110000
DTEND;TZID=Europe/Amsterdam:20260511T180000
DTSTAMP:20260428T185011
CREATED:20260423T170857Z
LAST-MODIFIED:20260423T171341Z
UID:10000230-1778497200-1778522400@icmc2026.ligeti-zentrum.de
SUMMARY:Installation | Zhao Jiajing: "Omniopticon"
DESCRIPTION:Omniopticon invites visitors to step inside a constantly shifting field of sound. Scattered throughout the space are wireless loudspeakers. They are not fixed in place: you are free to pick them up\, move them\, tilt them\, rotate them\, or leave them somewhere new. Every gesture reshapes the acoustic environment\, allowing the installation to unfold differently with each visitor’s presence.\nThe work takes its name from the idea of the omniopticon: a condition in which everyone can observe and be observed\, a feature of today’s social-media-saturated world. Rather than presenting this as a system of surveillance\, Omniopticon turns it into a shared\, exploratory environment. Visibility becomes audibility\, and moving a loudspeaker becomes a way of revealing or obscuring sonic perspectives.\nAs the speakers change position\, the sound re-forms across the room. What you hear is shaped not only by the architecture\, but also by the placement of the speakers and by the choices of those around you. No two moments are alike. The installation becomes a collective instrument whose behaviour reflects the actions and curiosity of its participants.\nYou are invited to explore. Try moving a single speaker or coordinating with others. Follow a sound across the room\, or gather several speakers into a cluster. Listen to how the sonic space expands\, fragments or gathers as you intervene. Notice how your movements influence – and are influenced by – other people in the space.\nIn Omniopticon\, space is not a backdrop but the material of the artwork itself: a social\, physical and acoustic terrain that shifts with every action. It is both an immersive environment and a gentle social experiment\, prompting reflection on how we navigate shared spaces\, how we shape them\, and how they in turn shape us. Your participation completes the piece. 
\nAbout the artist\nZhao Jiajing (赵嘉旌; family name–given name) is a London-based electroacoustic composer and sound artist from Beijing. \nZhao’s practice spans acousmatic music\, sound installation\, performance\, and new media. Since 2019\, he has focused on spatial sound\, creating multichannel compositions and installations. His work explores questions of time\, technological mediation\, and our evolving relationship with both the digital and natural worlds. He frequently collaborates across disciplines\, working with practitioners and researchers in visual art\, theatre\, science\, and technology. \nZhao’s work has been featured at major international venues and festivals such as Ars Electronica (AT)\, IRCAM (FR)\, ZKM Karlsruhe (DE)\, ICMC (Int’l)\, SICMF (KR)\, GMEM (FR) and ORF musikprotokoll (AT). He has received recognitions and commissions from the ISCM British Section\, Musica Nova\, Aesthetica × Audible\, the Shanghai International Arts Festival\, The Engine Room\, Musicacoustica\, Royal College of Art × LG Display\, and IEM Graz\, among others. \nZhao holds an MA in Information Experience Design from the Royal College of Art and is currently pursuing a PhD at the University of the Arts London (CRiSAP)\, supervised by Adam Stanović. He is also a mentor and visiting lecturer for the MA in Designing Audio Experiences at University College London. \n  \n***\nOmniopticon uses the Snappi speaker system\, a low-cost wireless multichannel system developed by Marcus Weseloh and Jacob Sello at the ligeti center’s Innolab.
URL:http://icmc2026.ligeti-zentrum.de/event/installation-zhao-jiajing-omniopticon-1/
LOCATION:Hamburg University of Technology\, Building A\, Videospace III (A 3.35.1)\, Am Schwarzenbergcampus 1\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Installation
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T120000
DTEND;TZID=Europe/Amsterdam:20260511T200000
DTSTAMP:20260428T185011
CREATED:20260421T092055Z
LAST-MODIFIED:20260421T092055Z
UID:10000133-1778500800-1778529600@icmc2026.ligeti-zentrum.de
SUMMARY:Installation | Windisch\, Peng & v. Coler: "MESH"
DESCRIPTION:MËSH is an immersive\, networked music and media system that blends interactive installation with live performance. Developed since 2019\, MËSH uses a distributed array of interactive nodes to create a responsive audiovisual environment. Depending on the venue\, installations range from 4 to 16 interconnected nodes communicating over a wireless local network. \nEach node processes real-time movement captured by its camera using custom computer-vision software. These motion signals drive local sound generation in SuperCollider and trigger sample playback drawn from a curated library of field recordings and media fragments. Sounds are spatialized across the network\, forming a shared\, evolving soundscape shaped directly by audience interaction. \nMËSH also functions as a performance instrument: synchronized graphical scores are displayed across all nodes\, enabling musicians to perform within the same reactive ecosystem. This latest iteration continues MËSH’s exploration of distributed creativity and collaborative sensing. \nAbout the artists\nHenry Windisch is a graduate student at Georgia Tech. His work focuses on computer music systems\, audio software development\, and collaborative tools for performance and education. He contributes to the design and implementation of networked performance platforms and supports projects involving SuperCollider\, audio networking\, computer vision\, and interactive media. Previously\, he studied electrical engineering at Washington University in St. Louis.\nTristan Peng is a PhD student at Georgia Tech exploring interaction design\, spatial audio\, and sonification; he previously studied at CCRMA at Stanford University. His work aims to create accessible\, artful\, and interactive ways for people to experience sound. 
His current projects investigate how data can become a medium for participation and how immersive audio spaces can evoke emotion and understanding in ways that traditional visualizations cannot.\nHenrik von Coler is a musician and researcher. In 2024 he founded the Lab for Interaction and Immersion at Georgia Tech. Before that he was the director of the Electronic Music Studio at TU Berlin. Henrik’s research explores interface design\, algorithms for sound generation and experimental concepts for composition and performance. In 2017 he founded the Electronic Orchestra Charlottenburg to explore music interaction on immersive loudspeaker systems. He has since worked on ways to enhance how musicians and audiences experience spatial music and sound art. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/installation-windisch-peng-v-coler-mesh/
LOCATION:Stellwerk Hamburg\, Hannoversche Straße 85\, Hamburg\, 21079\, Germany
CATEGORIES:11-05,Installation
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T120000
DTEND;TZID=Europe/Amsterdam:20260511T200000
DTSTAMP:20260428T185011
CREATED:20260421T093204Z
LAST-MODIFIED:20260422T144350Z
UID:10000135-1778500800-1778529600@icmc2026.ligeti-zentrum.de
SUMMARY:Installation | Adriano C. Monteiro & Rafaela B. Pires: "DE/RE:GENERATION"
DESCRIPTION:De/Re:Generation stems from a speculative question: would cicadas sense acoustic information during the up to 17 years they live underground\, before emerging from the soil for a brief adult phase marked by intense acoustic display? From this perspective\, the installation approaches sound not only as an auditory phenomenon\, but as something sensed through the body\, making vibration and tactile perception central to the experience.\nAt the core of the work are rounded\, shell-like sculptures molded from biodegradable cassava-starch bioplastics. These forms visually echo cicada nymphs and exuviae: fragile\, hollow exoskeletons that signal absence\, transformation\, and continuation. Like the remnants left after metamorphosis that nourish other species\, the installation’s materials participate in an ongoing process of regeneration: they deform over time\, respond to humidity and dryness\, and become alternately more rigid or more flexible\, like a living skin in dialogue with the environment. Integrated as touch interfaces\, the bioplastic sculptures function as tactile sensing surfaces that mediate the interaction with the sound environment formed by vibrating surfaces and low-frequency sound fields that allude to the cicada’s aboveground and underground sonic worlds\, blurring boundaries between tactile and auditory modes of perception\, organic material and inorganic technological systems. \nAbout the artists\nAdriano Monteiro is a music composer and researcher. His work focuses on the convergence of art\, science and technology for creative processes\, performance and analysis of music. He is the author of electroacoustic and intermedia works in different media and formats\, such as acousmatic\, live electronics\, audiovisual performances and installations\, and network and telematic music\, and is also the author or co-author of several articles concerning creative processes in music and musical analysis. 
Adriano Monteiro is an associate professor of Music Composition at the School of Music and Scenic Arts of the Federal University of Goiás (EMAC/UFG). He studied music composition at the University of Campinas (UNICAMP) and holds a PhD in music from the same institution. \nRafaela Blanch Pires is a designer and professor in the Scenic Arts department at the Federal University of Goiás (Brazil). Her background is in fashion design\, with an MA in “Fashion and Textiles” and a PhD in “Design and Architecture” (University of São Paulo). Between 2015 and 2016 she was a visiting doctoral student at the “Wearable Senses Lab” at the Eindhoven University of Technology (the Netherlands). She experiments in the areas of bio-materials\, digital fabrication\, special-effects make-up\, costume design and electronics. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/installation-adriano-c-monteiro-rafaela-b-pires-de-regeneration-1/
LOCATION:Stellwerk Hamburg (Lounge)\, Hannoversche Str. 85\, Hamburg\, 21079\, Germany
CATEGORIES:11-05,Installation
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T130000
DTEND;TZID=Europe/Amsterdam:20260511T153000
DTSTAMP:20260428T185011
CREATED:20260415T101559Z
LAST-MODIFIED:20260420T115033Z
UID:10000112-1778504400-1778513400@icmc2026.ligeti-zentrum.de
SUMMARY:[Off-ICMC] Workshop: Code the Beat – Learn to Code through Music
DESCRIPTION:Coding through Music | Photo: ligeti center\n  \nIn this workshop\, you will compose your own music using the program Sonic Pi. Are you passionate about music? Then you’ll discover a new form of expression while learning to code. Do you already know a little bit about coding? Then you’ll get to know music-making from a different perspective. By the end of the workshop\, you’ll have composed a song and learned something new about coding.  \nFor children and teenagers aged 10 to 14.\nRegistration required here   \n  \nThe Off-ICMC\nMusic is what brings us together\, even when everything else pulls us apart.\nMusic is everywhere—it is part of our everyday lives. And yet\, we’re hearing it performed live on analog instruments less and less. Instead\, it often reaches us through speakers or headphones\, as files\, from the cloud. What does music mean to you? What does it sound like today? Where does it begin—and where does it end?\nThe ligeti center invites you to listen more closely and discover new sounds—to explore\, experiment\, and play. This year\, ICMC HAMBURG 2026 revives an old tradition: the Off-ICMC\, a free accompanying festival curated for the general public and anyone curious about computer music. \nAll Off-ICMC events are free of charge. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/off-icmc-workshop-code-the-beat-learn-to-code-through-music/
LOCATION:Hamburg University of Technology\, Building E (E 0.02)\, Am Schwarzenberg-Campus 3\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Off-ICMC
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T133000
DTEND;TZID=Europe/Amsterdam:20260511T150000
DTSTAMP:20260428T185011
CREATED:20260421T084731Z
LAST-MODIFIED:20260427T133301Z
UID:10000077-1778506200-1778511600@icmc2026.ligeti-zentrum.de
SUMMARY:Lunch Concert 1A
DESCRIPTION:After the Opening Concert of ICMC HAMBURG 2026\, the regular music program begins today. This first Lunch Concert offers an insight into the current international computer music scene. What makes this event special is the personal presence of the artists: the composers are either on stage themselves or have brought the musicians they wrote for with them to Hamburg.\nIt is a program of short distances between idea and sound. The works demonstrate how diverse collaboration between humans and technology can be today—from the classical solo clarinet to interactive formats. \n  \nProgram Overview\nTyche\nSever Tipei \nAIKYAM\nClaudia Robles Angel \nHOTPO\nMichael Edwards \nTessellae\nRodrigo Cadiz and Thierry Miroglio \nThe Center of the Universe\nSunhuimei Xia \n  \nAbout the pieces & artists\nSever Tipei: Tyche \nTyche for Bb clarinet and fixed media is a composition generated with original software for computer-assisted (algorithmic) composition and sound design developed by the composer and his collaborators.\nDivided into four main sections of 2-3-1-2 minutes\, the work utilizes stochastic distributions\, Markov chains\, sieves and Just Intonation\, as well as detailed control of spectra\, FM transients\, spatialization and reverberation. A basic framework of precise proportions and deterministic procedures is complemented by random details governed by Tyche\, the goddess of fortune\, chance\, providence and fate. \nAbout the artist\nA composer and a pianist\, Sever Tipei was born in Bucharest\, Romania\, and immigrated to the United States in 1972. He holds degrees in composition from the University of Michigan (DMA) and piano performance from the Bucharest Conservatory (Diploma). Tipei taught at the Chicago Musical College of Roosevelt University and\, between 1978 and 2021\, at the University of Illinois at Urbana-Champaign School of Music. 
After retirement\, Tipei has continued to teach in the School of Information Sciences\, where he also directs the “James W. Beauchamp Computer Music Project”. He is also a National Center for Supercomputing Applications Faculty Affiliate. Between 1993 and 2003 Tipei was a Visiting Scientist at Argonne National Laboratory\, where he worked on the sonification of complex scientific data.\nMost of his compositions were produced with software he designed: MP1 – a computer-assisted composition program first used in 1973\, DIASS – for sound synthesis\, and M4CAVE – software for the visualization of music in an immersive virtual environment. More recently\, Tipei and his collaborators have developed DISSCO\, software that unifies computer-assisted (algorithmic) composition and (additive) sound synthesis into a seamless process. His compositions have been performed in the US\, Australia\, Brazil\, France\, Germany\, Italy\, Portugal\, Romania\, Spain\, the United Kingdom and Taiwan. \n  \nClaudia Robles Angel: AIKYAM \nAIKYAM is a real-time surround-sound work for 1 performer and 5 to 6 participants (audience)\, inspired by Kuramoto’s mathematical model of spontaneous order\, or synchronisation\, in nature\, e.g. fireflies\, heart rates or humans clapping their hands together. The term AIKYAM is based on the Sanskrit word ऐक्यम\, meaning unity or harmony. \nAbout the artist\nBorn in Bogotá (Colombia) and living in Cologne (Germany)\, Claudia Robles Angel is a composer\, sound and new media artist whose work covers different aspects of visual and sound art\, extending from acousmatic and audio-visual compositions to interactive performances/installations using biomedical signals and AI (Artificial Intelligence).\nShe has been artist-in-residence at several outstanding institutions around the globe. In 2022 she received an honorary mention in the Giga-Hertz Award at ZKM.\nHer work has been performed and exhibited worldwide\, e.g. 
at ZKM; ISEA; KIBLA Centre Maribor; CAMP Festival – 55th Venice Biennale Salon Suisse; ICMC; New York City Electroacoustic Music Festival; NIME; STEIM; Harvestworks Digital Arts Center NYC; Heroines of Sound Berlin; Audio Art Festival Cracow; MADATAC Madrid; Athens Digital Art Festival ADAF; CMMAS Morelia; Beast FEaST Birmingham; ICST ZHdK Zurich; RE:SOUND Aalborg; Electric Spring Festival Huddersfield; AI Biennal Essen; at the Centre for International Light Art Unna; and more recently at the Acht Brücken Festival Cologne and at the Philharmonie Essen. \nwww.claudearobles.de \n  \nMichael Edwards: HOTPO \nHinting at something a little more coarse\, the title HOTPO is in fact a completely innocent reference to the Collatz Conjecture. This mathematical proposition\, also known by other names\, refers to a succession of numbers called the hailstone sequence (or wondrous numbers)\, because their values usually ascend and descend like hailstones in a cloud.\nThough the mathematical proof of the conjecture is complex\, the proposition is very simple: take any positive whole number; if it is even\, divide it by two; if it is odd\, multiply it by three and add one (hence the acronym Half Or Three Plus One: HOTPO); repeat the process with the result and you will find that no matter which number begins the process\, you will always\, given enough iterations\, reach one.\nThe algorithm is easy to programme and experiment with\, and it produces rather nice images when given different starting numbers and plotted over various iterations. I used the algorithm in this piece to generate section lengths and repeated structures from nine basic rhythm sequences\, hence my sequence was 9 28 14 7 22 11 34 17 52 26 13 40 20 10 5 16 8 4 2 1. The piece alternates sections opposing mixed materials (odd section numbers) with obsessively repeated material (even). The numbers are also used for the generation of the sound files triggered during the performance. 
Despite the rather abstract nature of the generative procedure\, the results of the algorithms were developed intuitively\, and the piece as a whole arises out of and proceeds through a maelstrom of events fitting to the imagery of a hailstorm.\nHOTPO was commissioned by Henrique Portovedo for the World Saxophone Congress 2018 in Zagreb. That version included an ensemble. In 2020 I reworked the sound files to include MIDI data from the ensemble and made a solo + computer version. This was revised in 2024. \nAbout the artist\nI’m a composer\, improvisor\, software developer\, and since 2017 Professor of Electronic Composition at ICEM\, Folkwang University of the Arts\, Essen\, Germany.\nI’m the programmer of the slippery chicken algorithmic composition package. My compositional interests lie mainly in the development of structures for hybrid electro-instrumental pieces through the integration of algorithmically produced scored materials with similarly generated computer-processed sound. I also improvise on laptop\, saxophones\, and MIDI wind controller\, performing for instance at the 2008 Montreux Jazz Festival.\nI studied composition at Bristol University with Adrian Beaumont (BA\, MMus) and privately with Gwyn Pritchard. In 1991 I moved to the US for further studies in computer music with John Chowning at CCRMA\, Stanford University (MA\, Doctor of Musical Arts). Whilst studying there I also worked at IRCAM\, Paris\, with a residence grant at the Cité des Arts.\nDuring 1996–7 I was a consultant software engineer in Silicon Valley\, where I developed a document recognition system used in several US hospitals. In 1997 I was appointed Lecturer in Music Theory at Stanford but later that year moved to Salzburg\, Austria. I was Guest Professor at the Universität Mozarteum until I left to teach at the University of Edinburgh in 2002. 
\n  \nRodrigo Cadiz: Tessellae \nTessellae for percussion and live electronics unfolds as a mosaic of small rhythmic tiles laid in time by a single performer. The percussion writing is built on Euclidean rhythmic principles\, patterns that distribute events as evenly as possible\, expanded through asymmetric tuplets (notably groups of three and five)\, repetitions\, and carefully placed silences that create a strong sense of anticipation from phrase to phrase. Only one or two instrumental lines sound at a time\, allowing the listener to perceive each gesture as a discrete tessera within a larger rhythmic surface. The live electronics\, built on RAVE\, a real-time variational autoencoder developed at IRCAM and trained on a corpus of percussion sounds\, listen to the performer and respond by reshaping timbre and resonance in the moment\, extending and refracting the acoustic material without fixing it in advance. The result is a dialogue between strict rhythmic architecture and fluid sonic transformation\, where expectation\, delay\, and renewal are central expressive forces. Tessellae was composed for Thierry Miroglio. \nAbout the artists\nRodrigo F. Cádiz is a composer\, researcher and engineer. He studied composition and electrical engineering at the Pontificia Universidad Católica de Chile (UC) in Santiago and obtained his Ph.D. in Music Technology from Northwestern University. His compositions\, comprising approximately 70 works\, have been presented at venues and festivals around the world. His catalogue includes works for solo instruments\, chamber music\, symphonic and robot orchestras\, visual music\, computers\, and new interfaces for musical expression. He has received several composition prizes and artistic grants both in Chile and the US. He has authored around 70 scientific publications in peer-reviewed journals and international conferences. 
His areas of expertise include sonification\, sound synthesis\, digital audio processing\, computer music\, composition\, new interfaces for musical expression and the musical applications of complex systems. In 2018\, Rodrigo was a composer in residence with the Stanford Laptop Orchestra (SLOrk) at the Center for Computer Research in Music and Acoustics (CCRMA)\, and a Tinker Visiting Professor at Stanford University. In 2019\, he received the prize of Excellence in Artistic Creation from UC\, given for outstanding achievements in the arts. In 2024\, he was a visiting researcher at the Orpheus Instituut in Belgium. He is currently a full professor at the Music Institute and the Electrical Engineering Department of UC. \nFor many years\, Thierry Miroglio has pursued a brilliant solo career\, invited to give recitals and solo concerts in more than forty countries at numerous prestigious venues and festivals\, including Salzburg\, the Philharmonie Berlin\, New York\, the Wiener Konzerthaus\, Boston\, Besançon\, San Francisco\, Munich\, Schleswig-Holstein\, Madrid\, Rome\, Tokyo\, Milan\, Zagreb\, Nice\, Cologne\, Paris\, Hamburg\, Athens\, São Paulo\, Lisbon\, the Monte Carlo Printemps des Arts\, Hong Kong\, the Teatro Colón in Buenos Aires\, Geneva\, the Brugge Concertgebouw\, the Bucharest Athenaeum\, Beijing\, Amsterdam\, the Linz Brucknerhaus\, Rio\, Darmstadt\, Helsinki\, Johannesburg\, Mexico\, Seoul\, Shanghai\, Moscow\, and the Venice Biennale … \n  \nSunhuimei Xia: The Center of the Universe\nThe Center of the Universe\, an algorithmic music work integrated with interactive technology\, draws inspiration from the artist’s immersive impressions of New York City gleaned through multiple on-site visits. Standing atop the Empire State Building\, the artist perceived the metropolis as a dynamic global nexus where people of diverse cultural and ethnic backgrounds converge\, weaving a vibrant\, multifaceted urban tapestry that resonates with the energy of an interconnected world. 
Taking the phrase “The Center of the Universe” as its foundational sonic material\, the work delivers innovation through experimental multilingual vocal manipulation—deploying the core line in English\, Spanish\, French\, German\, Italian\, Russian\, Chinese\, Japanese\, Korean\, and Thai—with all vocal textures sourced from sampled macOS AI voices\, blending computational sound synthesis with linguistic diversity to push the conventional boundaries of vocal-based algorithmic composition. It achieves nuanced translation by converting the artist’s subjective perceptual experience of the city into an audible\, interactive sonic landscape\, while translating the abstract idea of cross-cultural convergence into tangible musical logic via the layered interplay of multilingual vocal samples. Further embodying participation\, the piece adopts wireless Nintendo Wiimote Controllers as its interactive performance interface\, enabling the performer to stand at the “center” of the stage and manipulate the musical structure in real time; this design redefines the dynamic between creator\, performer\, and audience\, turning the performance into a collaborative process where physical movements directly shape sonic evolution. \nAbout the artist\nSunhuimei Xia is Associate Professor of Art and Technology in the Composition Department of the Wuhan Conservatory of Music. She holds a Master’s degree from Johns Hopkins University and a Doctorate from the University of Oregon (U.S.). 
She was mentored by renowned composers Jian Feng\, Jian Liu\, Geoffrey Wright\, and Jeffrey Stolet.\nThe first in central and western China to earn a DMA in data-driven musical instrument composition and performance\, she focuses on computer music creation and music-technology integration\, with core interests in interactive data-driven instruments\, algorithmic composition\, and data sonification.\nHonored as a Music Entrepreneurship and Innovation Talent by the Ministry of Culture and an Outstanding Young and Middle-Aged Literary and Art Talent by the Hubei Federation of Literary and Art Circles\, she has won the Hubei Golden Bianzhong Music Award\, with over 10 pieces showcased at top global events including ICMC\, ISMIR\, NIME\, SMC\, SEAMUS\, NYCEMF\, EMM\, IRCAM\, WOCMAT and Musicacoustica-Beijing.\nShe released China’s first DVD album of data-driven instrument works\, published by Shanghai Music Publishing House and Shanghai Literature & Art Audio-Video Electronic Publishing House. She has guided students to more than 20 domestic and international awards\, leads provincial projects\, and participates in the Ministry of Education’s Humanities and Social Sciences Youth Fund Project\, driving music-technology innovation.
URL:http://icmc2026.ligeti-zentrum.de/event/concert-1a/
LOCATION:Hamburg University of Technology\, Building I\, Audimax 2\, Denickestraße 22\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Concert,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T153000
DTEND;TZID=Europe/Amsterdam:20260511T160000
DTSTAMP:20260428T185011
CREATED:20260421T083405Z
LAST-MODIFIED:20260421T083405Z
UID:10000146-1778513400-1778515200@icmc2026.ligeti-zentrum.de
SUMMARY:Introduction & Welcome to ICMC HAMBURG 2026
DESCRIPTION:ICMC HAMBURG 2026 welcomes this year’s conference community to Hamburg. On this first full conference day\, the team shares a few words about the week’s program before Robert Henke gives his keynote about his life as a toolmaking artist. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/introduction-welcome-icmc-hamburg-2026/
LOCATION:Hamburg University of Technology\, Building H\, Audimax 1\, Am Schwarzenberg-Campus 5\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,General
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T160000
DTEND;TZID=Europe/Amsterdam:20260511T170000
DTSTAMP:20260428T185011
CREATED:20260421T082545Z
LAST-MODIFIED:20260422T143423Z
UID:10000078-1778515200-1778518800@icmc2026.ligeti-zentrum.de
SUMMARY:Keynote | Robert Henke: "My Life as a Toolmaking Artist: A Personal Reflection on the Challenges and Rewards of Building My Own Instruments"
DESCRIPTION:I had the privilege of witnessing—and participating in—the historic shift of computer-generated music from an academic pursuit to something accessible in a bedroom studio. I embraced this opportunity wholeheartedly\, using environments like IRCAM’s Max to explore new sonic and structural territories. This allowed me to move beyond the constraints of physical instruments I could afford\, the limitations of my own hands\, and the rigid mental models of established MIDI sequencing software. \nDriven by a desire to achieve unique and personal results with limited computing power and knowledge\, I came to value the creative freedom found in self-imposed limitations. This experience led to a deep appreciation for simple yet powerful concepts\, algorithms\, and interfaces. \nSince the beginning\, my music emerged from an iterative process: building instruments\, being surprised and inspired by the results\, and then revising the instruments in response. The insights I gained not only informed a successful commercial product but\, more importantly\, shaped my identity as an artist and my approach to computer-based creation. \nIn my talk\, I will examine selected works of mine from a critical toolmaker’s perspective: did I reinvent the wheel again\, or did I achieve an artistic outcome which justifies the effort? \n  \nRobert Henke\nRobert Henke is an artistic toolmaker and a toolmaking artist\, exploring the creative potential of technology. His practice spans musical compositions\, concerts\, large-scale audiovisual installations\, and computer graphics. His work frequently involves inventing custom algorithms and machines\, blending rigid structure with controlled randomness. His music channels the raw\, repetitive energy of techno culture\, as well as the intricate details and textures of abstract contemporary works. 
His visual art builds on the legacies of Minimal Art and early computer graphics pioneers.\nSince 1995\, he has recorded and performed as Monolake\, initially a duo with Gerhard Behles and\, since 1999\, a solo project. His artistic collaborations include works with Marko Nikodijevic\, Tarik Barri\, and Christopher Bauder\, among others.\nHenke is also a co-creator of Ableton Live\, software that revolutionised music production and electronic performance. He lectures and writes on sound and creative computing\, and has taught at institutions such as the Berlin University of the Arts\, Stanford’s Center for Computer Research in Music and Acoustics (CCRMA) and IRCAM in Paris.\nHis installations\, performances\, and concerts have been presented at leading venues worldwide\, including Tate Modern\, Centre Pompidou\, PS1\, MUDAM\, MAK\, Palazzo Grassi\, and countless music festivals. \nMore about Robert Henke: www.roberthenke.com \n 
URL:http://icmc2026.ligeti-zentrum.de/event/keynote-robert-henke/
LOCATION:Hamburg University of Technology\, Building H\, Audimax 1\, Am Schwarzenberg-Campus 5\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Keynote
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T160000
DTEND;TZID=Europe/Amsterdam:20260511T180000
DTSTAMP:20260428T185011
CREATED:20260415T101718Z
LAST-MODIFIED:20260421T200931Z
UID:10000113-1778515200-1778522400@icmc2026.ligeti-zentrum.de
SUMMARY:[Off-ICMC] Stilt Performance | Oakleaf Streetshow: Insect-o-lectic
DESCRIPTION:Oakleaf | Photo: Piet Pabst\n  \nA combination of stilt dancing\, body percussion\, and cutting-edge sound technology. The street performance group Oakleaf Streetshow is headed to Harburg with an interactive walking act. Bizarre\, colorful creatures on stilts buzz through the crowd—creatures that not only look like beetles but sound like them\, too.  \nNo registration required \n  \nThe Off-ICMC\nMusic is what brings us together\, even when everything else pulls us apart.\nMusic everywhere—it is part of our everyday lives. And yet\, we’re hearing it performed live on analog instruments less and less. Instead\, it often reaches us through speakers or headphones\, as files\, from the cloud. What does music mean to you? What does it sound like today? Where does it begin—and where does it end?\nThe ligeti center invites you to listen more closely and discover new sounds—to explore\, experiment\, and play. This year\, ICMC HAMBURG 2026 revives an old tradition: the Off-ICMC\, a free and accompanying festival curated for the general public and anyone curious about computer music. \nAll Off-ICMC events are free of charge.  \n\n  \n \n 
URL:http://icmc2026.ligeti-zentrum.de/event/off-icmc-stilt-performance-insect-o-lectic/
LOCATION:Harburg Info\, Hölertwiete 6\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Off-ICMC
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T170000
DTEND;TZID=Europe/Amsterdam:20260511T190000
DTSTAMP:20260428T185011
CREATED:20260421T091309Z
LAST-MODIFIED:20260423T185456Z
UID:10000148-1778518800-1778526000@icmc2026.ligeti-zentrum.de
SUMMARY:Workshop | Serge Lemouton\, Jacques Warnier\, Malena Fouillou\, and Laurent Pottier: Practical Documentation and Collaborative Preservation using Antony
DESCRIPTION:The goal of this hands-on workshop is to show\, for the first time in an international context\, the Antony system\, now in its final state and fully functional.\nThe Antony platform provides a structured system for archiving\, documenting\, and accessing the materials of mixed-music works to ensure long-term preservation and reuse. The Antony project addresses the difficulty of preserving artistic works that rely on evolving and often incompatible technologies. It highlights how the survival of these works depends on a small group of experts capable of updating and maintaining their digital components.\nAt the end of this workshop\, the participants will be able to use the database to document\, distribute and preserve their own creations. \n  \nRequirements\nThis workshop primarily addresses composers\, computer music designers and performers\, but it can also be of interest to media artists\, musicologists\, documentation specialists and music publishers.\nThe participants should bring the media related to an existing artistic project of their own that they wish to curate and preserve. \n  \nWorkshop registration\nPlease register via Pretix in order to participate in the workshop. There are no additional costs.  \n  \n\nAbout the workshop facilitators\nSerge Lemouton \nComputer Music Designer – Institut de Recherche et Coordination Acoustique/Musique – Centre Georges Pompidou (IRCAM-CGP) \nSince 1992\, Serge Lemouton has worked as a computer music designer at IRCAM\, collaborating with researchers to develop computer tools and taking part in the production and public performances of numerous composers’ musical projects. He is currently working on score following systems\, analysis of instrumental gesture and constraint programming for computer assisted composition. His current research focuses on the transmission and preservation of the computer music repertoire. 
\n  \nJacques Warnier \nResearch Engineer\, Ministry of Culture – Computer Music Realizer (RIM)\, Conservatoire National Supérieur de Musique et de Danse de Paris (CNSMDP) \nSince 2007\, Jacques Warnier has supported the composition and new technologies class at CNSMDP\, producing concerts and performing live electronics for mixed repertoire works. After earning the Saint-Etienne Master’s degree in Computer Music Design in 2015\, he joined the Ministry of Culture as a research engineer in 2016.\nHis role combines musicianship and engineering to create the artistic and technical conditions required for performing 20th- and 21st-century music involving audio-digital technologies. His research focuses on making this repertoire accessible to students: curating works by instrument\, acquiring scores and electronic parts\, cataloging them in the Hector Berlioz media library\, and preserving or reconstructing electronic components.\nHe has been a member of the AFIM working group on “Collaborative Archiving and Creative Preservation” (since 2018)\, now “Antony\,” and has participated in the Humanum consortium for digital musicology (Musica2) since 2022. \n  \nMalena Fouillou \nAn acoustic engineer and computer music producer\, Malena has had a wide-ranging career. After completing her higher education studies in acoustics\, she joined IRCAM in 2022 and graduated with a master’s degree in ATIAM (Acoustics\, Signal Processing\, Computer Science for Music). It was only natural that she joined the Next ensemble of the Paris Conservatory\, in partnership with the Ensemble Intercontemporain. This training allowed her to study with distinguished RIM professors such as Arshia Cont\, Augustin Müller\, and Andrew Gerzso\, and to perform works by Marco Stroppa\, Pierre Boulez\, Martin Matalon\, and others. Currently pursuing her PhD at Paris 8\, she researches qualitative and quantitative descriptions of the spatiality of sound. 
She is part of a working group composed of Serge Lemouton (IRCAM)\, Jacques Warnier (CNSMDP)\, and Laurent Pottier (ECLLA-UJM) on the Antony project\, a collaborative platform for the preservation and sharing of musical heritage using digital technologies. \n  \nLaurent Pottier \nProfessor of Musicology & Computer Music at Jean Monnet University (Saint-Etienne\, France)\, ECLLA laboratory \nLaurent Pottier is a professor of Musicology & Computer Music at UJM (Saint-Etienne University). He heads the RIM (Réalisateur en Informatique Musicale / Computer Music Producer) professional Master’s program and the DIGICREA (Digital Creativity – Arts & Sciences) international EMJM Master’s program. His research at the ECLLA laboratory of Saint-Etienne University involves music using electronic and digital technologies. He taught at IRCAM (1992-1996)\, then headed the research department at GMEM in Marseille (1997-2005). As a RIM\, he has worked with many composers\, in particular J.-B. Barrière\, J. Chowning\, T. De Mey\, A. Liberovicci\, C. Maïda\, A. Markeas\, F. Martin\, T. Murail\, J.-C. Risset\, F. Romitelli\, and K.T. Toeplitz. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/workshop-serge-lemouton-et-al-practical-documentation-collaborative-preservation-using-antony/
LOCATION:Hamburg University of Technology\, Building H (H 0.02)\, Am Schwarzenberg-Campus 5\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Workshop
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T180000
DTEND;TZID=Europe/Amsterdam:20260511T190000
DTSTAMP:20260428T185011
CREATED:20260415T101813Z
LAST-MODIFIED:20260417T114349Z
UID:10000114-1778522400-1778526000@icmc2026.ligeti-zentrum.de
SUMMARY:[Off-ICMC] Concert: Nenad Nikolić – Accordeon meets Techno
DESCRIPTION:Photo: Boris Las Opolski\n  \nNenad Nikolić was born in Serbia and has always been fascinated by his father and grandfather’s accordion playing. But mechanical sounds are from the past. Nenad plays without backing tracks\, performing every single tone live—from “tango to techno.” Don’t miss this chance to see him push the boundaries of his instrument with his electronic accordion.  \nNo registration required  \n  \nThe Off-ICMC\nMusic is what brings us together\, even when everything else pulls us apart.\nMusic everywhere—it is part of our everyday lives. And yet\, we’re hearing it performed live on analog instruments less and less. Instead\, it often reaches us through speakers or headphones\, as files\, from the cloud. What does music mean to you? What does it sound like today? Where does it begin—and where does it end?\nThe ligeti center invites you to listen more closely and discover new sounds—to explore\, experiment\, and play. This year\, ICMC HAMBURG 2026 revives an old tradition: the Off-ICMC\, a free and accompanying festival curated for the general public and anyone curious about computer music. \nAll Off-ICMC events are free of charge.  \n\n  \n \n 
URL:http://icmc2026.ligeti-zentrum.de/event/off-icmc-concert-nenad-nikolic-accordeon-meets-techno/
LOCATION:Harburg Info\, Hölertwiete 6\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Music,Off-ICMC
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T190000
DTEND;TZID=Europe/Amsterdam:20260511T210000
DTSTAMP:20260428T185011
CREATED:20260421T085527Z
LAST-MODIFIED:20260427T085012Z
UID:10000079-1778526000-1778533200@icmc2026.ligeti-zentrum.de
SUMMARY:Evening Concert 1B
DESCRIPTION:This evening concert marks a special collaboration between the international ICMC community and Hamburg’s music scene. At its center is Ensemble 404 from the Hamburg University of Music and Drama (HfMT). For this occasion\, a video wall will be specially installed in the Friedrich-Ebert-Halle to highlight the synergy between sound and image.\nThe program ranges from intimate solo pieces with computer support to complex ensemble compositions and large-scale video works. \n  \nProgram Overview\nFantasy for Viola and Computer\nRichard Dudas \nNeuro Translation Engine\nVincenzo Russo \nClimate II for piano and computer \nRikako Kabashima \nWind Blown Rain\nMara Helmuth\, Esther Lamneck and Alfonso Belfiore \nDelicate Anticipation\nKotoka Suzuki \nAir-Carving Bamboo\nYu Chung Tseng \n  \nAbout the pieces & artists\nRichard Dudas: Fantasy for Viola and Computer\nThis work for solo viola and real-time audio processing in Max is a composed extension of some prior improvisational works using Max. It was written in part as an exploration of Bohlen-Pierce tuning (in the electronics)\, which divides the perfect twelfth into thirteen unequal justly-tuned steps. The viola part is pitted against this\, performing in standard twelve-equal-steps-to-the-octave tuning\, juxtaposing and combining several different musical fragments\, each with its own character and mood. All sounds in the electronics are live: they are derived from the sounds of the on-stage violist. Max audio processing includes formant filtering to provide a vocal quality to the transposed and resonated viola sounds. \nAbout the artist\nRichard Dudas holds degrees in Music Composition from the Peabody Conservatory of Music of the Johns Hopkins University and from the University of California\, Berkeley. He additionally studied at the Franz Liszt Academy of Music in Budapest\, Hungary\, and the National Regional Conservatory of Nice\, France. 
In addition to composing music for acoustic instruments\, he has been actively involved with music technology since the late 1980s. As a computer musician\, he has taught courses at IRCAM\, and developed musical tools for Cycling ’74. Since 2007 he has been teaching music composition and computer music at Hanyang University in Seoul\, Korea. \n  \nVincenzo Russo: Neuro Translation Engine\nIn the future\, global societies remain marked by a multitude of languages\, dialects\, idiolects\, and diverse phonetic and cultural systems. Despite advances in AI-driven translation\, fundamental limits persist in the loss of emotional nuance\, imprecise interpretations\, and gaps between what is said and what is perceived. A team of computational linguists and neuroscientists develops an advanced artificial entity: the Neuro Translation Engine (NTE)\, capable of surpassing traditional textual or acoustic translation. The NTE does not translate words\, but the neural intentions behind language. It stimulates a specific area of the human brain\, the resonance cortex\, designed to receive universal neurosensory patterns. The result is a world where everyone can speak their native language while perfectly understanding others. Linguistic diversity is not diminished but enriched through mutual comprehension. The composition for ensemble and electronics illustrates how the NTE processes\, transforms\, and reconstructs communicative material. Through sound transformation techniques\, the acoustic material is dematerialized\, representing the machine’s “internal work”: the conversion of complex signals into a unified code. The final sound is entirely electronic\, devoid of recognizable references to the original ensemble. It forms a new language\, perceived as a pattern directly interpreted by the brain. 
\nAbout the artist\nVincenzo Russo (1995) holds a bachelor’s degree in Business Administration from the University of Naples “Parthenope.” He began his musical studies in Composition for Visual Media at the San Pietro a Majella Conservatory in Naples under the guidance of the late Maestro Lucio Lo Gatto. In July 2025\, he completed the second-level degree (Master’s degree) in Composition. Alongside his academic work\, he is active as a composer\, arranger\, and music producer\, working from his own recording studio. \n  \nRikako Kabashima: Climate II for piano and computer \nThis work was composed based on a variety of ideas inspired by climate change. In recent years\, translating insights from the natural world into my own compositions has become an important experiment in my creative practice.\nIn particular\, this piece draws inspiration from the rapid climate fluctuations caused by global warming\, a pressing issue worldwide. Each measure in the work is specified in seconds rather than traditional beats\, and there is no fixed meter. Within each measure\, rhythms are performed improvisationally according to the given duration.\nThis approach allows for different rhythms and nuances to emerge in every performance\, reflecting the ever-changing nature of the climate itself. \nAbout the artist\nRikako Kabashima was born in Kagoshima\, Japan\, in 1996. She began studying piano at the age of three and later pursued composition at Senzoku Gakuen College of Music in Tokyo. After completing her undergraduate studies in 2021\, she entered the master’s program in composition at Toho College of Music\, where she studied with Kazuro Mise and Hitomi Kaneko\, and explored computer music under the guidance of Takayuki Rai. 
She earned her master’s degree in March 2025.\nHer works have been selected at international festivals including the New York City Electroacoustic Music Festival (NYCEMF) in 2023\, the International Computer Music Conference (ICMC) in 2023\, 2024\, and 2025. \n  \nMara Helmuth\, Esther Lamneck and Alfonso Belfiore: Wind Blown Rain\nWind Blown Rain was inspired by natural processes and forces involving water. Water metamorphoses between many opposing states: from a gentle drizzle to a stormy downpour\, from a tiny droplet to a crashing ocean. Life on earth is dependent on water\, and also at its mercy. This piece focuses mainly on the transformed sounds of rain\, and its reflections in the tárogató sound. Samples were recorded in Venice and Ascea\, Italy. The music was composed in Italy in the summer of 2025 at Wassard Elea Artist’s residency in Ascea by a computer music composer and a performer/real time composer. While most of our previous collaborations have relied solely on the sound of the performer’s instrument for the computer part\, in this piece the instrumentalist interacts primarily with music created from natural recordings and their processed transformations. A third artist created the video part in response to the music from his own water-related video recordings. The video component of Wind Blown Rain is a visual meditation on the natural landscape\, filtered through the inner rhythm of rainfall. Created with images generated and modified using artificial intelligence\, the editing alternates slow-motion sequences\, crossfades\, and subtle variations to evoke a dilated sense of time. The environment\, immersed in rain\, transforms gradually\, suggesting a fragile balance between presence and dissolution. The visual work accompanies the music as a mental landscape—fluid and contemplative. \nAbout the artists\nMara Helmuth (b. 1957)\, internationally known computer music composer/researcher\, received a Guggenheim Fellowship in 2025. 
Her research explores sonification\, granular synthesis\, wireless sensor networks\, Internet2\, and RTcmix. She is Professor at the College-Conservatory of Music\, University of Cincinnati\, where she received the George Rieveschl Award for Scholarly / Creative Works in 2023. She served on the International Computer Music Association board of directors and as its President. D.M.A.: Columbia Univ.\, earlier degrees: Univ. Ill. U-C. \nEsther Lamneck\, Clarinet and Tarogato\nThe New York Times calls Esther Lamneck “an astonishing virtuoso.” She has appeared as a soloist with major orchestras\, with renowned chamber music artists\, and with an international roster of musicians from the new music improvisation scene. http://www.estherlamneck.com/ \nAlfonso Belfiore is a composer and visual artist whose work explores the relationships between sound\, image\, movement\, and perception. Former professor of electronic music at the Conservatories of Florence and Padua\, he has collaborated with international institutions\, creating performances\, sound installations\, and multidisciplinary projects that merge musical innovation with digital art. His recent work investigates memory\, dreamlike space\, and the fragile line between reality and imagination. \n  \nKotoka Suzuki: Delicate Anticipation\nThis work is written as part of the series “In Praise of Shadows\,” inspired by Junichiro Tanizaki’s essay of the same title\, written at the birth of the modern era in imperial Japan. The essay describes how shadows and negative space are integral to traditional Japanese aesthetics in music\, architecture\, and food\, extending even to the design of everyday objects. 
As Tanizaki explains\, “We find beauty not in the thing itself but in the patterns of shadows\, the light and the darkness\, that one thing against another creates… Were it not for shadows\, there would be no beauty.” \nThe first work in the sequence\, “In Praise of Shadows” for three paper players and electronics\, focuses on the collective loss of the tangible in our modern life\, analogous to how the excessive illumination of Edison’s modern light affected Japanese aesthetics and culture. Following this work\, “Orison” is composed for three music box players and electronics. The work is further inspired by the voices of children of war\, both past and present\, speaking and singing about hope and peace as well as sorrows arising from their personal experiences. These melodies\, presented as empty spaces on the music score\, are revealed as they are fed through the music boxes. \nIn the third part of the sequence\, “Delicate Anticipation\,” written for a solo percussionist\, electronics\, and lights\, shadow is the central focus\, honouring the “patterns of shadows\, the light and the darkness\, that one thing against another creates”. Positioned behind the scrim\, the percussionist is only visible as a shadow while performing with lights and instruments primarily of metal and skin\, manipulating patterns of carefully choreographed shadows. The title derives from the English translation of the essay\, which describes the sensation of gazing at the silent liquid in the dark depths of a Japanese lacquerware bowl. As Tanizaki writes\, “What lies within the darkness one cannot distinguish… the fragrance carried upon the vapor brings a delicate anticipation.” \nAbout the artists\nKotoka Suzuki’s work engages deeply with the visual\, conceiving of sound as a physical form to be manipulated through the sculptural practice of composition. 
Artists such as the Arditti Quartet\, Eighth Blackbird\, Nouvel Ensemble Moderne\, and Mendelssohn Chamber Orchestra (Leipzig) have featured her work internationally through numerous venues and broadcasts\, including BBC Radio 3\, Schweizer Radio\, Lucerne Festival\, Heroines of Sound Festival\, Ultraschall\, and ZKM Media Museum. Suzuki is currently an Associate Professor at the University of Toronto. \nMichael Murphy is a Chinese-Canadian percussionist praised by The New York Times\, Opera Canada\, and The Herald. He has toured across North America\, Europe\, Scandinavia\, and Asia\, performing with ensembles including the Toronto Symphony Orchestra\, the National Ballet of Canada Orchestra\, and Philharmonisches Orchester Freiburg. A leading advocate for new music\, he has premiered concertos by Alice Ping Yee Ho\, Liam Ritz\, and Bob Becker and champions contemporary repertoire internationally. \n  \nYu Chung Tseng: Air-Carving Bamboo \n“Air-Carving Bamboo Music” premiered at the 2025 C-LAB Sound Arts Festival_DIVERSONICS. The work is an acousmatic/electroacoustic piece. Its material comes from the composer’s field recordings of bamboo colliding on the shores of Emei Lake in his hometown of Hsinchu County in Taiwan. 
Through editing and transformation using DAW software\, and incorporating feedback material from the AI system Somax 2 on some of the bamboo collision rhythms\, the work was finally organized into an electroacoustic piece.\nIn terms of performance style\, the composer wanted to depart from traditional playback-only electroacoustic presentation\, creating a synesthetic aesthetic experience for both the ears and the eyes and making the electroacoustic music visible.\nThe composer invited percussionist Hsieh Yi-chieh to wave glow sticks in the dark\, as if drawing out or sculpting the electroacoustic music in the air\, a technique akin to “grabbing music from a distance.” This presentation method\, besides giving electroacoustic music a performative quality\, greatly enhances the visual appeal\, auditory appeal\, and sonic dramatic tension of the performance. Postscript: Having composed electroacoustic music for more than two decades\, the composer occasionally wants to dabble in this area\, slightly transcending the aesthetic/philosophical view of “sound-only/purely auditory” listening in acousmatic/electroacoustic music. \nAbout the artist\nYu-Chung Tseng\, who received his DMA from the University of North Texas\, is a professor of electronic music composition and director of the multi-channel Sound Lab at the Institute of Music of National Yang Ming Chiao Tung University (NYCU) in Taiwan. 
\nHis music\, written for both acoustic and electronic media\, has been recognized with selections and awards from the Pierre Schaeffer International Computer Music Competition (1st Prize\, 2003)\, the Città di Udine International Contemporary Music Competition\, Musica Nova (First Prize\, 2010)\, Metamorphoses\, the International Computer Music Conference (ICMC\, Best Music Award 2011\, 2015\, and 2022)\, the Taukay Edizioni Musicali call for acousmatic music (Winner\, 2019)\, the RMN Classical electroacoustic call for works (Winner\, 2023)\, the Polish International Electroacoustic Music Competition (Finalist\, 2023)\, and the KLANG International Acousmatic Composition Competition (Second Prize\, 2023). \n 
URL:http://icmc2026.ligeti-zentrum.de/event/concert-1b/
LOCATION:Friedrich-Ebert-Halle\, Alter Postweg 34\, Hamburg\, 21075\, Germany
CATEGORIES:11-05,Concert,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T213000
DTEND;TZID=Europe/Amsterdam:20260511T233000
DTSTAMP:20260428T185011
CREATED:20260421T145800Z
LAST-MODIFIED:20260423T185733Z
UID:10000067-1778535000-1778542200@icmc2026.ligeti-zentrum.de
SUMMARY:Club Concert 1C
DESCRIPTION:Immerse yourself in a 20.8-channel sound world: in the Production Lab of the Ligeti Center\, neural synthesis\, artificial intelligence\, and interactive visuals merge into an immersive live experience. International artists present innovative prototypes—from AI-augmented string instruments to dynamic graphic scores. \n  \nProgram Overview\nZwischenheit \nRiccardo Ancona \nKnitting\nBrian Lindgren \nSonic Memories: A Live Coding Performance with Machine-Learned Sound Fragments\nRiccardo Mazza \nGradient Noise: Animated Scores with Corresponding Data Streams\nJohn C.S. Keston \nFluid Ontologies\nNicola Leonard Hein and Viola Yip \nOn The Edge\nKasey Pocius \nScarittera – Subterranean Eruptions of Sonic Memory\nDanilo Randazzo \n\n\n  \nAbout the pieces & artists\nRiccardo Ancona: Zwischenheit \n\nContemporary neural audio research frames “music understanding” as a computational task. What does it mean for a machine to listen and understand a sonic context? Zwischenheit (2025) is an audiovisual performance that aims at finding a speculative\, empirical\, situated answer. The projection shows the performer having an improvisational dialogue with an algorithmic system composed of an audio captioner and a local language model. While the sound piece unfolds\, it reveals a complex scenario made of overlapping soundscapes. The language model is prompted to interpret the music as it flows\, trying to provide a nuanced understanding of the sonic situation. The human performer\, on the other hand\, is both inquisitive and reflective: at which threshold does the language model begin to appear as an agent of mystification? What does agency without consciousness reveal about listening? The outcomes of the dialogue change at every performance\, as there is a certain degree of stochasticity in the model’s replies\, but they always point at critical aspects of sonic hermeneutics and computational cognition. 
Embodiment\, contingency\, and situatedness emerge as essential characteristics of human listening that contemporary neural networks cannot embed. Zwischenheit is thus an attempt at investigating the performative possibilities that emerge at the intersection between post-acousmatic music\, music information retrieval\, and generative AI through an analytical self-reflection. \nAbout the artist\nRiccardo Ancona is a sound artist and PhD researcher in musicology of algorithmic music at the University of Bologna. He studied at CREA (Frosinone) and at the Institute of Sonology (Den Haag)\, where he specialized in algorithmic improvisation. His research focuses on computational aesthetics\, archival study of computer music\, and the sociology of neural audio technologies. He also curates Miniature Recs. \n  \nBrian Lindgren: Knitting \nKnitting is a new work for the EV\, an augmented bowed string instrument that integrates IRCAM’s RAVE (Realtime Audio Variational autoEncoder) neural synthesis model. The composition explores how machine learning can extend the timbral vocabulary of a traditional gestural practice—not by imposing external sonic material\, but by folding the instrument’s own acoustic identity back through a neural lens. \nThe EV combines a 3D-printed body with four infrared optical pickups whose signals are processed by a Bela board and transmitted to a laptop running Pure Data. Each string controls an independent synthesis engine comprising convolution\, physical modeling\, granular processing\, reverb\, and ambisonic spatialization. The recent addition of RAVE introduces a self-referential pathway: the model was trained on four hours of the EV’s own recordings\, creating a system that listens to itself through learned representations of its sonic history. 
\nCentral to this integration is a control strategy that maps performance descriptors—fundamental frequency\, amplitude\, and spectral centroid—to specific dimensions of the model’s eight-dimensional latent space. By constraining each modulation source to a single latent dimension\, the relationship between gesture and neural response becomes legible: a shift in bow pressure or position translates into a navigable timbral trajectory rather than an opaque transformation. This approach distinguishes the EV from other RAVE-integrated instruments\, which often emphasize loop-based or tabletop interfaces rather than continuous bowed-string control. \nKnitting treats this latent space as a landscape of sonic possibility\, each dimension a potential resonance between physical gesture and synthesized response. The compositional process is less one of arranging fixed materials than of cultivating emergent textures—drawing out sonic filaments\, crossing and interlacing them\, balancing tensions across the tapestry. The neural model functions as a meta-resonator: a parallel pathway that refracts the instrument’s timbral identity through an alternate causal route\, revealing aspects of its sound that remain latent in conventional electroacoustic processing. \nThe work demonstrates how neural synthesis can be embedded within a hybrid instrument ecology\, extending expression beyond pitch and amplitude to make performance descriptors direct agents of timbral transformation. By grounding latent navigation in the acoustic features of bowed-string technique\, Knitting positions machine learning not as a replacement for embodied practice but as an expansion of its expressive range. \nAbout the artist\nBrian Lindgren (1983) is a composer\, researcher\, violist\, and instrument builder whose work explores the convergence of acoustic performance and digital synthesis through the EV\, a hybrid string instrument integrating lutherie and embedded computing. 
\nHis compositions and research have been featured at the International Computer Music Conference (ICMC)\, New Interfaces for Musical Expression (NIME) conference\, Conference on Neural Information Processing Systems (NeurIPS)\, Society for Electro-Acoustic Music in the United States (SEAMUS)\, IRCAM Forum\, and International Conference on Auditory Display (ICAD)\, and published in Organised Sound. His work has been performed by ensembles including HYPERCUBE\, LINÜ\, Popebama\, and Tokyo Gen’on Project. \nThe EV was a finalist in the 2026 Guthman Musical Instrument Competition and was used to compose ‘two tales from the shadows of the grid’\, which won first place at the IEEE Big Data 2025 3rd Workshop on AI Music Generation Competition. \nLindgren holds an MFA in Sonic Arts from Brooklyn College (Subotnick\, Geers\, Gimbrone)\, a BA from the Eastman School of Music (Graham)\, and is pursuing a PhD at the University of Virginia (Burtner). \n  \nRiccardo Mazza: Sonic Memories: A Live Coding Performance with Machine-Learned Sound Fragments \nDrawing from Henri Bergson’s concept of “durée” and Deleuze’s rhizomatic models\, “Sonic Memories” reimagines memory not as a linear chronological archive\, but as a stratified field of coexisting planes. In this live coding performance\, autobiographical sound fragments—from mechanical gears to lagoon soundscapes and fragile voices—are liberated from their timeline and reorganized by an autoencoder into a non-hierarchical\, navigable map. \nThe performance begins with the simple act of loading a personal audio file—a field recording from a journey\, a voice memo\, a musical fragment—into a computational system that immediately begins to analyze and reorganize these sonic memories according to its own logic. \nOn stage\, the audience sees everything: the code acting in real-time\, a visual map where memories become points in space\, oscilloscopes showing the transformation of sound waves. 
This transparency is essential—there is no mystification of the technological process\, but rather an invitation to witness the negotiation between human remembering and algorithmic interpretation. \nThe performer navigates this latent space using SuperCollider and FluCoMa\, triggering both the original “concrete” traces and their AI-generated “distorted echoes.” The algorithm serves not as an autonomous agent\, but as a refracting lens\, forcing the performer to negotiate between faithful recall and neural hallucination. The result is a fragile dialogue between the fixity of the past and the malleability of the present\, exploring how computational tools can actualize memory as a living\, reconstructive act. \nThe work asks: How do we perform memory in an age of machine learning? Not by having machines remember for us\, but by creating dialogues with computational systems that reorganize our experiences according to their own logic\, forcing us to rediscover our own histories through unfamiliar maps. \nAbout the artist\nRiccardo Mazza (Turin\, 1963). Composer\, multimedia artist\, and faculty member at the Scuola di Alto Perfezionamento Musicale di Saluzzo. He collaborates with SMET (Electronic Music School) at the Conservatorio di Torino and the Conservatorio Ghedini in Cuneo\, and is internationally recognized for his research in psychoacoustics and spatial audio.\nIn 1997 he began a collaboration with Franco Battiato\, focusing on new technologies for sound. Between 1999 and 2000 he created the Renaissance SFX library\, the first Dolby Surround-encoded spatial effects and field recording collection for cinema and television. 
He later developed SoundBuilder\, software for object-based surround design presented at AES 2003 in San Francisco\, which anticipated Dolby Atmos.\nHe founded Interactive Sound in 2001\, a research studio dedicated to multimedia exhibitions and immersive installations\, and in 2003 patented a psychoacoustic model of “sleep waves.” With Laura Pol\, he co-founded Project-TO (2015)\, an electronic and visual project that has released four albums and appeared at major festivals including TFF\, TJF\, Robot\, and Share Festival.\nSince 2018 he has directed Experimental Studios in Turin\, one of Europe’s leading Dolby Atmos recording facilities. His current project Sonic Earth explores environmental sonification and algorithmic composition\, and has been presented internationally at ICMC 2025 in Boston\, FARM/SPLASH 2026 in Singapore\, SBCM 2025 (Brazil)\, and IEEE 2025 (L’Aquila). \n  \nJohn C.S. Keston: Gradient Noise: Animated Scores with Corresponding Data Streams\nSince 2019 I have been composing animated graphic scores for ensembles and soloists. These generative works are projected for both the performers and audience to experience. Custom software runs during the performance to create the computer graphics and geometric forms. Rules are established for how the forms are read\, but improvisation and the emotional response of the performer still play an integral part in each piece. Fixed media of this work does not suffice because it lacks the realtime\, generative\, and participatory aspects that create surprise and challenges for the performers. \nMore recently I began composing scores that not only generate animated visuals\, but also stream corresponding MIDI data that impacts the timbre and signal processing of the electronic instruments used by the performers. The instruments are either hardware-based synthesizers or virtual instruments within a DAW such as Ableton Live. 
One of my recent compositions applies these streams of data to four layers of FM synthesis engines running within the Dirtywave M8\, a technically advanced\, modern hardware tracker. \nMy newest work in progress\, Gradient Noise\, translates values generated by the Perlin noise algorithm into independent layers of seamless loops repeating at variable intervals. These loops are visualised as geometric forms\, abstract visualisations\, and evolving structures. The data generated is innovative because\, although aleatoric\, the values can be tuned to range from slowly moving gradients to rapid\, angular forms. When the sound and visuals are synchronized\, the performer responds not only to the animation but also to the changes in the timbre of their instruments. \nThe debut of Gradient Noise will address the themes of Innovation\, Translation\, and Participation by rethinking the relationships between musicians and machines. By translating the properties of n-dimensional Perlin noise into a musical language\, the piece presents a unified ecosystem with coordinated timbres and geometric forms. The innovation lies in generating a living environment that requires active participation and improvisation\, in contrast to static notation. Ultimately\, the work presents a contemporary model for computer music where the performer does not simply follow a score\, but negotiates a path through a responsive\, multi-sensory experience. \nAbout the artist\nJohn C.S. Keston is an award-winning transdisciplinary artist reimagining how music\, video art\, and computer science intersect. His work both questions and embraces his backgrounds in music technology\, software development\, and improvisation\, leading him toward unconventional compositions that convey a spirit of discovery and exploration through the use of graphic scores\, chance and generative techniques\, analog and digital synthesis\, experimental sound design\, signal processing\, and acoustic piano. 
Performers are empowered to use their phonomnesis\, or sonic imaginations\, while contributing to his collaborative work. Keston founded the sound design resource AudioCookbook.org\, where you will find articles and documentation about his projects and research. \nJohn has spoken\, performed\, or exhibited original work at SEAMUS (2025)\, Radical Futures (2024)\, New Interfaces for Musical Expression (NIME 2022)\, the International Computer Music Conference (ICMC 2022)\, the International Digital Media Arts Conference (iDMAa 2022)\, International Sound in Science Technology and the Arts (ISSTA 2017-2019)\, Northern Spark (2011-2017)\, the Weisman Art Museum\, the Montreal Jazz Festival\, the Walker Art Center\, the Minnesota Institute of Art\, the Eyeo Festival\, INST-INT\, Echofluxx (Prague)\, and Moogfest. In 2017 he was commissioned by the Walker Art Center to compose music for former Merce Cunningham. He has appeared on more than a dozen albums\, including solo albums and collaborative works. \nNicola Leonard Hein and Viola Yip: Fluid Ontologies\nIn “Fluid Ontologies”\, Transsonic (Nicola Leonard Hein and Viola Yip) continues to expand their intermedial artistic practice in performance. For this project\, they developed their laser feedback instruments\, using lasers as sound sources and solar panels as microphones. With the incorporation of multichannel spatialization\, Transsonic extends the spatial dimensions\, sonically and visually\, creating a unique audiovisual experience. The project explores and defines new concepts of the instrumentality of light in audio circuits\, bringing together space\, bodies\, and instruments into a dynamic feedback system. \nAbout the artists\nDr. Nicola L. 
Hein is a sound artist\, guitarist\, composer\, researcher\, programmer\, and professor of Sound Arts and Creative Music Technology at the University of Music Lübeck.\nHe works with A.I.-assisted human-machine interaction\, postdigital lutherie\, intermedia\, sound installations\, augmented reality\, network music\, and spatial audio. His works have been realised in more than 30 countries\, at festivals such as MaerzMusik\, Sonica\, and Experimental Intermedia. \nDr. Viola Yip is an experimental performer\, sound artist\, and instrument builder.\nHer work has been presented and supported by places such as Stanford University\, UC Berkeley\, Harvard University\, Cycling ‘74 Expo\, Hong Kong Arts Center\, Academy of Media Arts Cologne\, Academy of the Arts Berlin\, KTH Royal Institute of Technology Sweden\, Elektronmusikstudion EMS Stockholm\, NOTAM Oslo\, Arter Museum Istanbul\, Serralves Museum of Contemporary Arts Porto\, and Pinakothek der Moderne in Munich. \nviolayip.com \n  \nKasey Pocius: On The Edge \nOn the Edge is an audiovisual work for video\, T-Stick\, and surround sound. The piece explores sounds and images of objects often on the edges of our perception\, as well as processing and results from edge cases in musical algorithms and technology. \nThe piece consists of four interlayered vignettes\, exploring the behaviour and textural qualities of various edge and peak detection algorithms to create the fixed media. These files are then used as the corpus for the granular synthesis controlled by the T-Stick. The gestural data from the T-Stick is sent from Max to Ossia\, where it is used to manipulate the treatment of the video clips in real-time. \nThe technical aspects of the work consist of a fixed-media ambisonic file\, with real-time manipulation of video clips (in Ossia Score) and multichannel granular synthesis (in Max) controlled by the T-Stick. 
\nAbout the artist\nKasey Pocius is a gender-fluid intermedia artist and researcher based in Montreal\, teaching at Concordia and active with CIRMMT\, IDMIL\, LePARC\, and GRMS. They create electroacoustic and audiovisual works that explore interactive electronics\, spatial sound and collaborative improvisation\, with pieces programmed globally from DIY spaces to Harvard. \n  \n\n\n\n 
URL:http://icmc2026.ligeti-zentrum.de/event/club-concert-1c/
LOCATION:ligeti center\, Production Lab (10th floor)\, Veritaskai 1\, Hamburg\, 21079\, Germany
CATEGORIES:11-05,Club Concert,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
END:VCALENDAR