BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//ICMC HAMBURG 2026 - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:http://icmc2026.ligeti-zentrum.de
X-WR-CALDESC:Events for ICMC HAMBURG 2026
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Amsterdam
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20250330T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20251026T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20260329T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20261025T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20270328T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20271031T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260510T193000
DTEND;TZID=Europe/Amsterdam:20260510T220000
DTSTAMP:20260429T121824Z
CREATED:20260421T081038Z
LAST-MODIFIED:20260423T112509Z
UID:10000070-1778441400-1778450400@icmc2026.ligeti-zentrum.de
SUMMARY:Opening Concert
DESCRIPTION:Program Overview\nIntroduction \nAlexander Schubert – SCANNERS (2013)\nfor string quintet\, choreography\, and electronics (12 min) \nNicole Brady – Ricochet (World Premiere 2026)\nfor chamber orchestra and live electronics (10 min) \nAnthony Paul De Ritis – Filters (2015 / 2026)\nfor alto saxophone\, string orchestra\, and live electronics (10 min) \nIntermission (25 min) \nAigerim Seilova / Steffen Lohrey – Breath Mechanics (World Premiere 2026)\nfor two soprano saxophones\, string ensemble\, and live electronics (10 min) \nClarence Barlow – Im Januar am Nil (1984)\nfor ensemble (approx. 25 min) \nShort break (10 min) \nClosing & Conference Information (15 min) \n  \nPerformers\nEnsemble Resonanz – strings\nAsya Fateyeva – saxophone\nVlatko Kučan – saxophone\nJohn Eckhardt – double bass\nDulguun Chinchuluun – piano\nLin Chen – percussion \nConductor\nFriederike Scheunchen \nFind out more about the musicians playing at ICMC HAMBURG 2026 here.  \n  \nAbout the pieces\nAlexander Schubert: SCANNERS (2013)\nfor string quintet\, choreography\, and electronics \nThe piece SCANNERS engages with the physical qualities of instrumentalists in electro-acoustic music. It is a choreographed composition that treats movement as being as important as sound. The string ensemble turns into a performing machine. The main focus is on the movement of scanning – both in the interaction of bow and instrument when producing sound and in purely artificial gestures. There is no difference between musically necessary and choreographically determined movement. The piece can be seen as a comment on the relationship of humans to digital content: the direct consequences of an action can no longer be explained by simple cause-and-effect principles\, and the musicians become puppets\, or at least part of a complex machine. 
At the same time the piece offers a special focus on the highly specialized genre of the string orchestra: the mechanization emphasizes the accuracy of the interpreters and the elegance of the traditional movements\, here staged independently of the production of sound.\nScanners belongs to a series of compositions that deal with physicality\, such as Point Ones\, with an interactive conductor\, or LaPlace Tiger\, with a sensor-wired drummer. \nAbout the composer\nAlexander Schubert (1979) studied bioinformatics and multimedia composition. He is a professor at the Musikhochschule Hamburg. Schubert’s work explores the border between the acoustic and the electronic world. In music composition\, immersive installation and staged pieces he examines the interplay between the digital and the analogue. He creates pieces that realize test settings or interaction spaces that question modes of perception and representation. Recurring topics in this field are authenticity and virtuality. The influence and framing of digital media on aesthetic views and communication is examined from a post-digital perspective. Recent research topics in his works include virtual reality\, artificial intelligence and online-mediated artworks. Schubert is a founding member of ensembles such as “Decoder”. His works have been performed more than 700 times in recent years by numerous ensembles in over 30 countries. \n  \nNicole Brady: Ricochet (World Premiere 2026)\nfor chamber orchestra and live electronics \nRicochet explores the idea of deviation from an expected path after an initial impact\, leading to new directions. Inspired by the ricochet bowing technique\, this concept unfolds both physically and metaphorically within the ensemble.\nA responsive electronic system listens to the orchestra and generates a parallel sonic layer. Energetic passages produce scattered\, percussive textures\, while quieter material leads to dense\, sustained sound fields. 
The system alternates between listening and generative modes\, interacting closely with the performers.\nSubtle references to composers such as Couperin\, Ravel\, and Mozart connect historical material with contemporary sound\, while the electronics act as an additional\, autonomous voice within the ensemble. \nAbout the composer\nNicole Brady is an award-winning composer and creative director whose work spans concert music\, immersive installation\, and video game franchises including Final Fantasy\, Tekken\, and Valkyria Chronicles. Her work has been honoured by the Peabody Awards and IndieCade\, and her immersive sound album Lost Palace was released with the Royal Scottish National Orchestra. Recent commissions and performances include the Omega Ensemble\, Melbourne Symphony Orchestra\, Flinders Quartet\, and Lyris Quartet. As creative director of WLDR studio\, her immersive multisensory works have reached over 20\,000 participants across Illuminate Adelaide and Spier Light Art Festival. Nicole is a researcher at the Melbourne Conservatorium of Music and recipient of the Director’s Award for Exceptional Doctoral Research. \n  \nAnthony Paul De Ritis: Filters (2015 / 2026)\nfor alto saxophone\, string orchestra\, and live electronics \nOriginally composed for alto saxophone and electronic playback\, Filters explores the layering and spatial diffusion of sound. Recorded saxophone material creates a “second” voice\, blending with the live soloist into a unified\, resonant field.\nIn this version for saxophone\, string orchestra\, and multi-channel electronics\, the ensemble extends these layers\, producing a rich interplay between live instruments and their electronically mediated “shadows.”\nThe solo saxophone remains at the expressive center\, while the surrounding textures generate depth\, movement\, and an immersive spatial experience. 
\nAbout the composer\nDescribed as a “genuinely American composer” (Gramophone)\, “a bit of a visionary” (Audiophile Audition)\, and “bracingly imaginative” (The Boston Globe)\, Anthony Paul De Ritis has received performances around the world\, including at Lincoln Center\, Beijing’s Yugong Yishan\, Seoul’s KT Art Hall\, the Italian Pavilion at the 2015 World Expo in Milan\, and UNESCO headquarters in Paris. \nDe Ritis’s 2012 release “Devolution” by the GRAMMY® Award-winning Boston Modern Orchestra Project\, featuring Paul D. Miller aka DJ Spooky as soloist\, was described as a “tour de force” (Gramophone)\, his “Pop Concerto” (2017) featuring Eliot Fisk was lauded as “a major issue of American music” (Classical CD Review)\, and his “Electroacoustic Music – In Memoriam: David Wessel” (2018) was cited as among the “Best of 2018” in the electronic music category (Sequenza 21). \nHe holds a Ph.D. from the University of California\, Berkeley\, and is a Professor at Northeastern University\, where he co-founded the music technology program. \n  \nAigerim Seilova and Steffen Lohrey: Breath Mechanics (World Premiere 2026)\nfor two soprano saxophones\, string ensemble\, and live electronics \nThis work is a composition for two soprano saxophones\, string ensemble (4.4.4.2)\, and 8.1 live electronics\, submitted for the ICMC Special Call 1: Ensemble Resonanz. The piece serves as a spectral dialogue with Clarence Barlow’s Im Januar am Nil\, adopting his strategies of timbral fusion and hocketing but transposing them into the age of machine learning. The central material is derived from “ChordsNest\,” a multiphonics palette extension for MaxScore\, which is repurposed here as a training set for a neural network. The compositional core is an “AI Translation Error” in which the model was tasked with reconstructing the cylindrical bore spectra of the digital archive using the conical bore of the live saxophones and the acoustic textures of the string ensemble. 
\nThe resulting score is a transcription of the AI’s “hallucinations\,” where the ensemble physically replicates the digital artifacts of the style transfer process. The 8.1 electronics mediate this through a dual-role feedback loop. They function first as a synthesized “externalized memory” of the source spectra and second as a live inferencing engine that generates “retrospective hypotheses” by attempting to recover source states from the acoustic performance. This architecture stages a recursive friction between the explicitly presented digital archive and the machine’s error-prone attempt to reconstruct it through physical sound. \nAbout the composers\nHamburg-based composer Aigerim Seilova integrates acoustics\, electronics\, and interactive media. A doctoral researcher at HfMT Hamburg\, she has had her works performed by Ensemble Modern and the Norwegian Radio Orchestra at festivals such as Tanglewood and the Chelsea Music Festival. Her awards include the Hindemith Prize\, the Leonard Bernstein Fellowship\, and the Radio France Prize. She serves as Deputy Chair of the DKV Hamburg\, promoting contemporary music and interdisciplinary exchange. \nBorn in Gießen in 1987\, Steffen Lohrey studied Digital Media with a focus on sound in Darmstadt and Multimedia Composition at the Hamburg University of Music and Drama (HfMT Hamburg). His work exists at the intersection of composition\, installation\, and code. He has been involved in a wide range of projects\, including Picadero with the Haa Collective (presented at venues such as Deltebre Dansa and the Fusion Festival)\, Crawlers with Alexander Schubert (ZKM Karlsruhe)\, and Shibboleth by Aigerim Seilova at HfMT Hamburg. His work and collaborations have been featured at Blurred Edges\, the Teatre Principal Terrassa\, and the GREC Festival\, among others. In addition\, Steffen Lohrey works as an audio engineer and sound designer in Hamburg. 
\n  \nClarence Barlow: Im Januar am Nil (1984)\nfor 2 soprano saxophones (1st+clarinet\, bass clarinet)\, 4 violins\, 2 celli\, double bass\, piano\, percussion  \nIm Januar am Nil was written in 1981 for Ensemble Köln – the instrumentation: two soprano saxophones\, percussion (five Japanese temple bells\, a Korean gong\, a crotale\, a cymbal\, a side drum and a bass drum)\, a piano\, four violins\, two cellos and a double-bass. In 1984 the completely revised piece was premiered in Paris by Ensemble Itineraire.\nThrough the piece runs a constantly repeated melody\, increasing both in length and density – new tones appear in the expanding gaps\, first in a purely auxiliary function\, but gradually harmonically rivalling the older tones. A single note at the start develops into a flowing melody moving from transparent tonality through multitonality to a dense self-destructive atonality.\nAt first the melody is played almost inaudibly by the bass clarinet\, amplified by overtones heard as natural harmonics in the strings: the resultant timbre is phonetic\, based on a Fourier analysis of German sentences (as for instance the title itself) containing only harmonic spectra\, namely liquids\, nasals and semi-vowels. Ideally these “scored Fourier-synthesized” words should be comprehensible\, but an ensemble of seven strings can only be approximative. After a few minutes of bass clarinet and strings\, the piano enters in an explicit rendition of the melody\, developing it as described above and timbrally coloured by “hocketing” soprano saxophones. The double bass now also explicitly plays the melody without further developing it – in a “frozen” state it is contrasted with the piano part and slows down during further repetitions due to its increasing length. \nAbout the composer\nClarence Barlow (1945–2023) was a composer and pioneer of computer music\, born into the English-speaking minority of Calcutta (now Kolkata)\, India. 
He received his early education there\, studying piano\, music theory\, and natural sciences\, and began composing at the age of twelve. After graduating in science from the University of Calcutta in 1965\, he worked as a conductor and teacher of music theory at the Calcutta School of Music.\nIn 1968\, Barlow moved to Cologne\, where he studied composition and electronic music at the Hochschule für Musik\, alongside studies at the Institute of Sonology in Utrecht. During this period\, he began using computers as a compositional tool\, becoming one of the early figures to explore algorithmic and computer-assisted composition.\nFrom the 1980s onward\, Barlow played a central role in shaping the field of computer music. He was closely associated with the Darmstadt Summer Courses\, where he directed computer music activities for over a decade\, and was a co-founder of GIMIK (Initiative Musik und Informatik Köln). He also held numerous academic positions across Europe\, including at the Royal Conservatory in The Hague\, where he served as Professor of Composition and Sonology and later as Artistic Director of the Institute of Sonology.\nFrom 2006 until his retirement\, Barlow was Corwin Professor of Composition at the University of California\, Santa Barbara. His work is characterized by a unique synthesis of mathematical rigor\, cultural hybridity\, and innovative approaches to musical structure\, making him one of the most distinctive voices in contemporary music. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/opening-concert/
LOCATION:Elbphilharmonie Hamburg\, Recital Hall\, Platz der Deutschen Einheit\, Hamburg\, 20457\, Germany
CATEGORIES:10-05,Concert,Music,Special Event
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260510T220000
DTEND;TZID=Europe/Amsterdam:20260510T233000
DTSTAMP:20260429T121824Z
CREATED:20260421T125335Z
LAST-MODIFIED:20260423T165418Z
UID:10000071-1778450400-1778455800@icmc2026.ligeti-zentrum.de
SUMMARY:Reception
DESCRIPTION:Following the opening concert\, attendees registered for ICMC HAMBURG 2026 are warmly invited to a reception in the Recital Hall Foyer for an opportunity to meet fellow artists\, researchers\, and conference participants in an inspiring setting overlooking Hamburg’s harbor.   \n 
URL:http://icmc2026.ligeti-zentrum.de/event/reception/
LOCATION:Elbphilharmonie Hamburg\, Recital Hall Foyer\, Platz der Deutschen Einheit\, Hamburg\, 20457\, Germany
CATEGORIES:10-05,Special Event
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T090000
DTEND;TZID=Europe/Amsterdam:20260511T103000
DTSTAMP:20260429T121824Z
CREATED:20260422T142107Z
LAST-MODIFIED:20260427T154807Z
UID:10000221-1778490000-1778495400@icmc2026.ligeti-zentrum.de
SUMMARY:Paper Session 1a: History of Computer Music
DESCRIPTION:Three papers will be presented and discussed: \n  \nHyunmook Lim: “The History of Japanese Electroacoustic Music for Piano from the Perspective of Media Genealogy”\nThis paper examines the history of compositions for piano and electronics in Japan through the lens of media genealogy. While the development of modern Japanese electronic music emerged nearly in parallel with its European counterparts\, it has often been perceived as lacking a distinctive trend or unified stylistic coherence\, unlike the established traditions of France’s Musique concrète or Germany’s Elektronische Musik. To address this\, the author categorizes the historically inconsistent trajectory of Japanese electronic music by focusing on works for piano and electronics\, tracing the genealogy of specific media that have emerged within the Japanese context. In response to the ICMC2026 theme\, “Innovation\, Translation\, Participation\,” this study provides a detailed analysis of technological innovation through media genealogies\, offers a new translation of this historical narrative\, and explores the processes of artistic participation that have shaped Japan’s electronic music history. \nPaulo C. Chagas: “Beyond Execution: Unrealizability and the Ontology of Sound in Computer Music”\nThis paper proposes an ontological reorientation of computer music grounded in the concept of unrealizability. Drawing on Giorgio Agamben’s notion of potentiality without act\, it argues that dominant paradigms of electroacoustic and computer music have historically privileged realization\, execution\, and operability as the primary conditions of sonic being. From early studio practices at the GRM and WDR to the consolidation of computer music as an executable\, code-based discipline\, sound has largely been understood as something that exists in order to be realized. Against this background\, the paper proposes to examine a series of practices that destabilize the primacy of execution. 
Practices such as granular synthesis\, live electronic and interactive systems\, and machine-learning-based processes foreground forms of sonic potentiality that cannot be fully individuated\, predicted\, or exhausted by realization\, thereby suggesting unrealizability not as a limitation but as a constitutive dimension of contemporary computer music. By framing sound as a field of suspended potential rather than a command to be executed\, the paper advances an alternative ontology in which listening becomes a mode of use rather than consumption. This perspective invites a reconsideration of compositional agency\, technological apparatuses\, and the political implications of sound practices beyond execution\, emphasizing openness\, contingency\, and inoperativity as critical resources for computer music today.\nAndrea Agostini: “Computer-Aided Composition: A Retrospective and Prospective Outlook”\nComputer-aided composition was established as an autonomous discipline\, distinct from the seemingly more general concept of computer composition\, in the 1980s. Since then\, it has prompted the development of dedicated software tools and specific compositional practices and attitudes. In spite of this\, a definition of what computer-aided composition actually is and\, subsequently\, a retrospective outlook on its past evolution and a prospective one on its possible futures has seldom if ever been attempted. Also\, while development and adoption of new tools has been uninterrupted through the decades\, theoretical reflection was especially thriving until the late 1990s or early 2000s\, and has lost vitality since. 
In this article\, we shall examine past literature in order to trace a historical overview of the term\, implicitly outlining a tentative definition of it and following through the most significant developments of computer-aided composition and its associated toolsets; attempt a necessarily partial overview of how it is practically understood and adopted today; and sketch a personal and incomplete wishlist of what the term could come to mean in some desirable future. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/paper-session-1-history-of-computer-music/
LOCATION:Hamburg University of Technology\, Building H\, Audimax 1\, Am Schwarzenberg-Campus 5\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Paper Session,Session
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T110000
DTEND;TZID=Europe/Amsterdam:20260511T123000
DTSTAMP:20260429T121824Z
CREATED:20260415T131036Z
LAST-MODIFIED:20260427T154901Z
UID:10000128-1778497200-1778502600@icmc2026.ligeti-zentrum.de
SUMMARY:Paper Session 2a: Music Information Retrieval
DESCRIPTION:Three papers will be presented and discussed:\n  \nAxel Berndt\, Aida Amiryan-Stein\, Manuel Peters\, Meinard Müller and Stefan Balke\, “ChoraleWind: An Expressive Wind-Quartet Dataset for End-to-End Rendering from the Neues Thüringer Choralbuch”\nWe introduce ChoraleWind\, a dataset along with a framework for a reproducible end-to-end rendering from the Neues Thüringer Choralbuch (NTCB). The dataset comprises 311 four-part chorales and covers the full pipeline from symbolic score encoding to performance-level rendition and synthesized audio. ChoraleWind includes a rule-based performance model that generates expressive timing\, dynamics\, and articulation\, including metric and structural accents as well as phrase-end gestures from high-quality MEI encoding of the NTCB chorales\, combined with a wind-instrument synthesis based on physical modeling that produces isolated stems and ensemble mixes. The dataset provides aligned symbolic representations\, performance annotations\, and multitrack audio\, enabling systematic training and evaluation of score-to-audio wind-quartet rendering methods under fully controlled conditions. Rather than aiming at state-of-the-art purely data-driven synthesis\, ChoraleWind is designed as a transparent and reproducible testbed for studying expressive performance generation\, timbre modeling\, and evaluation of wind-quartet rendering systems.\n\nMário Pereira\, António Sá Pinto\, Treasa Harkin and Gilberto Bernardes\, “Computational Analysis of Expressive Tempo in Irish Traditional Dance Music”\n\nThis paper presents a computational study of expressive tempo in Irish traditional dance music\, analysing 136 annotated performances of reels and jigs. Using beat-level tempo calculation\, predominant-tempo estimation\, and deviation-curve analysis\, we examine how timing varies across tune types\, performance settings\, and musical structure. 
Results show that expressive deviations are generally subtle: reels display a mild deceleration tendency\, jigs remain highly tempo-stable\, and solo–ensemble and instrument-specific differences are minimal. Phrase-level clustering reveals three characteristic deviation profiles\, with strong acceleration occurring only in opening phrases\, reflecting common slow-start performance practices. These findings provide\, to the best of our knowledge\, the first systematic quantitative characterisation of expressive timing in this tradition and highlight how micro-variations emerge from stylistic\, technical\, and interpretive factors while maintaining overall temporal stability.\nGilberto Bernardes\, Nádia Moura and António Sá Pinto\, “Perpetual Dialogues: A Computational Analysis of Voice–Guitar Interaction in Carlos Paredes’s Discography”\nComputational musicology enables systematic analysis of performative and structural traits in recorded music\, yet existing approaches remain largely tailored to notated\, score-based repertoires. This study advances a methodology for analyzing voice–guitar interaction in Carlos Paredes’s vocal collaborations—an oral-tradition context where compositional and performative layers co-emerge.\nUsing source-separated stems\, physics-informed harmonic modeling\, and beat-level audio descriptors\, we examine melodic\, harmonic\, and rhythmic relationships across eight recordings with four singers. Our commonality–diversity framework\, combining multi-scale correlation analysis with residual-based detection of structural deviations\, reveals that expressive coordination is predominantly piece-specific rather than corpus-wide. Diversity events systematically align with formal boundaries and textural shifts\, demonstrating that the proposed approach can identify musically salient reorganizations with minimal human annotation. 
The framework further offers a generalizable computational strategy for repertoires without notated blueprints\, extending Music Performance Analysis into oral-tradition and improvisation-inflected practices. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/paper-session-2-music-information-retrieval/
LOCATION:Hamburg University of Technology\, Building H\, Ditze Hörsaal (H 0.16)\, Am Schwarzenberg-Campus 5\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Paper Session,Session
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T110000
DTEND;TZID=Europe/Amsterdam:20260511T123000
DTSTAMP:20260429T121824Z
CREATED:20260422T142327Z
LAST-MODIFIED:20260427T155104Z
UID:10000222-1778497200-1778502600@icmc2026.ligeti-zentrum.de
SUMMARY:Paper Session 2b: AI & Music
DESCRIPTION:Three papers will be presented and discussed: \n  \nHiroshi Yamato: “OrbitScore: A Domain-Specific Language for Polymetric Live Coding Based on Multilayered Temporal Structures”\nThis paper presents OrbitScore\, a domain-specific language (DSL) for live coding polymetric rhythm patterns based on the theory of Multilayered Temporal Structures (MLTS). While existing live coding languages such as TidalCycles and Sonic Pi provide rich pattern manipulation capabilities including polyrhythmic support\, OrbitScore offers an intuitive syntax where the beat(n by m) notation directly maps to the theoretical 4:(n/4) framework\, enabling each sequence to maintain its own meter and allowing performers to create intricate polyrhythmic textures in real-time. The system integrates with SuperCollider for low-latency audio synthesis and provides a declarative\, method-chaining syntax designed for live performance. We describe the theoretical foundation\, DSL design\, implementation architecture\, and demonstrate the system’s capabilities through a live coding performance. Our contribution lies in bridging the gap between the theoretical framework of Multilayered Temporal Structures and practical live coding tools\, making polymetric expressions accessible to performers. \nYuan Zhang and Xinran Zhang: “Hexagram-Based Semantic Composition: Discretizing Embedding Spaces into Symbolic Compositional States for Improvised Performance”\nDiffusion-based text-to-audio (TTA) systems such as Udio have introduced a mode of musical making in which linguistic prompts activate high-dimensional latent manifolds to yield contingent\, non-repeatable sonic artefacts. This generative architecture—operating through intersemiotic translation between linguistic signs and high-dimensional latent space—produces distinctive aesthetic conditions that have yet to be adequately theorized. 
This paper introduces latent music as an emergent aesthetic form produced through generative text-to-audio systems such as Udio. Latent music arises from processes of interpolation\, recombination\, and associative drift within high-dimensional latent spaces—existing in states of perpetual becoming characterized by gradient identities\, interreferential drift\, asignifying ruptures\, and ontological indeterminacy. These emergent sonic forms occupy interstitial spaces between recognizable musical signs\, resisting categorical stability while revealing distinctive possibilities for sonic expression. The result is a field of sonic objects marked by spectrality\, liminality\, and cross-material entanglement—sounds that hover between genres\, gestures\, and perceptual thresholds. Drawing on Deleuzian aesthetics\, philosophy\, and an extensive corpus of prompt-generated sonic artifacts\, the paper situates these emergent forms as products of asignifying rupture and aesthetic drift\, where sonic identities dissolve and recombine in unstable assemblage determined by intersemiotic translation between linguistic prompts and audio materiality. This research offers a theoretical framework and critical vocabulary for engaging with these uncanny sonic entities\, proposing that latent music invites listening practices attuned to indeterminacy\, associative resonance\, and the productive tensions of the not-yet-formed. \nColton Arnold\, Zhaohan Cheng and Ajay Kapur: “AI Framework for Dynamic Robotic Instrument Calibration”\nThis paper presents a data-driven calibration framework for robotic musical instruments based on a hybrid ensemble model that combines K-nearest neighbors (KNN) and a multi-layer perceptron (MLP). KNN anchors predictions to recorded acoustic measurements\, while the MLP enables nonlinear generalization and smooth interpolation across the instrument’s playable range. 
A distance-dependent blending strategy integrates the two models\, improving consistency across sparse and dense data. The proposed approach produces stable and repeatable calibration estimates for both pitched and non-pitched instruments\, outperforming standalone models across a range of sampling conditions. This work establishes a scalable foundation for automated calibration in robotic musical systems. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/paper-session-2b-ai-music/
LOCATION:Hamburg University of Technology\, Building H\, Ditze Hörsaal (H 0.16)\, Am Schwarzenberg-Campus 5\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Paper Session,Session
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T110000
DTEND;TZID=Europe/Amsterdam:20260511T173000
DTSTAMP:20260429T121824Z
CREATED:20260421T181209Z
LAST-MODIFIED:20260429T084941Z
UID:10000184-1778497200-1778520600@icmc2026.ligeti-zentrum.de
SUMMARY:Listening Room 2
DESCRIPTION:Fixed Media: Program Overview\n430-+\nAyako Sato \nFMVP!\nGuanjun Qin \nLunar Current\nChufan Zhang\, Jun Wang and Qi Liu \nSawa\nAkiko Hatakeyama \nTake Me Back to Indonesia\nBoyi Bai \nVentward\nEd Osborn \nWoody\nAdrian Kleinlosen \nZen to Hearth\nYu Linke \n  \nAbout the pieces & artists\nAyako Sato: 430-+\nThe fundamental pitch of the 15th bamboo tube of the Shō\, “kotsu\,” corresponds to the current standard pitch of 430Hz in Gagaku. This acousmatic piece involves listening to 430Hz\, its harmonics\, the sounds that deviate from it\, and unreliable text about the Shō generated by AI. Perhaps. \nSho performance: DEGUCHI Miki \nAbout the artist\nAyako Sato is a composer\, musician\, artist\, and researcher working mainly in the field of electroacoustic music. Her works have been presented at international conferences and festivals (ICMC\, SMC\, NYCEMF\, ISMIR\, WOCMAT\, etc.) and won awards in international competitions (Prix Presque Rien\, Destellos Competition\, International UPISketch Competition\, etc.). She received her Ph.D. from Tokyo University of the Arts in 2019 for her research on Luc Ferrari’s works. After working as a part-time lecturer at Tamagawa University\, Osaka University of Arts\, Tokyo Denki University\, and Shobi Music College\, she is a lecturer at Shizuoka University of Art and Culture starting April 2025. \n  \nGuanjun Qin: FMVP!\nFMVP! is an electroacoustic composition built entirely from the sampled sounds of basketball — the bounce\, the squeak of shoes\, the swish of the net\, and the roar of the crowd. Through sound transformation and spatial movement\, the piece narrates the emotional journey of an athlete: from doubt and criticism to determination\, and finally to victory. Dedicated to basketball legend Stephen Curry\, FMVP captures the rhythm\, intensity\, and inner monologue of a player striving to redefine limits. Each percussive impact becomes a heartbeat; each layered resonance a moment of resilience. 
The composition explores how athletic struggle and artistic creation share the same pulse — persistence\, precision\, and belief. \nAbout the artist\nChampion (Guanjun) Qin is an award-winning composer\, producer\, and topliner\, currently pursuing a PhD in Music Composition at the University of Bristol\, fully funded by the China Scholarship Council (CSC). His works have been performed\, awarded\, or officially selected at major international music and sound art festivals\, including the Denny Awards (USA & China)\, YoungLione*ss Festival (Italy)\, Futura Festival (France)\, and the International Computer Music Conference (ICMC). Champion’s creative practice bridges electroacoustic composition and popular music production\, exploring the intersection of sound design\, cross-cultural aesthetics\, and narrative expression. He has collaborated with and composed music for renowned artists such as Jackson Wang\, a member of GOT7\, one of Asia’s most influential K-pop groups. His production work also extends to film and television\, including the acclaimed animated series GG BOND\, which drew over 50 million viewers in its first week of broadcast. \n  \nChufan Zhang\, Jun Wang and Qi Liu: Lunar Current\n“The ripples of moonlight surge and finally settle into stillness in the current. The trembling of electronic waves all find their peaceful end in the moonlit night.” – The pulses of electronic sound eventually merge into the gentle waves of moonlight\, just as the surges of electric current fade into the breath of the night. This work takes electronic waveforms simulating electric current as its core sound material. Through modulation and filtering processing in a digital audio workstation\, it employs techniques such as synthesizer wave shaping\, ambient reverb stacking\, and low-frequency oscillation to create auditory characteristics that blend the texture of electric current with the haziness of a moonlit night. Lunar Current is an immersive auditory experience. 
It attempts to capture not the moonlight itself\, but the sensory critical state where the quiet night and electronic current intertwine. At this moment\, the technological rhythms of electronic sound and the ethereal silence of the moonlit night together construct a gentle echo of a whispered conversation with the starry night. \nAbout the artists\nChufan Zhang (born in July 2006) is a sophomore at the Communication University of Zhejiang\, and also a young creator who delves into the fields of creative design and blockchain applications. Her representative works include Xuan and Mo Zang. Among them\, Xuan won the second prize in the East China Division of the National University Students Blockchain Competition\, and Mo Zang was awarded the third prize in the Future Designer Competition. During her studies at the university\, she not only won the first-class scholarship of the university but also was awarded the titles of “Merit Student” and “Outstanding Social Worker”\, demonstrating solid professional skills and cutting-edge innovative thinking in both academic research and competition practice. \nJun Wang  \nQi Liu \n  \nAkiko Hatakeyama: Sawa\nIt’s neither close nor far\, neither happened nor never happened. This is a short piano-and-electronics piece that captures a moment in an unfamiliar place. \nAbout the artist\nAkiko Hatakeyama is a composer\, performer\, and artist of electroacoustic music and intermedia. Akiko’s research focuses on realizing her ideas of relations between the body and mind into intermedia works\, often in conjunction with building customized instruments/interfaces. It is a form of nonverbal communication with her inner self and with the environment\, including the audience. Expression through sounds and performance brings her therapeutic effects\, helping her process memories and trauma. 
Her work has been presented internationally at various venues and festivals in the U.S.A.\, Canada\, Chile\, England\, Ireland\, Portugal\, New Zealand\, China\, South Korea\, and Japan. Selected awards include the Best Performance Award at the NIME International Conference\, the winner of the Audio-Visual Composition at the ICMA Showcase: Asia\, the George A. and Eliza Gardner Howard Foundation Fellowship\, and the MacDowell Fellowship. Akiko obtained her B.A. in music from Mills College and M.A. in Experimental Music/Composition at Wesleyan University and completed her Ph.D. in the MEME program at Brown University. Her mentors include Alvin Lucier\, Anthony Braxton\, Ronald Kuivila\, Maggi Payne\, Chris Brown\, John Bischoff\, James Fei\, and Butch Rovan. She is currently an associate professor of Music Technology at the University of Oregon. \n  \nBoyi Bai: Take Me Back to Indonesia\nThis work is rooted in a field recording made in Madobag Village\, Mentawai Islands\, Indonesia\, capturing children playing near an old well. As a sonic memory\, it inspired the composer to reflect on the contrast between fleeting moments of travel serenity and the pressures of everyday life. The work explores the tension between two acoustic worlds. It opens with the calm of the island\, employing gentle drones and textures to construct a dreamlike space between the external environment and internal memory\, reimagining how memories emerge in times of longing. Sharp phone alarms and daily noises then shatter this tranquil soundscape\, marking the collapse of the imagined realm. In the end\, the work maintains an open\, unresolved narrative tension\, oscillating between memory and the present. 
\nAbout the artist\nBoyi Bai is a composer and sound artist specialising in field recording\, soundscape composition\, and interactive VR spatial audio\, whose practice-led works transform environmental sound into immersive auditory spaces while exploring the intrinsic relationships between place\, memory\, and media. His works have been widely presented at internationally acclaimed festivals\, art exhibitions\, and radio programmes\, including BBC Radio 6\, TagTEAMS 2026\, MA/IN Festival\, SOUND/IMAGE Festival\, MANTRA\, PAYSAGES | COMPOSÉS Festival\, and the San Francisco Tape Music Festival\, building an extensive exhibition profile in the global fields of sound art and electroacoustic music. His distinctive artistic approach has been recognised with the Gold Award in the Electronic Acousmatic Music category at the 6th Denny Awards Electronic Music Competition\, a shortlist for the Sound of the Year Awards 2024\, and other internationally recognised professional honours. \n  \nEd Osborn: Ventward\nVentward is built from recordings of several performances using tabletop guitar and electronics\, which were edited into a single work. It explores a series of sound states to produce a shifting and evolving cluster of sound\, one that gradually expands its tonality and frequency range. As it does so\, it focuses on distilling the acoustic field down to its core textures of processed and re-processed sounds. The piece also explores a structural space that exists between live improvisation and studio composition. \nAbout the artist\nEd Osborn (b. 1964) works with many forms of electronic media including installation\, video\, sound\, and performance. 
He has presented his work at the San Francisco Museum of Modern Art\, the singuhr-hörgalerie (Berlin)\, the Berkeley Art Museum\, Artspace (Sydney)\, the Institute of Modern Art (Brisbane)\, the ZKM Center for Art and Media (Karlsruhe)\, Kiasma (Helsinki)\, MassMOCA (North Adams)\, the Yale University Art Gallery\, and the Sonic Arts Research Centre (Belfast). Osborn has received grants from the Guggenheim Foundation\, the Creative Work Fund\, and Arts International and been awarded residencies from the DAAD Artists-in-Berlin Program\, the Banff Centre for the Arts\, Elektronmusikstudion (Stockholm)\, STEIM (Amsterdam)\, and EMPAC (Troy\, NY). He is Professor of Visual Art and Music at Brown University. \n  \nAdrian Kleinlosen: Woody\nSound synthesis and spatialization generated with Csound\, voices with espeak-ng\, mixed in Pro Tools. Text based on a dialogue from a famous movie. \nAbout the artist\nAdrian Kleinlosen is a composer working with instrumental\, vocal\, and electronic music. His work focuses on structure\, rhythm\, and form\, often based on the superposition of independent musical layers and processes rather than linear development. Questions of temporal organization and formal articulation play a central role in both his acoustic and electronic works. In his electronic music\, Kleinlosen composes algorithmically\, using a range of software environments and programming languages. Computational tools are integral to his compositional thinking and are used to design musical structure\, temporal processes\, and formal relationships across different media. Kleinlosen holds degrees in composition and musicology and received a doctorate (Dr. phil.) for research on musical structure and form in contemporary music. In addition to his compositional work\, he has been active as an educator and lecturer in composition\, music theory\, and artistic research. 
\n  \nYu Linke: Zen to Hearth\nThis piece uses temple bells as its core sampling material\, creating an auditory journey from spiritual seclusion to facing reality. “Zen” represents spiritual seclusion\, while “Hearth” represents the mundane hustle and bustle of the world. The original intention is to escape from reality and construct an ideal world. At the beginning\, clear bell tones unfold over minimalist electronic textures\, depicting a secluded ideal world of Zen in which the creator briefly withdraws from the chaos of the mundane world. As the piece progresses\, the echoes of the bells gradually weaken\, and concrete electronic rhythms and low-frequency textures enter\, symbolizing that the ideal Zen space is gradually penetrated by the reality of the world. The two sound elements interweave to express the mutual integration\, rather than contradiction\, of ideals and reality. As the piece approaches its end\, the bells recede into the background\, blending with the rhythmic ticking of a realistic clock: the chaotic time elements of reality struggle within the atmosphere of the ideal world\, and disorder finally returns to calm in the temple bells. This highlights the transformation from “Zen” (spiritual seclusion) to “Hearth” (mundane hustle and bustle) – escape is not the ultimate answer; the reconciliation of ideals and reality is the focus of this auditory narrative. \nAbout the artist\nYu Linke\, born in August 2004\, is currently a third-year undergraduate student majoring in Music Sound Direction in the Composition Department of the Wuhan Conservatory of Music. In 2023\, she was admitted to the university with the top score in her major\, focusing on academic practice in composition and sound engineering. During her studies\, her research and practical work have spanned professional composition competitions and interdisciplinary technology contests. 
She has successively won the school-level composition award\, the first-class scholarship\, and two second prizes in provincial competitions\, demonstrating solid academic accumulation and outstanding innovative practical ability in the intersection of composition art and sound technology. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/listening-room-2-1/
LOCATION:Hamburg University of Technology\, Building A (A 0.14)\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Listening Room,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T110000
DTEND;TZID=Europe/Amsterdam:20260511T173000
DTSTAMP:20260429T121824
CREATED:20260421T183941Z
LAST-MODIFIED:20260428T102551Z
UID:10000183-1778497200-1778520600@icmc2026.ligeti-zentrum.de
SUMMARY:Listening Room 1
DESCRIPTION:Fixed Media | Program Overview\nPlight of the Monarch\nSalvatore Siriano \nAxis of Frost\nLiuyang Tan \nscanning\nKeisuke Yagisawa \nVeil-Audiovisual performance with real-time motion detection by Media Pipe\nYiting Shao \nEbow Supernova\nCristiano Riccardi \nInterwoven Realms: The Threefold Domain of Consciousness\nQing Ye and Yuxue Zhou \nOkinawa Blue Note\, Recalled\nYerim Han \nQuantum Sphere & Sound Sympathy — Composed for Guzheng and Quantum Computing\nWeijia Yang \nThe Orphic Shimmer onto the 192 Steps\nWanjun Yang \nTranscendence: Performance without Presence\nJinwoong Kim \nTriangulation\nTalia Amar \nWhispers That Are Heard\nJingfan Guo \nLabyrinthe Souriant (Smiling Labyrinth)\nShih-Lin Hung and Ju An Hsieh \nEchoes of the dial\nYunpeng Li \nAbout the pieces & artists\nSalvatore Siriano: Plight of the Monarch\nMonarch butterfly populations face ongoing and compounding threats driven by habitat loss\, pesticide exposure\, invasive plant species\, and continued encroachment on open land where milkweed once thrived. Since the mid-1990s\, eastern migratory monarch numbers have fallen to a fraction of their historical peaks; although recent seasons have shown modest recovery\, populations remain far below long-term averages. \nWithin this context\, the work traces key stages of the monarch lifecycle\, including overwintering in Mexico\, migration\, mating\, and reproduction\, using scientific data from the Monarch Joint Venture and the U.S. Geological Survey translated into sonic parameters through additive and FM synthesis. Long-term population trends shape the evolving texture\, dynamics\, and rhythmic behavior of the sound\, allowing ecological data to inform the temporal and spectral structure of the audio. \nTranslation also operates across media. 
Original filmed footage from the Fox River Valley in Illinois\, a recurring migratory and breeding landscape for eastern monarch populations\, is transformed through point-cloud and depth-camera processes. Human presence and natural environments are rendered as shifting\, particle-based forms whose fragmentation mirrors the precarity of monarch habitats\, situating ecological data within a perceptual and embodied frame rather than a purely representational one. \nThe work concludes with documentation of a community-based public artwork that distributes milkweed seeds to local residents. While the piece does not involve direct audience interaction\, this closing gesture reframes participation as shared responsibility. Rather than positioning environmental change solely at the level of policy\, the work emphasizes individual and community-scale actions\, such as reducing pesticide use\, planting milkweed and other native species\, and allowing greater biodiversity within managed landscapes\, as tangible responses to ongoing habitat loss. Because eastern North American monarch butterflies lay their eggs exclusively on milkweed\, these localized decisions directly shape their capacity to survive and reproduce. \nAbout the artist\nSalvatore Siriano is a Chicago-based composer\, audiovisual artist\, and educator whose work explores the relationship between sound\, image\, and the natural environment through digital media. His recent works have been presented at Sound/Image Festival (UK)\, SICBM (Brazil)\, Seoul International Computer Music Festival\, Art Alive Festival (Portugal)\, WOCMAT (Taiwan)\, NOIS//E (Italy)\, as well as ICMC\, NYCEMF\, and SEAMUS. He is full-time music faculty at Triton College. \n  \nLiuyang Tan: Axis of Frost\nAxis of Frost is the fourth movement of the electronic music suite Four Seasons Soundscapes. 
Drawing inspiration from the microscopic dynamics of ice and snow\, the composer employs wind chimes\, gears\, and metallic collisions as primary sound materials. Through the interweaving of pulsating rhythms and howling cadences\, the work evokes a frigid soundscape of crystallizing snowflakes\, swirling ice particles\, and surging glacial undercurrents. \nAbout the artist\nTan Liuyang\, a graduate student in the Music Engineering Department of the Sichuan Conservatory of Music\, studies electronic music composition with Professor Lu Minjie. He is a member of EMAC (Electroacoustic Music Association of China). His research focuses on inter-media composition of electroacoustic music\, and his works have won prizes and been selected for presentation at international music events\, including MUSICACOUSTICA-HANGZHOU\, ICMC (Ireland\, China\, South Korea)\, Earth Day Art Model\, the China Computational Art Conference\, the MA/IN Festival in Italy\, the International Electronic Music Competition (IEMC\, Shanghai)\, SEAMUS\, and the New York City Electroacoustic Music Festival. \n  \nKeisuke Yagisawa: scanning\nThis video work explores the human perception of visual images. In response to art critic Clement Greenberg’s thesis about the immediacy and autonomy of painting\, philosopher Vilém Flusser argues that a “scanning” process occurs when perceiving a two-dimensional work of art. This video work takes this thesis as its theme\, expressing the instantaneous phenomenon of a light bulb breaking as visual and acoustic variations. Max and Processing were used for the video and audio processing. \nAbout the artist\nKeisuke YAGISAWA is an audiovisual artist. He studied electronic music\, video\, and visual art at the Royal Academy of Art in The Hague (Netherlands) and Tokyo University of the Arts (Japan)\, and received a doctoral degree (DMA) from the Kunitachi College of Music in Japan. His works have been presented at international conferences and festivals including ICMC\, NYCEMF\, and SICEMF. 
Now he works at Tamagawa University as an assistant professor of electronic music and technology art. \n  \nYiting Shao: Veil-Audiovisual performance with real-time motion detection by Media Pipe\nThis work employs real-time motion capture of the dancer to generate audiovisual elements in parallel. It is inspired by The Painted Veil by W. Somerset Maugham.\nI. Time and again\, a veil is woven around oneself\, until the original self is forgotten.\nII. The moment the veil is lifted comes only after a long and painful struggle.\nIII. Through repeated loss and searching\, one is left to wonder—beneath the veil\, is this the true self? \nAbout the artist\nYiTing Shao\, born in Hebei\, China\, in 2000\, received a Bachelor’s degree in Violin Performance from the Communication University of Zhejiang in China and completed a Master’s degree in Composition at Dankook University in Korea. She is currently pursuing a Doctorate in Electro-acoustic and Instrumental Composition at Hanyang University. Her work was presented at the 2025 International Computer Music Conference (ICMC) in Boston. \nPerformer: Xinran Xu (Liaoyang\, Liaoning Province\, China). Xinran Xu is a dancer and choreographer trained in both street and contemporary dance. She graduated from Beijing Modern Music Academy and Dankook University. She won 1st Place at Hip Hop International (Beijing Regional) and received the Gold Prize in Contemporary Dance at the 6th C-DAK International Dance Competition (2025). She also competed in World of Dance\, Disco Connection\, and Danceholic. She has worked as a choreographer and performer in multiple showcase performances and appeared in the dance program “Ttechum (떼춤)”. Currently\, she is active in Korea as a member of Blue Dance Theater 2\, ISSUE Dance Crew\, and Sparky. Her work focuses on the fusion of street and contemporary dance. 
\n  \nCristiano Riccardi: Ebow Supernova\nThis audiovisual work proposes a phenomenological investigation of interior space through the sensible representation of a cosmic event: the unfolding of a supernova as both metaphor and device for the alteration of corporeal consciousness. This work proposes an experience of corporeal subtraction\, the progressive dissolution of the body’s boundaries\, the indifferentiation between subject and object. Through sonic rarefaction and luminous beams\, the work induces a meditative state that reconfigures the relationship between spectator and cosmic matter. This is not mere contemplation\, but rather an interpenetration with the intensities that constitute reality itself. The interior journey becomes indistinguishable from the journey through cosmic spaces: both experience the same phenomenon of rarefaction\, illumination\, and the attenuation of boundaries. On a phenomenological plane\, the supernova represents the unveiling of what is hidden—not as a remote event\, but as an intimate revelation of the luminosity that constitutes our own materiality. The listener experiences a form of dilated consciousness\, where the awareness of being part of a force greater than oneself becomes the corporeal experience of one’s own dissolution. The musical and visual rarefaction operates an ascesis from the domain of the speakable and the representable\, leaving pure intensity and openness toward the unsaid—a liminal space where the microcosm of interiority and the macrocosm of stars interpenetrate without boundaries. The composition is structured around twelve independent chromatic lines derived exclusively from samples of an ebowed guitar\, mapped into a custom-built synthesizer that preserves the instrument’s characteristic infinite sustain. 
Organized into four registral groups (three sopranos\, three altos\, three tenors\, three basses)\, the voices operate as parallel streams converging and diverging through close semitonal proximity\, generating dense harmonic clusters. Staggered entrances and overlapping durations create gradual transformations of harmonic density\, privileging timbral evolution over melodic narrative. The visual component translates each musical line into concentric circles responding in real time to amplitude variations\, creating a dynamic field of overlapping geometric forms that reflect sound-wave propagation and harmonic density. By foregrounding chromatic density\, sustained sonority\, and visual abstraction\, Ebow Supernova proposes an immersive experience in which individual elements dissolve into a unified perceptual field—interrogating the contemporary paradigm of corporeality and suggesting that the deepest contact with reality might paradoxically consist in the negation of the biological body: a journey toward the luminosity that traverses and transcends it. \nAbout the artist\nCristiano Riccardi is a multi-instrumentalist and sound designer with over 30 years of experience in live and studio practice. His recent work spans recording Fausto Razzi’s Memoria (2020) and Lontano (2021)\, performing Razzi’s scenic piece Protocolli (2023)\, arranging Stockhausen’s Tierkreis (2025\, awarded for interpretation)\, and contributing to an intermedial reworking of Stravinsky’s L’Histoire du Soldat. He is currently pursuing a Master’s in Electronic Music at the Conservatorio di Santa Cecilia in Rome\, focusing on electroacoustic composition and real-time performance. \n  \nQing Ye and Yuxue Zhou: Interwoven Realms: The Threefold Domain of Consciousness\n“Overlap: The Three Realms of Consciousness” is a multimedia musical work that explores the deep structures of the human psyche. 
The sonic dimension includes ASMR trigger sounds—such as wood\, metal\, and human oral noises—woven into an arch-shaped structure (ABCB’A’) that connects Freud’s three dimensions of the preconscious\, the unconscious\, and consciousness. Through TouchDesigner\, sound and visuals jointly construct a psychological landscape\, revealing the interlacing and transformation of multidimensional consciousness within dreams. The audience is drawn into a psychological space that transcends reality\, experiencing the flow and reflection of consciousness through the fusion of sound and form. \nAbout the artists\nQing Ye is a composer and doctoral student in Music Technology at Nanjing University of the Arts\, supervised by Professor Xuan Wang. She is a member of the Electronic Music Society of the Chinese Musicians’ Association and holds a Level-3 composer certification. Her works have been presented at international composition competitions including the Hangzhou International Electronic Music Festival and the Sibelius and Vivaldi International Music Competitions. Her practice focuses on computer-assisted composition and audiovisual creation. \nYuxue Zhou is a Ph.D. student in Musicology at the Communication University of China under the supervision of Professor Xuan Wang. Her creative work focuses on electronic and multimedia music. She has received awards at major composition competitions including MUSICACOUSTICA-BEIJING\, the Hangzhou International Electronic Music Festival\, and the Vivaldi International Composition Competition. Her works have been presented in national arts projects and international multimedia music events. \n  \nYerim Han: Okinawa Blue Note\, Recalled\nThis audiovisual fixed media work is based on recollected memories following a trip to Okinawa and a subsequent viewing of the film Okinawa Blue Note. 
Using sound materials extracted from travel videos\, the piece explores how memory—already shaped and idealized through recollection—is further manipulated and restructured over time. Conceived as a dive into memory\, the piece uses water as a medium that distorts and contains remembrance\, while layered and transformed sounds construct an emotional landscape of mediated recall. \nAbout the artist\nYerim Han (b. 1997\, South Korea) is a composer currently pursuing a Master’s degree in Composition at Hanyang University. Trained in contemporary acoustic music\, she is also actively engaged in MIDI-based composition\, electronic music\, and commercial music practices. Her work explores diverse musical languages across acoustic and digital media. \n  \nWeijia Yang: Quantum Sphere & Sound Sympathy — Composed for Guzheng and Quantum Computing\nThis work takes classic guzheng music as its creative foundation and relies on an independently developed quantum synthesizer interactive system to construct a cross-temporal dialogue between “classical artistic conception” and “quantum timbre”. The submitted version is an audio-visual hybrid developed from the TouchDesigner visual-effects port\, while the live performance version can incorporate real-time instrumental performance\, realizing a complete closed-loop performance of “gesture — quantum sound — instrument”. The guzheng melody is processed through quantum gate algorithms and transformed into electronic sounds with the characteristics of a quantum superposition state. Meanwhile\, a real-time visualization engine generates dynamic images of quantum Bloch spheres and particle flows\, ultimately constructing an immersive\, integrated audio-visual experience. 
Inspired by High Mountains and Flowing Water of the Shandong Guzheng School\, this work inherits its skeletal structure and core backbone notes\, and innovatively reshapes the musical form through quantum timbre\, presenting a transformation path from traditional art to future media art. \nAbout the artist\nWeijia Yang\, Ph.D.\, is a full-time postdoctoral researcher at the Shanghai Conservatory of Music. He currently holds multiple academic appointments\, including Excellent Innovation and Entrepreneurship Tutor for Shandong Province’s “Internet Plus” Program\, Member of the Institute of Electrical and Electronics Engineers (IEEE)\, Member of the Chinese Association for Artificial Intelligence (CAAI)\, Member of the Electronic Music Society of the Chinese Musicians Association\, and Reviewer for 8 A-class core journals indexed by SCI/SSCI (such as PLOS ONE and Frontiers in Psychology). He has published 8 core papers indexed by SCI\, SSCI\, EI\, Scopus\, and Peking University Core (PKU Core) of China\, as well as numerous non-core journal papers\, obtained 3 Software Copyrights\, and served as Principal Investigator or Key Participant in 12 research projects at national\, provincial\, and municipal levels. He has mentored 6 national and provincial A-class innovation and entrepreneurship projects that received funding and awards. Additionally\, he has composed over 10 representative electronic music works (e.g.\, Nine-Colored Deer)\, which have been released on major music platforms; his works have won multiple awards and been performed at numerous international exhibitions and competitions\, both domestically and internationally\, such as ICMC (International Computer Music Conference) and WOCMAT (World Conference for Chinese Composers). 
\n  \nWanjun Yang: The Orphic Shimmer onto the 192 Steps\n“The Orphic Shimmer onto the 192 Steps” is an interactive live-coding audio-visual performance that explores the role of art as a “harmonizing force” within the turbulent landscape of contemporary civilization. The work takes its title from the 192 steps of the Odessa Staircase\, abstracting this historically and cinematically significant site into a topological space of tension and dispersion. By invoking the myth of Orpheus – the figure who restored order through music – the piece builds a philosophical bridge between classical humanitarian ideals and modern algorithmic logic. \nTechnical Framework \nThe work is built on a sophisticated integration of live coding\, modular synthesis\, and generative visuals:\n* Audio Synthesis: Primary sound design is executed in VCV Rack\, employing a hybrid of subtractive\, wavetable\, and granular synthesis. A foundational layer of algorithmically generated Shepard Tones creates an auditory illusion of “infinite ascent\,” symbolizing the cyclical pain and progress of history.\n* Live Interaction: Sonic Pi serves as the central engine for real-time algorithmic restructuring. The performer uses MIDI controllers to manipulate the density and spatialization of the sound field\, facilitating a dialogue between rigorous code and human intuition.\n* Visual Generative Design: Developed in Processing\, the visual layer utilizes the OSC protocol for sample-level synchronization. Spectral energy and transient parameters from the audio drive fluid\, geometric “shimmers” that map onto the metaphorical 192 steps. \nAbout the artist\nWanjun YANG is an engineer\, programmer\, sound designer\, researcher\, and electronic musician. He is now an associate professor in the Music Engineering Department of the Sichuan Conservatory of Music. For the past 26 years he has lived in Chengdu\, Sichuan Province\, in southern China\, and taught at the Sichuan Conservatory of Music. 
His research and creative interests lie in Acoustics and Psychoacoustics\, Sound Design\, Software Development\, New Media Art\, and Multimedia Design. In 2011\, he attended the EMS Annual conference in New York\, followed by participation in an electronic music exchange at the University of Oregon in 2012; in 2017\, his work was selected for ICSC 2017 in Nagoya and his paper for ICMC 2017 in Shanghai; in 2018\, he served as a Concert Reviewer for ICMC 2018; in 2019\, his pieces were selected and performed at ICMC 2019 and NYCEMF 2019 in New York\, alongside participation in another electronic music exchange at the University of Oregon and visits to CCRMA at Stanford University and to UCLA; in 2020\, his works were selected and performed at the NYCEMF 2020 Virtual Online Festival; from 2021 to 2025\, his compositions were regularly selected and performed at the ICMC\, NYCEMF\, and ICSC international conferences. He has also been a long-term reviewer for ICMC\, IEMC\, and NCDA. \n  \nJinwoong Kim: Transcendence: Performance without Presence\nTranscendence is an audio-visual performance interface that reimagines the relationship between performer interaction and algorithmic autonomy. The system utilizes a gamified “turret-defense” mechanic as a metaphor for stochastic sound generation. The user places “turrets” on a grid\, which autonomously track and engage moving targets based on proximity algorithms. This interaction serves as a direct translation of spatial logic into sound: distance defines intensity\, angle determines stereo panning\, and target properties dictate pitch and timbre\, creating a real-time sonification of digital conflict. \nA core innovation of Transcendence lies in its distinct “Performance Mode.” In traditional Human-Computer Interaction (HCI) for music\, the mouse cursor serves as a constant visual anchor\, reminding the user of the computer’s presence as a tool. In this work\, the cursor is deliberately rendered invisible during performance. 
While the performer retains control over the grid\, the visual representation of their “hand” is removed. \nThis design choice—“Performance without Presence”—dissolves the barrier between the creator and the creation. It shifts the cognitive load from operating a UI to immersing oneself in the audio-visual feedback loop\, allowing the performer to become a “ghost in the machine.” The result is a self-generating\, yet controllable\, polyphonic soundscape where the interface disappears\, leaving only the pure translation of logic into art. \nAbout the artist\nJinwoong Kim is a South Korean composer\, musician\, and media artist. He received his Ph.D. in Intermedia Arts from Tokyo University of the Arts\, where he studied under Professor Kiyoshi Furukawa. His creative practice spans a wide range of fields\, from contemporary computer music to interactive media installations\, with a focus on integrating compositional methodologies with emerging technologies and cross-disciplinary thought. Drawing upon a diverse background in music\, visual art\, engineering\, and the natural sciences\, he has developed custom software systems—including BODIC and KCAC—to explore new forms of audiovisual expression. He is currently a full-time faculty member in the Digital Media Design major within the Global Elite Division at Yonsei University\, where he teaches courses on creative coding\, computational design\, and media-based artistic practices. \n  \nTalia Amar: Triangulation\n“Triangulation” uses three different electronic music techniques that serve the same goal: to expand the possibilities of the acoustic piano. Each of these three techniques explores a different aspect of human-computer interaction. The pianist controls the electronics from an iPad\, choosing when to switch between the three patches\, and the pianist’s relationship with the computer changes in each patch. 
In the first patch the computer “listens” to the piano and reacts to it by performing the same notes with modifications such as quarter-tone modulations\, reversing\, and stretching. The electronics in the second patch are pre-recorded and multiply the piano\, with the effect that it sounds as if there were many pianos performing at the same time. In the third patch the electronics record the piano performance and play it back with different effects\, building up an aleatoric wall of pianos that is not possible to perform acoustically. \nAbout the artist\nDr. Talia Amar is the recipient of many international awards\, including the prestigious Prime Minister’s Award 2018\, the Acum prize for “best piece of the year” 2022\, the Acum award 2019\, the Rosenblum Prize for Promising Young Artist 2016 from the Tel Aviv Municipality\, and the Klon Award for young composers granted by the Israeli Composers League. Recently she was the winner of The Next Voice – a call for scores from Israeli composers. Her piece For Orchestra I was unanimously selected from an incredible 152 submissions and will be performed by the Israel Philharmonic under the baton of Lahav Shani in March 2026 in Tel Aviv\, Haifa\, and Jerusalem. She was selected by the famous violinist Renaud Capuçon to participate in the Festival New Horizons d’Aix en Provence 2022\, where her piece\, commissioned especially for the festival\, was performed. In 2022 her piece “Labyrinth” was commissioned and performed at Festival Présences by Radio France in Paris. She has been selected to represent Israel at festivals such as ISCM World New Music in Vancouver\, the ECCO Festival in Brussels\, the Asian Composers League Festival in Taiwan\, ICMC in Seoul\, and SMC in Austria.\nIn 2017\, Talia joined the composition faculty at the Jerusalem Academy of Music and Dance in Israel\, where she is also the Head of Technology and Innovation. 
She is also a council member of the Israeli Composers League and the electronics performer of the Meitar Ensemble. \n  \nJingfan Guo: Whispers That Are Heard\nComposed for Arduino and Max/MSP\, this work employs a multi-sensor interface as its primary vehicle. It centers on two core sonic materials: whispering voices and African percussion. The former signifies the individual and the secret\, while the latter points to the collective and its driving force. The work aims to superimpose these elements within a single sound field\, erasing the boundary between the individual and the collective. Amidst sonic entanglement and compression\, intimate whispers are deprived of their original space of existence\, alienated into mere components of the rhythm. This is\, at once\, an act of listening to secrets and a scrutiny of the clamor. \nAbout the artist\nJingfan Guo\, a native of Tai’an\, Shandong Province\, China\, is a member of the Electronic Music Society of the Chinese Musicians Association (EMAC) and a postgraduate student in Computer Composition at Wuhan Conservatory of Music\, class of 2024\, under the guidance of Professor Li Pengyun. His main research interests include electroacoustic music\, sensor interaction\, and Kyma sound design. His major works include “Mute Water” (electroacoustic music)\, “Liminal Space” (mixed music)\, “Dissolving Voice” (for Kyma and computer)\, and “Whispers That Are Heard” (for Arduino and sensors). \n  \nShih-Lin Hung and Ju-An Hsieh: Labyrinthe Souriant (Smiling Labyrinth)\n“Labyrinthe Souriant” (Smiling Labyrinth) is an interdisciplinary electroacoustic work exploring the fluid boundary between visual art and sonic translation. The piece is based on a hand-drawn graphic score created by a visual artist\, who utilizes traditional staff paper as a canvas for organic\, labyrinthine line-work and anthropomorphic silhouettes. The composition utilizes a performance-led approach to sound design. 
Using the graphic score as a primary visual stimulus\, the composer engaged in a one-take improvisation session via MIDI controllers mapped to a customized Ableton Live environment. This method ensures that the temporal flow of the music maintains a direct\, visceral connection to the visual trajectories of the score. The vocal samples were processed through real-time DSP chains\, where the nuances of the performance (velocity\, pressure\, and timing) were translated into dynamic spectral shifts and spatial movement\, reflecting the ‘Smiling Labyrinth’s’ intricate and unpredictable nature. \nAbout the artists\nShih-Lin Hung holds a B.A. from the National University of Tainan and an M.A. from National Yang Ming Chiao Tung University. Initially trained in Western classical composition\, his recent work explores electroacoustic aesthetics within the lineage of French musique concrète. His creative practice focuses on uncovering alternative sonic possibilities in daily sounds that are often ignored or taken for granted. \nJu-An Hsieh graduated from the Gerrit Rietveld Academie in Amsterdam\, the Netherlands\, and works primarily with images. In 2024\, their exhibition The Theatre explored the impact of colonial regimes on Taiwan’s ecology and the power relations between humans and nature. As their practice evolves\, embodied memories\, sensory experiences\, and dreams connected to nature have gradually become central themes in their work. \n  \nYunpeng Li: Echoes of the dial\nThis work uses the “outdated” communication technology signal—the telephone dial tone—as its core material. Through sampling and sound processing of the DTMF tones produced during telephone dialing\, it explores the dialectical relationship between auditory memory and the disappearance of matter within the context of technological accelerationism. 
In today’s world where information transmission approaches zero latency\, how can those echoes that once carried the desire for communication construct a new aesthetic dimension amidst the abandoned ruins? \nAbout the artist\nYunpeng Li\, Ph.D.\, is Associate Professor\, Master’s Supervisor\, and Director of the Art & Science Teaching and Research Section at the Wuhan Conservatory of Music. His main research and teaching focus is electronic music composition. His works have been selected for the International Computer Music Conference (ICMC) multiple times. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/listening-room-1-1/
LOCATION:Hamburg University of Technology\, Building A (A 0.18)\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Listening Room,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T110000
DTEND;TZID=Europe/Amsterdam:20260511T180000
DTSTAMP:20260429T121824
CREATED:20260421T093948Z
LAST-MODIFIED:20260421T095126Z
UID:10000142-1778497200-1778522400@icmc2026.ligeti-zentrum.de
SUMMARY:Installation | Miles Friday: "Breathwork"
DESCRIPTION:Breathwork is a twelve-channel sound installation where loudspeakers become breathing bodies. Each loudspeaker is encased in an inflatable bag that swells and contracts in response to low-frequency drones\, forming a slow\, ever-shifting breath-like choreography. \nWithin this field of motion\, clouds of layered just intonation partials drift in and out of perception\, while low frequencies create a base of acoustic beating and Shepard tone-esque glissandos. By transforming the loudspeaker into a pneumatic pump\, Breathwork reimagines it as a tool for visual synthesis\, where vibrations in the air animate inflatables as kinetic sculptures—synthetic lungs whose movements create polyrhythms that can be both seen and heard. \nAll audio is generated live via SuperCollider and runs on two Bela Mini Multichannel Expanders. \nAbout the artist\nMiles Jefferson Friday is an artist who focuses on sound as his primary medium. Building new instruments\, composing music\, designing sound sculptures\, and creating immersive installations\, his practice invites us to reconsider how we hear and listen. Miles is currently an Assistant Professor of Digital Music at the University of Texas at San Antonio; he holds a DMA and MFA from Cornell University and an MA from the Eastman School of Music. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/installation-miles-friday-breathwork-1-2/
LOCATION:Hamburg University of Technology\, Building A (Foyer)\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Installation
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T110000
DTEND;TZID=Europe/Amsterdam:20260511T180000
DTSTAMP:20260429T121824
CREATED:20260421T095644Z
LAST-MODIFIED:20260421T095644Z
UID:10000144-1778497200-1778522400@icmc2026.ligeti-zentrum.de
SUMMARY:Installation | Alessandro Anatrini: "Faulty Oracle"
DESCRIPTION:Faulty Oracle is an adaptive audiovisual installation that conjures a gloriously unreliable divinatory machine. Visitors pose questions through body language: gestures\, movements\, and postures\, which the system interprets\, misreads\, and willfully transforms. In return\, the oracle delivers cryptic animated answers\, flickering between epiphany\, nonsense\, and hallucination. Voices stretch\, fracture\, and echo over visuals that shimmer with unstable symbols\, offering responses that feel both prophetic and utterly broken.\nThe dialogue is a masterclass in miscommunication: questions are misinterpreted\, wrong ones are amplified\, and answers rarely align with intent. The oracle becomes a mirror of ambiguity\, where meaning emerges from error\, chance\, and interpretation rather than clarity.\nBy shifting interaction from language to the body\, Faulty Oracle gleefully dismantles any expectation of precision in human-machine exchange. It invites participants into a space of playful fallibility\, reframing prophecy as a dance of uncertainty and imagination. \nAbout the artist\nAlessandro Anatrini (1983) is a composer\, new media artist\, and developer with a background in musicology\, composition\, and electronic music. He completed an M.A. in multimedia composition at HfMT Hamburg and a PhD in artistic research focused on machine learning in adaptive multimedia environments. His work has been presented by Ensemble Intercontemporain\, Klangforum Wien\, and the Symphoniker Hamburg\, and at festivals including Manifeste\, HCMF\, Impuls\, and Blurred Edges. He is frequently invited to speak at conferences such as SMC\, TENOR\, and AIMC\, and collaborates with institutions such as UdK Berlin and the Digital Stage Foundation. He has lectured on machine learning topics at HfMT since 2018 and\, since 2024\, is Professor of Multimedia at the Conservatorio of Piacenza (Italy). \n 
URL:http://icmc2026.ligeti-zentrum.de/event/installation-alessandro-anatrini-faulty-oracle-1/
LOCATION:Hamburg University of Technology\, Building A\, Videospace I\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Installation
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T110000
DTEND;TZID=Europe/Amsterdam:20260511T180000
DTSTAMP:20260429T121824
CREATED:20260421T100042Z
LAST-MODIFIED:20260423T171602Z
UID:10000153-1778497200-1778522400@icmc2026.ligeti-zentrum.de
SUMMARY:Installation | Dahye Seo: "Unscored"
DESCRIPTION:A camera installed on a balcony captures the live sky\, converting it into generative sound in real time. The trajectories of birds crossing the frame are translated into piano tones\, forming unpredictable melodies. The time spent watching the sky—waiting for the next sound—becomes part of the work. \nAbout the artist\nDahye Seo (b. 1985\, South Korea) is a multimedia artist based in Berlin. She explores the movement of living organisms and environmental phenomena through sound\, data\, and interactive installations\, creating immersive experiences that bridge perception and natural patterns. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/installation-dahye-seo-unscored-1/
LOCATION:Hamburg University of Technology\, Building A\, Videospace II\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Installation
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T110000
DTEND;TZID=Europe/Amsterdam:20260511T180000
DTSTAMP:20260429T121824
CREATED:20260421T191718Z
LAST-MODIFIED:20260427T105322Z
UID:10000193-1778497200-1778522400@icmc2026.ligeti-zentrum.de
SUMMARY:Installation | Bill Parod & Teresa Parod: "The Elephants of Trianon"
DESCRIPTION:The Elephants of Trianon is an augmented-reality audiovisual installation that extends a series of public murals into an interactive spatial sound environment. The original work consists of ten adjacent murals painted on garage doors in a public alley in Evanston\, Illinois\, USA. These form part of a larger international body of public work by the artist\, Teresa Parod. For the International Computer Music Conference\, the project is presented as a free-standing installation at TU Hamburg-Harburg using large construction-fence banners which approach the full-size of the garage door murals. \nUsing a custom mobile app\, visitors’ devices recognize each mural and anchor a corresponding three-dimensional audiovisual scene in space. As visitors move through the installation and activate additional murals\, their scenes accumulate and blend\, creating a continuously evolving environment\, rather than a sequence of isolated works. The installation therefore functions as a spatial composition shaped by listener movement\, attention\, and duration of engagement. \nThe soundscape combines field recordings made in Bali\, New Orleans\, and Chicago with instrumental layers and voices in ten languages. Animated three-dimensional forms—birds\, bats\, dogs\, elephants\, rabbits\, and celestial figures—appear among the murals\, along with subtle video textures and custom shaders that bring painted elements into motion. Some virtual elements are not confined to a single mural but move throughout the installation space\, responding to the physical layout and dimensions of the exhibition environment. \nThe project suggests a scalable model for mobile\, spatially responsive sound installations in galleries and public spaces. 
The software framework and mobile application used in The Elephants of Trianon have been developed through prior public installations and gallery presentations and are designed to function across a range of exhibition formats\, from outdoor murals to indoor projection and free-standing display structures. The ICMC installation demonstrates how augmented reality can be used not only as a visual medium\, but also as a platform for spatial audio composition and listener-driven musical form. \nAbout the artists\nBill Parod (b. 1954\, Chicago\, USA) is a composer\, improviser (violin)\, and software developer who works on interactive spatial music\, audio poetry\, image-reactive augmented reality\, and living-music mobile apps. His work has appeared in Chicago at Elastic Arts\, Experimental Sound Studio\, and the Jay Pritzker Pavilion; at Burning Man\, Nevada\, USA; at New York University\, NYC; and at IRCAM in Paris\, France. \nTeresa Parod (b. 1957\, Alton\, IL\, USA) paints vibrant\, luminous oil paintings and murals\, celebrating life through dichotomies such as light and shadow\, warm and cool\, and complementary colors. Her landscapes invoke mythological destinations\, inviting the viewer to journey there.\nShe has created over one hundred works of public art in the United States\, Cuba\, Bali\, Nepal\, and Istanbul. In Cuba\, she was honored to work with mosaicist José Fuster\, whose work inspired her creation of art in unexpected and underused spaces.\nShe lives in Evanston\, IL\, with her husband\, Bill Parod. Together they have collaborated on several exhibitions\, performances\, and multichannel visual and musical artworks.\nShe also teaches art history at Oakton College\, rides an annual century bike ride\, and studies and performs classical Indonesian dance. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/installation-bill-parod-teresa-parod-the-elephants-of-trianon-1/
LOCATION:Hamburg University of Technology\, Outdoor Area II\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Installation
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T110000
DTEND;TZID=Europe/Amsterdam:20260511T180000
DTSTAMP:20260429T121824
CREATED:20260421T192918Z
LAST-MODIFIED:20260422T141802Z
UID:10000197-1778497200-1778522400@icmc2026.ligeti-zentrum.de
SUMMARY:Installation | Finlay Graham: an egg with fouled neurons
DESCRIPTION:an egg with fouled neurons is a variable-duration installative performance on a synthesizer fully coded by the composer in Max/MSP. The work utilizes a post-tonal framework to transform large harmonic sets\, preserving the fidelity of harmonic intervals while transforming harmonic identity and allowing for movement through a complex harmonic pattern. Over the course of a one- to four-hour performance\, this unbound harmony is explored within a spatialized environment. “The egg lacks organs and cellular structure\, but it could be alive. When vibrated\, it would notice each simplified frequency. If you apply equal pressure to all sides\, it doesn’t break\, but moments of concentration are dangerous. If permeated and submerged\, it’s unclear where the egg begins\, and what is inside.”\nThis work is structurally built around the frequency 440 Hz (A4)\, but temporally it moves through eight sections: \n1. the breath\n2. Subconscious initiation\n3. Embodiment/mirroring\n4. silence\n5. onset\n6. liberation\n7. oneness and death\n8. contraction \nAbout the artist\nFinlay Graham (b. 2005\, he/him) is an American composer and educator based in Asheville\, North Carolina and Oberlin\, Ohio whose work is inspired by nature\, spirituality\, emotion\, and intimacy.\nGraham is currently enrolled at Oberlin College and Conservatory\, studying Music Composition and Neuroscience with minors in Music and Cognition and TIMARA (Technology in Music and Related Arts). He currently studies composition under Jesse Jones. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/installation-finlay-graham-an-egg-with-fouled-neurons/
LOCATION:Hamburg University of Technology\, Building N (Foyer)\, Eißendorfer Straße 40\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Installation
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T110000
DTEND;TZID=Europe/Amsterdam:20260511T180000
DTSTAMP:20260429T121824
CREATED:20260423T170857Z
LAST-MODIFIED:20260423T171341Z
UID:10000230-1778497200-1778522400@icmc2026.ligeti-zentrum.de
SUMMARY:Installation | Zhao Jiajing: "Omniopticon"
DESCRIPTION:Omniopticon invites visitors to step inside a constantly shifting field of sound. Scattered throughout the space are wireless loudspeakers. They are not fixed in place: you are free to pick them up\, move them\, tilt them\, rotate them\, or leave them somewhere new. Every gesture reshapes the acoustic environment\, allowing the installation to unfold differently with each visitor’s presence.\nThe work takes its name from the idea of the omniopticon: a condition in which everyone can observe and be observed\, a feature of today’s social-media-saturated world. Rather than presenting this as a system of surveillance\, Omniopticon turns it into a shared\, exploratory environment. Visibility becomes audibility\, and moving a loudspeaker becomes a way of revealing or obscuring sonic perspectives.\nAs the speakers change position\, the sound re-forms across the room. What you hear is shaped not only by the architecture\, but also by the placement of the speakers and by the choices of those around you. No two moments are alike. The installation becomes a collective instrument whose behaviour reflects the actions and curiosity of its participants.\nYou are invited to explore. Try moving a single speaker or coordinating with others. Follow a sound across the room\, or gather several speakers into a cluster. Listen to how the sonic space expands\, fragments or gathers as you intervene. Notice how your movements influence – and are influenced by – other people in the space.\nIn Omniopticon\, space is not a backdrop but the material of the artwork itself: a social\, physical and acoustic terrain that shifts with every action. It is both an immersive environment and a gentle social experiment\, prompting reflection on how we navigate shared spaces\, how we shape them\, and how they in turn shape us. Your participation completes the piece. 
\nAbout the artist\nZhao Jiajing (赵嘉旌; family name–given name) is a London-based electroacoustic composer and sound artist from Beijing. \nZhao’s practice spans acousmatic music\, sound installation\, performance\, and new media. Since 2019\, he has focused on spatial sound\, creating multichannel compositions and installations. His work explores questions of time\, technological mediation\, and our evolving relationship with both the digital and natural worlds. He frequently collaborates across disciplines\, working with practitioners and researchers in visual art\, theatre\, science\, and technology. \nZhao’s work has been featured at major international venues and festivals such as Ars Electronica (AT)\, IRCAM (FR)\, ZKM Karlsruhe (DE)\, ICMC (Int’l)\, SICMF (KR)\, GMEM (FR) and ORF musikprotokoll (AT). He has received recognitions and commissions from the ISCM British Section\, Musica Nova\, Aesthetica × Audible\, the Shanghai International Arts Festival\, The Engine Room\, Musicacoustica\, Royal College of Art × LG Display\, IEM Graz\, among others. \nZhao holds an MA in Information Experience Design from the Royal College of Art and is currently pursuing a PhD at the University of the Arts London (CRiSAP)\, supervised by Adam Stanović. He is also a mentor and visiting lecturer for the MA in Designing Audio Experiences at University College London. \n  \n***\nOmniopticon uses the Snappi speaker system: a low-cost wireless multichannel system developed by Marcus Weseloh and Jacob Sello at the ligeti center’s Innolab.
URL:http://icmc2026.ligeti-zentrum.de/event/installation-zhao-jiajing-omniopticon-1/
LOCATION:Hamburg University of Technology\, Building A\, Videospace III (A 3.35.1)\, Am Schwarzenbergcampus 1\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Installation
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T120000
DTEND;TZID=Europe/Amsterdam:20260511T200000
DTSTAMP:20260429T121824
CREATED:20260421T092055Z
LAST-MODIFIED:20260421T092055Z
UID:10000133-1778500800-1778529600@icmc2026.ligeti-zentrum.de
SUMMARY:Installation | Windisch\, Peng & v. Coler: "MESH"
DESCRIPTION:MËSH is an immersive\, networked music and media system that blends interactive installation with live performance. Developed since 2019\, MËSH uses a distributed array of interactive nodes to create a responsive audiovisual environment. Depending on the venue\, installations range from 4 to 16 interconnected nodes communicating over a wireless local network. \nEach node processes real-time movement captured by its camera using custom computer-vision software. These motion signals drive local sound generation in SuperCollider and trigger sample playback drawn from a curated library of field recordings and media fragments. Sounds are spatialized across the network\, forming a shared\, evolving soundscape shaped directly by audience interaction. \nMËSH also functions as a performance instrument: synchronized graphical scores are displayed across all nodes\, enabling musicians to perform within the same reactive ecosystem. This latest iteration continues MËSH’s exploration of distributed creativity and collaborative sensing. \nAbout the artists\nHenry Windisch is a graduate student at Georgia Tech. His work focuses on computer music systems\, audio software development\, and collaborative tools for performance and education. He contributes to the design and implementation of networked performance platforms and supports projects involving SuperCollider\, audio networking\, computer vision\, and interactive media. Previously\, he studied electrical engineering at Washington University in St. Louis.\nTristan Peng is a PhD student at Georgia Tech exploring interaction design\, spatial audio\, and sonification; he previously studied at CCRMA at Stanford University. His work aims to create accessible\, artful\, and interactive ways for people to experience sound. 
His current projects investigate how data can become a medium for participation and how immersive audio spaces can evoke emotion and understanding in ways that traditional visualizations cannot.\nHenrik von Coler is a musician and researcher. In 2024 he founded the Lab for Interaction and Immersion at Georgia Tech. Before that he was the director of the Electronic Music Studio at TU Berlin. Henrik’s research explores interface design\, algorithms for sound generation and experimental concepts for composition and performance. In 2017 he founded the Electronic Orchestra Charlottenburg to explore music interaction on immersive loudspeaker systems. He has since worked on ways to enhance how musicians and audiences experience spatial music and sound art. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/installation-windisch-peng-v-coler-mesh/
LOCATION:Stellwerk Hamburg\, Hannoversche Straße 85\, Hamburg\, 21079\, Germany
CATEGORIES:11-05,Installation
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T120000
DTEND;TZID=Europe/Amsterdam:20260511T200000
DTSTAMP:20260429T121824
CREATED:20260421T093204Z
LAST-MODIFIED:20260422T144350Z
UID:10000135-1778500800-1778529600@icmc2026.ligeti-zentrum.de
SUMMARY:Installation | Adriano C. Monteiro & Rafaela B. Pires: "DE/RE:GENERATION"
DESCRIPTION:De/Re:Generation stems from a speculative question: would cicadas sense acoustic information during the up to 17 years they live underground\, before emerging from the soil for a brief adult phase marked by intense acoustic display? From this perspective\, the installation approaches sound not only as an auditory phenomenon\, but as something sensed through the body\, making vibration and tactile perception central to the experience.\nAt the core of the work are rounded\, shell-like sculptures molded from biodegradable cassava-starch bioplastics. These forms visually echo cicada nymphs and exuviae: fragile\, hollow exoskeletons that signal absence\, transformation\, and continuation. Like the remnants left after metamorphosis that nourish other species\, the installation’s materials participate in an ongoing process of regeneration: they deform over time\, respond to humidity and dryness\, and become alternately more rigid or more flexible\, like a living skin in dialogue with the environment. Integrated as touch interfaces\, the bioplastic sculptures function as tactile sensing surfaces that mediate the interaction with the sound environment formed by vibrating surfaces and low-frequency sound fields that allude to the cicada’s aboveground and underground sonic worlds\, blurring boundaries between tactile and auditory modes of perception\, organic material and inorganic technological systems. \nAbout the artists\nAdriano Monteiro is a music composer and researcher. His work focuses on the convergence of art\, science\, and technology in creative processes\, performance\, and the analysis of music. He is the author of electroacoustic and intermedia works in different media and formats\, such as acousmatic\, live electronics\, audiovisual performances and installations\, and network and telematic music\, as well as author and co-author of several articles concerning creative processes in music and musical analysis. 
Adriano Monteiro is an associate professor of Music Composition at the School of Music and Scenic Arts of the Federal University of Goiás (EMAC/UFG). He studied music composition at the University of Campinas (UNICAMP) and holds a PhD in music from the same institution. \nRafaela Blanch Pires is a designer and professor in the Scenic Arts department at the Federal University of Goiás (Brazil). Her background is in fashion design\, with an MA in “Fashion and Textiles” and a PhD in “Design and Architecture” (São Paulo University). Between 2015 and 2016 she worked as a visiting doctoral student at the “Wearable Senses Lab” at the Technical University of Eindhoven (the Netherlands). She experiments with the areas of bio-materials\, digital fabrication\, special effects make-up\, costume design\, and electronics. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/installation-adriano-c-monteiro-rafaela-b-pires-de-regeneration-1/
LOCATION:Stellwerk Hamburg (Lounge)\, Hannoversche Str. 85\, Hamburg\, 21079\, Germany
CATEGORIES:11-05,Installation
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T133000
DTEND;TZID=Europe/Amsterdam:20260511T150000
DTSTAMP:20260429T121824
CREATED:20260421T084731Z
LAST-MODIFIED:20260427T133301Z
UID:10000077-1778506200-1778511600@icmc2026.ligeti-zentrum.de
SUMMARY:Lunch Concert 1A
DESCRIPTION:After the Opening Concert of ICMC HAMBURG 2026\, the regular music program begins today. This first Lunch Concert offers an insight into the current international computer music scene. What makes this event special is the personal presence of the artists: the composers are either on stage themselves or have brought the musicians they wrote for with them to Hamburg.\nIt is a program of short distances between idea and sound. The works demonstrate how diverse collaboration between humans and technology can be today—from the classical solo clarinet to interactive formats. \n  \nProgram Overview\nTyche\nSever Tipei \nAIKYAM\nClaudia Robles Angel \nHOTPO\nMichael Edwards \nTessellae\nRodrigo Cadiz and Thierry Miroglio \nThe Center of the Universe\nSunhuimei Xia \n  \nAbout the pieces & artists\nSever Tipei: Tyche \nTyche for Bb clarinet and fixed media is a composition generated with original software for Computer-assisted (algorithmic) Composition and sound design developed by the composer and his collaborators.\nDivided into four main sections of 2-3-1-2 minutes\, the work utilizes stochastic distributions\, Markov chains\, sieves\, and Just Intonation\, as well as detailed control of spectra\, FM transients\, spatialization\, and reverberation. A basic framework of precise proportions and deterministic procedures is complemented by random details governed by Tyche\, the goddess of fortune\, chance\, providence\, and fate. \nAbout the artist\nA composer and a pianist\, Sever Tipei was born in Bucharest\, Romania\, and immigrated to the United States in 1972. He holds degrees in composition from the University of Michigan (DMA) and in piano performance from the Bucharest Conservatory (Diploma). Tipei taught at Chicago Musical College of Roosevelt University and\, between 1978 and 2021\, at the University of Illinois at Urbana-Champaign School of Music. 
After retirement\, Tipei has continued to teach in the School of Information Sciences\, where he also directs the “James W. Beauchamp Computer Music Project”. He is also a National Center for Supercomputing Applications Faculty Affiliate. Between 1993 and 2003 Tipei was a Visiting Scientist at Argonne National Laboratory\, where he worked on the sonification of complex scientific data.\nMost of his compositions were produced with software he designed: MP1\, a computer-assisted composition program first used in 1973; DIASS\, for sound synthesis; and M4CAVE\, software for the visualization of music in an immersive virtual environment. More recently\, Tipei and his collaborators have developed DISSCO\, software that unifies computer-assisted (algorithmic) composition and (additive) sound synthesis into a seamless process. His compositions have been performed in the US\, Australia\, Brazil\, France\, Germany\, Italy\, Portugal\, Romania\, Spain\, the United Kingdom\, and Taiwan. \n  \nClaudia Robles Angel: AIKYAM \nAIKYAM is a real-time surround sound work for 1 performer and 5 to 6 participants (audience)\, inspired by Kuramoto’s mathematical model of spontaneous order\, or synchronisation\, in nature\, e.g. fireflies\, heart rates\, or humans clapping their hands together. The term AIKYAM is based on the Sanskrit word ऐक्यम\, meaning unity or harmony. \nAbout the artist\nBorn in Bogotá (Colombia) and living in Cologne (Germany)\, she is a composer and sound and new media artist whose work covers different aspects of visual and sound art\, extending from acousmatic and audio-visual compositions to interactive performances/installations using biomedical signals and AI (Artificial Intelligence).\nShe has been Artist-in-residence at several outstanding institutions around the globe. In 2022 she was awarded an honorary mention by the Giga-Hertz Award at ZKM Center.\nHer work has been performed and exhibited worldwide\, e.g. 
at ZKM\, ISEA; KIBLA Centre Maribor\, CAMP Festival – 55th Venice Biennale Salon Suisse\, ICMC; New York City Electroacoustic Music Festival; NIME; STEIM; Harvestworks Digital Arts Center NYC\, Heroines of Sound Berlin; Audio Art Festival Cracow; MADATAC Madrid; Athens Digital Art Festival ADAF\, CMMAS Morelia; Beast FEaST Birmingham; ICST ZHdK Zurich; RE:SOUND Aalborg; Electric Spring Festival Huddersfield; AI Biennal Essen; at the Centre for International Light Art Unna and more recently at the Acht Brücken Festival Cologne and at the Philharmonie Essen. \nwww.claudearobles.de \n  \nMichael Edwards: HOTPO \nHinting at something a little more coarse\, the title HOTPO is in fact a completely innocent reference to the Collatz Conjecture. This mathematical proposition\, also known by other names\, refers to a succession of numbers called the hailstone sequence (or wondrous numbers)\, because their values usually ascend and descend like hailstones in a cloud.\nThough a mathematical proof of the conjecture remains elusive\, the proposition itself is very simple: take any positive whole number; if it is even\, divide it by two; if it is odd\, multiply it by three and add one (hence the acronym Half Or Three Plus One: HOTPO); repeat the process with the result and you will find that no matter which number begins the process\, you will always\, given enough iterations\, reach one.\nThe algorithm is easy to programme and experiment with\, and it produces rather nice images when given different starting numbers and plotted over various iterations. I used the algorithm in this piece to generate section lengths and repeated structures from nine basic rhythm sequences\, hence my sequence was 9 28 14 7 22 11 34 17 52 26 13 40 20 10 5 16 8 4 2 1. The piece alternates sections of mixed materials (odd section numbers) with obsessively repeated material (even). The numbers are also used for the generation of the sound files triggered during the performance. 
Despite the rather abstract nature of the generative procedure\, the results of the algorithms were developed intuitively\, and the piece as a whole arises out of and proceeds through a maelstrom of events fitting the imagery of a hailstorm.\nHOTPO was commissioned by Henrique Portovedo for the World Saxophone Congress 2018 in Zagreb. That version included an ensemble. In 2020 I reworked the sound files to include MIDI data from the ensemble and made a solo + computer version. This was revised in 2024. \nAbout the artist\nI’m a composer\, improviser\, software developer\, and since 2017 Professor of Electronic Composition at ICEM\, Folkwang University of the Arts\, Essen\, Germany.\nI’m the programmer of the slippery chicken algorithmic composition package. My compositional interests lie mainly in the development of structures for hybrid electro-instrumental pieces through the integration of algorithmically produced scored materials with similarly generated computer-processed sound. I also improvise on laptop\, saxophones\, and MIDI wind controller\, performing for instance at the 2008 Montreux Jazz Festival.\nI studied composition at Bristol University with Adrian Beaumont (BA\, MMus) and privately with Gwyn Pritchard. In 1991 I moved to the US for further studies in computer music with John Chowning at CCRMA\, Stanford University (MA\, Doctor of Musical Arts). Whilst studying there I also worked at IRCAM\, Paris\, with a residence grant at the Cité des Arts.\nDuring 1996-7 I was a consultant software engineer in Silicon Valley\, where I developed a document recognition system used in several US hospitals. In 1997 I was appointed Lecturer in Music Theory at Stanford but later that year moved to Salzburg\, Austria. I was Guest Professor at the Universität Mozarteum until I left to teach at the University of Edinburgh in 2002. 
\n  \nRodrigo Cadiz: Tessellae \nTessellae for percussion and live electronics unfolds as a mosaic of small rhythmic tiles laid in time by a single performer. The percussion writing is built on Euclidean rhythmic principles\, patterns that distribute events as evenly as possible\, expanded through asymmetric tuplets (notably groups of three and five)\, repetitions\, and carefully placed silences that create a strong sense of anticipation from phrase to phrase. Only one or two instrumental lines sound at a time\, allowing the listener to perceive each gesture as a discrete tessera within a larger rhythmic surface. The live electronics\, built on RAVE\, a real-time variational autoencoder developed at IRCAM and trained on a corpus of percussion sounds\, listen to the performer and respond by reshaping timbre and resonance in the moment\, extending and refracting the acoustic material without fixing it in advance. The result is a dialogue between strict rhythmic architecture and fluid sonic transformation\, where expectation\, delay\, and renewal are central expressive forces. Tessellae was composed for Thierry Miroglio. \nAbout the artists\nRodrigo F. Cádiz is a composer\, researcher and engineer. He studied composition and electrical engineering at the Pontificia Universidad Católica de Chile (UC) in Santiago and obtained his Ph.D. in Music Technology from Northwestern University. His compositions\, approximately 70 works\, have been presented at venues and festivals around the world. His catalogue includes works for solo instruments\, chamber music\, symphonic and robot orchestras\, visual music\, computers\, and new interfaces for musical expression. He has received several composition prizes and artistic grants in both Chile and the US. He has authored around 70 scientific publications in peer-reviewed journals and international conferences. 
His areas of expertise include sonification\, sound synthesis\, digital audio processing\, computer music\, composition\, new interfaces for musical expression and the musical applications of complex systems. In 2018\, Rodrigo was a composer in residence with the Stanford Laptop Orchestra (SLOrk) at the Center for Computer Research in Music and Acoustics (CCRMA)\, and a Tinker Visiting Professor at Stanford University. In 2019\, he received the prize of Excellence in Artistic Creation from UC\, given for outstanding achievements in the arts. In 2024\, he was a visiting researcher at the Orpheus Instituut in Belgium. He is currently a full professor at the Music Institute and Electrical Engineering Department of UC. \nFor several years Thierry Miroglio has pursued a brilliant solo career\, invited to give recitals and solo concerts in more than forty countries at numerous venues and prestigious festivals\, including Salzburg\, the Berlin Philharmonie\, New York\, the Vienna Konzerthaus\, Boston\, Besançon\, San Francisco\, Munich\, Schleswig-Holstein\, Madrid\, Rome\, Tokyo\, Milan\, Zagreb\, Nice\, Cologne\, Paris\, Hamburg\, Athens\, São Paulo\, Lisbon\, the Monte Carlo Printemps des Arts\, Hong Kong\, the Buenos Aires Colón Theater\, Geneva\, the Bruges Concertgebouw\, the Bucharest Athenaeum\, Beijing\, Amsterdam\, the Linz Brucknerhaus\, Rio\, Darmstadt\, Helsinki\, Johannesburg\, Mexico\, Seoul\, Shanghai\, Moscow\, the Venice Biennale … \n  \nSunhuimei Xia: The Center of the Universe\nThe Center of the Universe\, an algorithmic music work integrated with interactive technology\, draws inspiration from the artist’s immersive impressions of New York City gleaned through multiple on-site visits. Standing atop the Empire State Building\, the artist perceived the metropolis as a dynamic global nexus where people of diverse cultural and ethnic backgrounds converge\, weaving a vibrant\, multifaceted urban tapestry that resonates with the energy of an interconnected world. 
Taking the phrase “The Center of the Universe” as its foundational sonic material\, the work delivers innovation through experimental multilingual vocal manipulation—deploying the core line in English\, Spanish\, French\, German\, Italian\, Russian\, Chinese\, Japanese\, Korean\, and Thai—with all vocal textures sourced from sampled macOS AI voices\, blending computational sound synthesis with linguistic diversity to push the conventional boundaries of vocal-based algorithmic composition. It achieves nuanced translation by converting the artist’s subjective perceptual experience of the city into an audible\, interactive sonic landscape\, while translating the abstract idea of cross-cultural convergence into tangible musical logic via the layered interplay of multilingual vocal samples. Further embodying participation\, the piece adopts wireless Nintendo Wiimote controllers as its interactive performance interface\, enabling the performer to stand at the “center” of the stage and manipulate the musical structure in real time; this design redefines the dynamic between creator\, performer\, and audience\, turning the performance into a collaborative process where physical movements directly shape sonic evolution. \nAbout the artist\nSunhuimei Xia is Associate Professor of Art and Technology in the Composition Department of the Wuhan Conservatory of Music. She holds a Master’s from Johns Hopkins University and a Doctorate from the University of Oregon (U.S.). 
She was mentored by renowned composers Jian Feng\, Jian Liu\, Geoffrey Wright\, and Jeffrey Stolet.\nThe first in central and western China to earn a DMA in data-driven musical instrument composition and performance\, she focuses on computer music creation and music-technology integration\, with core interests in interactive data-driven instruments\, algorithmic composition\, and data sonification.\nHonored as a Music Entrepreneurship and Innovation Talent by the Ministry of Culture and an Outstanding Young and Middle-Aged Literary and Art Talent by the Hubei Federation of Literary and Art Circles\, her works have won the Hubei Golden Bianzhong Music Award\, with over 10 pieces showcased at top global events including ICMC\, ISMIR\, NIME\, SMC\, SEAMUS\, NYCEMF\, EMM\, IRCAM\, WOCMAT and Musicacoustica-Beijing.\nShe released China’s first DVD album of data-driven instrument works\, published by Shanghai Music Publishing House and Shanghai Literature & Art Audio-Video Electronic Publishing House. She has guided students to more than 20 domestic and international awards\, leads provincial projects and participates in the Ministry of Education’s Humanities and Social Sciences Youth Fund Project\, driving music-technology innovation.
URL:http://icmc2026.ligeti-zentrum.de/event/concert-1a/
LOCATION:Hamburg University of Technology\, Building I\, Audimax 2\, Denickestraße 22\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Concert,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T153000
DTEND;TZID=Europe/Amsterdam:20260511T160000
DTSTAMP:20260429T121824
CREATED:20260421T083405Z
LAST-MODIFIED:20260421T083405Z
UID:10000146-1778513400-1778515200@icmc2026.ligeti-zentrum.de
SUMMARY:Introduction & Welcome to ICMC HAMBURG 2026
DESCRIPTION:ICMC HAMBURG 2026 welcomes this year’s conference community to Hamburg. On this first full conference day\, the team shares a few words about the week’s program before Robert Henke gives his keynote about his life as a toolmaking artist. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/introduction-welcome-icmc-hamburg-2026/
LOCATION:Hamburg University of Technology\, Building H\, Audimax 1\, Am Schwarzenberg-Campus 5\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,General
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T160000
DTEND;TZID=Europe/Amsterdam:20260511T170000
DTSTAMP:20260429T121824
CREATED:20260421T082545Z
LAST-MODIFIED:20260422T143423Z
UID:10000078-1778515200-1778518800@icmc2026.ligeti-zentrum.de
SUMMARY:Keynote | Robert Henke: "My Life as a Toolmaking Artist: A Personal Reflection on the Challenges and Rewards of Building My Own Instruments"
DESCRIPTION:I had the privilege of witnessing—and participating in—the historic shift of computer-generated music from an academic pursuit to something accessible in a bedroom studio. I embraced this opportunity wholeheartedly\, using environments like IRCAM’s Max to explore new sonic and structural territories. This allowed me to move beyond the constraints of physical instruments I could afford\, the limitations of my own hands\, and the rigid mental models of established MIDI sequencing software. \nDriven by a desire to achieve unique and personal results with limited computing power and knowledge\, I came to value the creative freedom found in self-imposed limitations. This experience led to a deep appreciation for simple yet powerful concepts\, algorithms\, and interfaces. \nSince the beginning\, my music emerged from an iterative process: building instruments\, being surprised and inspired by the results\, and then revising the instruments in response. The insights I gained not only informed a successful commercial product but\, more importantly\, shaped my identity as an artist and my approach to computer-based creation. \nIn my talk\, I will examine selected works of mine from a critical toolmaker’s perspective: did I reinvent the wheel again\, or did I achieve an artistic outcome which justifies the effort? \n  \nRobert Henke\nRobert Henke is an artistic toolmaker and a toolmaking artist\, exploring the creative potential of technology. His practice spans musical compositions\, concerts\, large-scale audiovisual installations\, and computer graphics. His work frequently involves inventing custom algorithms and machines\, blending rigid structure with controlled randomness. His music channels the raw\, repetitive energy of techno culture\, as well as the intricate details and textures of abstract contemporary works. 
His visual art builds on the legacies of Minimal Art and early computer graphics pioneers.\nSince 1995\, he has recorded and performed as Monolake\, initially a duo with Gerhard Behles and\, since 1999\, a solo project. His artistic collaborations include works with Marko Nikodijevic\, Tarik Barri\, and Christopher Bauder\, among others.\nHenke is also a co-creator of Ableton Live\, software that revolutionised music production and electronic performance. He lectures and writes on sound and creative computing\, and has taught at institutions such as the Berlin University of the Arts\, Stanford’s Center for Computer Research in Music and Acoustics (CCRMA) and IRCAM in Paris.\nHis installations\, performances\, and concerts have been presented at leading venues worldwide\, including Tate Modern\, Centre Pompidou\, PS1\, MUDAM\, MAK\, Palazzo Grassi\, and countless music festivals. \nMore about Robert Henke: www.roberthenke.com \n 
URL:http://icmc2026.ligeti-zentrum.de/event/keynote-robert-henke/
LOCATION:Hamburg University of Technology\, Building H\, Audimax 1\, Am Schwarzenberg-Campus 5\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Keynote
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T170000
DTEND;TZID=Europe/Amsterdam:20260511T190000
DTSTAMP:20260429T121824
CREATED:20260421T091309Z
LAST-MODIFIED:20260423T185456Z
UID:10000148-1778518800-1778526000@icmc2026.ligeti-zentrum.de
SUMMARY:Workshop | Serge Lemouton\, Jacques Warnier\, Malena Fouillou\, and Laurent Pottier: Practical Documentation and Collaborative Preservation using Antony
DESCRIPTION:The goal of this hands-on workshop is to present\, for the first time in an international context\, the Antony system\, now in its final state and fully functional.\nThe Antony platform provides a structured system for archiving\, documenting\, and accessing materials from mixed-music works to ensure their long-term preservation and reuse. The Antony project addresses the difficulty of preserving artistic works that rely on evolving and often incompatible technologies. It highlights how the survival of these works depends on a small group of experts capable of updating and maintaining their digital components.\nAt the end of this workshop\, participants will be able to use the database to document\, distribute and preserve their own creations. \n  \nRequirements\nThis workshop primarily addresses composers\, computer music designers and performers\, but it may also be of interest to media artists\, musicologists\, documentalists and music publishers.\nParticipants should come with the media related to an existing artistic project of their own that they wish to editorialize and preserve. \n  \nWorkshop registration\nPlease register via Pretix in order to participate in the workshop. There are no additional costs.  \n  \n\nAbout the workshop facilitators\nSerge Lemouton \nComputer Music Designer – Institut de Recherche et Coordination Acoustique/Musique – Centre Georges Pompidou (IRCAM-CGP) \nSince 1992\, Serge Lemouton has worked as a computer music designer at IRCAM\, collaborating with researchers to develop computer tools and taking part in the production and public performance of numerous composers’ musical projects. He is currently working on score-following systems\, analysis of instrumental gesture and constraint programming for computer-assisted composition. His current research leads him to study the transmission and preservation of the computer music repertoire. 
\n  \nJacques Warnier \nResearch Engineer\, Ministry of Culture – Computer Music Realizer (RIM)\, Conservatoire National Supérieur de Musique et de Danse de Paris (CNSMDP) \nSince 2007\, Jacques Warnier has supported the composition and new technologies class at CNSMDP\, producing concerts and performing live electronics for mixed repertoire works. After earning the Saint-Etienne Master’s degree in Computer Music Design in 2015\, he joined the Ministry of Culture as a research engineer in 2016.\nHis role combines musicianship and engineering to create the artistic and technical conditions required for performing 20th- and 21st-century music involving audio-digital technologies. His research focuses on making this repertoire accessible to students: curating works by instrument\, acquiring scores and electronic parts\, cataloging them in the Hector Berlioz media library\, and preserving or reconstructing electronic components.\nHe has been a member of the AFIM working group on “Collaborative Archiving and Creative Preservation” (since 2018)\, now “Antony\,” and has participated in the Humanum consortium for digital musicology (Musica2) since 2022. \n  \nMalena Fouillou \nAn acoustic engineer and computer music producer\, Malena has had a wide-ranging career. After completing her higher education studies in acoustics\, she joined Ircam in 2022 and graduated with a master’s degree in ATIAM (Acoustics\, Signal Processing\, Computer Science for Music). It was only natural that she joined the Next ensemble of the Paris Conservatory\, in partnership with the Ensemble Intercontemporain. This training allowed her to study with distinguished RIM professors such as Arshia Cont\, Augustin Müller\, and Andrew Gerszo\, and to perform works by Marco Stroppa\, Pierre Boulez\, Martin Matalon\, and others. She is currently pursuing her PhD at Paris 8\, where her research focuses on qualitative and quantitative descriptions of the spatiality of sound. 
She is part of a working group with Serge Lemouton (Ircam)\, Jacques Warnier (CNSMDP) and Laurent Pottier (ECLLA-UJM) on the Antony project\, a collaborative platform for the preservation and sharing of musical heritage using digital technologies. \n  \nLaurent Pottier \nProfessor of Musicology & Computer Music at Jean Monnet University (Saint-Etienne\, France)\, ECLLA laboratory \nLaurent Pottier is the head of the RIM (Réalisateur en Informatique Musicale / Computer Music Producer) professional master’s program and of the DIGICREA (Digital Creativity – Arts & Sciences) international EMJM master’s program. His research at the ECLLA laboratory\, Saint-Etienne University\, involves music using electronic and digital technologies. He taught at Ircam (1992-1996)\, then headed the research department at GMEM in Marseille (1997-2005). As a RIM\, he has worked with many composers\, in particular J.-B. Barrière\, J. Chowning\, T. De Mey\, A. Liberovicci\, C. Maïda\, A. Markeas\, F. Martin\, T. Murail\, J.-C. Risset\, F. Romitelli and K.T. Toeplitz. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/workshop-serge-lemouton-et-al-practical-documentation-collaborative-preservation-using-antony/
LOCATION:Hamburg University of Technology\, Building H (H 0.02)\, Am Schwarzenberg-Campus 5\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Workshop
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T190000
DTEND;TZID=Europe/Amsterdam:20260511T210000
DTSTAMP:20260429T121824
CREATED:20260421T085527Z
LAST-MODIFIED:20260427T085012Z
UID:10000079-1778526000-1778533200@icmc2026.ligeti-zentrum.de
SUMMARY:Evening Concert 1B
DESCRIPTION:This evening concert marks a special collaboration between the international ICMC community and Hamburg’s music scene. At its center is Ensemble 404 from the Hamburg University of Music and Drama (HfMT). For this occasion\, a video wall will be specially installed in the Friedrich-Ebert-Halle to highlight the synergy between sound and image.\nThe program ranges from intimate solo pieces with computer support to complex ensemble compositions and large-scale video works. \n  \nProgram Overview\nFantasy for Viola and Computer\nRichard Dudas \nNeuro Translation Engine\nVincenzo Russo \nClimate II for piano and computer \nRikako Kabashima \nWind Blown Rain\nMara Helmuth\, Esther Lamneck and Alfonso Belfiore \nDelicate Anticipation\nKotoka Suzuki \nAir-Carving Bamboo\nYu Chung Tseng \n  \nAbout the pieces & artists\nRichard Dudas: Fantasy for Viola and Computer\nThis work for solo viola and real-time audio processing in Max is a composed extension of some prior improvisational works using Max. It was written in part as an exploration of Bohlen-Pierce tuning (in the electronics)\, which divides the perfect twelfth into thirteen unequal justly-tuned steps. The viola part is pitted against this\, performing in standard twelve-equal-steps-to-the-octave tuning\, juxtaposing and combining several different musical fragments\, each with its own character and mood. All sounds in the electronics are live: they are derived from the sounds of the on-stage violist. Max audio processing includes formant filtering to provide a vocal quality to the transposed and resonated viola sounds. \nAbout the artist\nRichard Dudas holds degrees in Music Composition from the Peabody Conservatory of Music of the Johns Hopkins University and from the University of California\, Berkeley. He additionally studied at the Franz Liszt Academy of Music in Budapest\, Hungary\, and the National Regional Conservatory of Nice\, France. 
In addition to composing music for acoustic instruments\, he has been actively involved with music technology since the late 1980s. As a computer musician\, he has taught courses at IRCAM\, and developed musical tools for Cycling ’74. Since 2007 he has been teaching music composition and computer music at Hanyang University in Seoul\, Korea. \n  \nVincenzo Russo: Neuro Translation Engine\nIn the future\, global societies remain marked by a multitude of languages\, dialects\, idiolects\, and diverse phonetic and cultural systems. Despite advances in AI-driven translation\, fundamental limits persist in the loss of emotional nuance\, imprecise interpretations\, and gaps between what is said and what is perceived. A team of computational linguists and neuroscientists develops an advanced artificial entity: the Neuro Translation Engine (NTE)\, capable of surpassing traditional textual or acoustic translation. The NTE does not translate words\, but the neural intentions behind language. It stimulates a specific area of the human brain\, the resonance cortex\, designed to receive universal neurosensory patterns. The result is a world where everyone can speak their native language while perfectly understanding others. Linguistic diversity is not diminished but enriched through mutual comprehension. The composition for ensemble and electronics illustrates how the NTE processes\, transforms\, and reconstructs communicative material. Through sound transformation techniques\, the acoustic material is dematerialized\, representing the machine’s “internal work”: the conversion of complex signals into a unified code. The final sound is entirely electronic\, devoid of recognizable references to the original ensemble. It forms a new language\, perceived as a pattern directly interpreted by the brain. 
\nAbout the artist\nVincenzo Russo (1995) holds a bachelor’s degree in Business Administration from the University of Naples “Parthenope.” He began his musical studies in Composition for Visual Media at the San Pietro a Majella Conservatory in Naples under the guidance of the late Maestro Lucio Lo Gatto. In July 2025\, he completed the second-level degree (Master’s degree) in Composition. Alongside his academic work\, he is active as a composer\, arranger\, and music producer\, working from his own recording studio. \n  \nRikako Kabashima: Climate II for piano and computer \nThis work was composed based on a variety of ideas inspired by climate change. In recent years\, translating insights from the natural world into my own compositions has become an important experiment in my creative practice.\nIn particular\, this piece draws inspiration from the rapid climate fluctuations caused by global warming\, a pressing issue worldwide. Each measure in the work is specified in seconds rather than traditional beats\, and there is no fixed meter. Within each measure\, rhythms are performed improvisationally according to the given duration.\nThis approach allows for different rhythms and nuances to emerge in every performance\, reflecting the ever-changing nature of the climate itself. \nAbout the artist\nRikako Kabashima was born in Kagoshima\, Japan\, in 1996. She began studying piano at the age of three and later pursued composition at Senzoku Gakuen College of Music in Tokyo. After completing her undergraduate studies in 2021\, she entered the master’s program in composition at Toho College of Music\, where she studied with Kazuro Mise and Hitomi Kaneko\, and explored computer music under the guidance of Takayuki Rai. 
She earned her master’s degree in March 2025.\nHer works have been selected for international festivals including the New York City Electroacoustic Music Festival (NYCEMF) in 2023 and the International Computer Music Conference (ICMC) in 2023\, 2024\, and 2025. \n  \nMara Helmuth\, Esther Lamneck and Alfonso Belfiore: Wind Blown Rain\nWind Blown Rain was inspired by natural processes and forces involving water. Water metamorphoses between many opposing states: from a gentle drizzle to a stormy downpour\, from a tiny droplet to a crashing ocean. Life on earth is dependent on water\, and also at its mercy. This piece focuses mainly on the transformed sounds of rain\, and its reflections in the tárogató sound. Samples were recorded in Venice and Ascea\, Italy. The music was composed in Italy in the summer of 2025 at the Wassard Elea artists’ residency in Ascea by a computer music composer and a performer/real-time composer. While most of our previous collaborations have relied solely on the sound of the performer’s instrument for the computer part\, in this piece the instrumentalist interacts primarily with music created from natural recordings and their processed transformations. A third artist created the video part in response to the music from his own water-related video recordings. The video component of Wind Blown Rain is a visual meditation on the natural landscape\, filtered through the inner rhythm of rainfall. Created with images generated and modified using artificial intelligence\, the editing alternates slow-motion sequences\, crossfades\, and subtle variations to evoke a dilated sense of time. The environment\, immersed in rain\, transforms gradually\, suggesting a fragile balance between presence and dissolution. The visual work accompanies the music as a mental landscape—fluid and contemplative. \nAbout the artists\nMara Helmuth (b. 1957)\, an internationally known computer music composer and researcher\, received a Guggenheim Fellowship in 2025. 
Her research explores sonification\, granular synthesis\, wireless sensor networks\, Internet2\, and RTcmix. She is Professor at the College-Conservatory of Music\, University of Cincinnati\, where she received the George Rieveschl Award for Scholarly/Creative Works in 2023. She served on the International Computer Music Association board of directors and as its President. She holds a D.M.A. from Columbia University and earlier degrees from the University of Illinois at Urbana-Champaign. \nEsther Lamneck\, Clarinet and Tarogato\nThe New York Times calls Esther Lamneck “an astonishing virtuoso.” She has appeared as a soloist with major orchestras\, with renowned chamber music artists and with an international roster of musicians from the new music improvisation scene. http://www.estherlamneck.com/ \nAlfonso Belfiore is a composer and visual artist whose work explores the relationships between sound\, image\, movement\, and perception. Former professor of electronic music at the Conservatories of Florence and Padua\, he has collaborated with international institutions\, creating performances\, sound installations\, and multidisciplinary projects that merge musical innovation with digital art. His recent work investigates memory\, dreamlike space\, and the fragile line between reality and imagination. \n  \nKotoka Suzuki: Delicate Anticipation\nThis work is written as part of the series “In Praise of Shadows\,” inspired by Junichiro Tanizaki’s essay of the same title\, written at the birth of the modern era in imperial Japan. The essay describes how shadows and negative space are integral to traditional Japanese aesthetics in music\, architecture\, and food\, extending even to the design of everyday objects. 
As Tanizaki explains\, “We find beauty not in the thing itself but in the patterns of shadows\, the light and the darkness\, that one thing against another creates… Were it not for shadows\, there would be no beauty.” \nThe first work in the sequence\, “In Praise of Shadows” for three paper players and electronics\, focuses on the collective loss of the tangible in our modern life\, analogous to how the excessive illumination of Edison’s modern light affected Japanese aesthetics and culture. Following this work\, “Orison” is composed for three music box players and electronics. The work is further inspired by the voices of children of war\, both past and present\, speaking and singing about hope and peace as well as sorrows arising from their personal experiences. These melodies\, presented as empty spaces on the music score\, are revealed as they are fed through the music boxes. \nIn the third part of the sequence\, “Delicate Anticipation\,” written for a solo percussionist\, electronics\, and lights\, shadow is the central focus\, honouring the “patterns of shadows\, the light and the darkness\, that one thing against another creates”. Positioned behind a scrim\, the percussionist is visible only as a shadow while performing with lights and instruments primarily of metal and skin\, manipulating patterns of carefully choreographed shadows. The title derives from the English translation of the essay\, which describes the sensation of gazing at the silent liquid in the dark depths of a Japanese lacquerware bowl. As Tanizaki writes\, “What lies within the darkness one cannot distinguish… the fragrance carried upon the vapor brings a delicate anticipation.” \nAbout the artists\nKotoka Suzuki’s work engages deeply with the visual\, conceiving of sound as a physical form to be manipulated through the sculptural practice of composition. 
Artists such as the Arditti Quartet\, Eighth Blackbird\, Nouvel Ensemble Moderne\, and Mendelssohn Chamber Orchestra (Leipzig) have featured her work internationally through numerous venues and broadcasts\, including BBC Radio 3\, Schweizer Radio\, Lucerne Festival\, Heroines of Sound Festival\, Ultraschall\, and ZKM Media Museum. Suzuki is currently an Associate Professor at the University of Toronto. \nMichael Murphy is a Chinese-Canadian percussionist praised by The New York Times\, Opera Canada\, and The Herald. He has toured across North America\, Europe\, Scandinavia\, and Asia\, performing with ensembles including the Toronto Symphony Orchestra\, the National Ballet of Canada Orchestra\, and Philharmonisches Orchester Freiburg. A leading advocate for new music\, he has premiered concertos by Alice Ping Yee Ho\, Liam Ritz\, and Bob Becker and champions contemporary repertoire internationally. \n  \nYu-Chung Tseng: Air-Carving Bamboo \n“Air-Carving Bamboo Music” premiered at the 2025 C-LAB Sound Arts Festival_DIVERSONICS. This work is an acousmatic/electroacoustic piece. The material comes from the composer’s field recordings of bamboo colliding on the shores of Emei Lake in his hometown of Hsinchu County in Taiwan. 
Through editing and transformation using DAW software\, and incorporating feedback material from the AI system Somax 2 on some of the bamboo collision rhythms\, the work was finally organized into an electroacoustic music piece.\nIn terms of performance style\, the composer wanted to depart from traditional\, purely playback-based electroacoustic music\, creating a synesthetic aesthetic experience for both ears and eyes and making the electroacoustic music visible.\nThe composer invited percussionist Hsieh Yi-chieh to wave glow sticks in the dark\, as if drawing out or sculpting the electroacoustic music in the air\, a technique akin to “grabbing music from a distance.” This presentation method\, besides giving electroacoustic music a performative quality\, greatly enhances the visual appeal\, auditory appeal\, and sonic dramatic tension of the performance. Postscript: Having composed electroacoustic music for more than two decades\, the composer occasionally wants to dabble in this area\, slightly transcending the aesthetic/philosophical view of “sound-only\, purely auditory” listening in acousmatic/electroacoustic music. \nAbout the artist\nYu-Chung Tseng\, who received his DMA from the University of North Texas\, is a professor of electronic music composition and director of the Multi-channel Sound Lab at the Institute of Music\, National Yang Ming Chiao Tung University (NYCU)\, in Taiwan. 
\nHis music\, written for both acoustic and electronic media\, has been recognized with selections and awards from the Pierre Schaeffer International Computer Music Competition (1st Prize/2003)\, the Città di Udine International Contemporary Music Competition\, Musica Nova (First Prize/2010)\, Metamorphoses\, the International Computer Music Conference (ICMC\, Best Music Award/2011/2015/2022)\, the Taukay Edizioni Musicali call for acousmatic music (Winner/2019)\, the RMN Classical electroacoustic call for works (Winner/2023)\, the Polish International Electroacoustic Music Competition (Finalist/2023)\, and the KLANG International Acousmatic Composition Competition (Second Prize/2023). \n 
URL:http://icmc2026.ligeti-zentrum.de/event/concert-1b/
LOCATION:Friedrich-Ebert-Halle\, Alter Postweg 34\, Hamburg\, 21075\, Germany
CATEGORIES:11-05,Concert,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T213000
DTEND;TZID=Europe/Amsterdam:20260511T233000
DTSTAMP:20260429T121824
CREATED:20260421T145800Z
LAST-MODIFIED:20260423T185733Z
UID:10000067-1778535000-1778542200@icmc2026.ligeti-zentrum.de
SUMMARY:Club Concert 1C
DESCRIPTION:Immerse yourself in a 20.8-channel sound world: in the Production Lab of the Ligeti Center\, neural synthesis\, artificial intelligence\, and interactive visuals merge into an immersive live experience. International artists present innovative prototypes—from AI-augmented string instruments to dynamic graphic scores. \n  \nProgram Overview\nZwischenheit \nRiccardo Ancona \nKnitting\nBrian Lindgren \nSonic Memories: A Live Coding Performance with Machine-Learned Sound Fragments\nRiccardo Mazza \nGradient Noise: Animated Scores with Corresponding Data Streams\nJohn C.S. Keston \nFluid Ontologies\nNicola Leonard Hein and Viola Yip \nOn The Edge\nKasey Pocius \nScarittera – Subterranean Eruptions of Sonic Memory\nDanilo Randazzo \n\n\n  \nAbout the pieces & artists\nRiccardo Ancona: Zwischenheit \n\nContemporary neural audio research frames “music understanding” as a computational task. What does it mean for a machine to listen and understand a sonic context? Zwischenheit (2025) is an audiovisual performance that aims at finding a speculative\, empirical\, situated answer. The projection shows the performer having an improvisational dialogue with an algorithmic system composed of an audio captioner and a local language model. While the sound piece unfolds\, it reveals a complex scenario made of overlapping soundscapes. The language model is prompted to interpret the music as it flows\, trying to provide a nuanced understanding of the sonic situation. The human performer\, on the other hand\, is both inquisitive and reflective: at which threshold does the language model begin to appear as an agent of mystification? What does agency without consciousness reveal about listening? The outcomes of the dialogue change at every performance\, as there is a certain degree of stochasticity in the model’s replies\, but they always point to critical aspects of sonic hermeneutics and computational cognition. 
Embodiment\, contingency\, and situatedness emerge as essential characteristics of human listening that contemporary neural networks cannot embed. Zwischenheit is thus an attempt at investigating the performative possibilities that emerge at the intersection between post-acousmatic music\, music information retrieval\, and generative AI through an analytical self-reflection. \nAbout the artist\nRiccardo Ancona is a sound artist and PhD researcher in musicology of algorithmic music at the University of Bologna. He studied at CREA (Frosinone) and at the Institute of Sonology (Den Haag)\, where he specialized in algorithmic improvisation. His research focuses on computational aesthetics\, archival study of computer music\, and the sociology of neural audio technologies. He also curates Miniature Recs. \n  \nBrian Lindgren: Knitting \nKnitting is a new work for the EV\, an augmented bowed string instrument that integrates IRCAM’s RAVE (Realtime Audio Variational autoEncoder) neural synthesis model. The composition explores how machine learning can extend the timbral vocabulary of a traditional gestural practice—not by imposing external sonic material\, but by folding the instrument’s own acoustic identity back through a neural lens. \nThe EV combines a 3D-printed body with four infrared optical pickups whose signals are processed by a Bela board and transmitted to a laptop running Pure Data. Each string controls an independent synthesis engine comprising convolution\, physical modeling\, granular processing\, reverb\, and ambisonic spatialization. The recent addition of RAVE introduces a self-referential pathway: the model was trained on four hours of the EV’s own recordings\, creating a system that listens to itself through learned representations of its sonic history. 
\nCentral to this integration is a control strategy that maps performance descriptors—fundamental frequency\, amplitude\, and spectral centroid—to specific dimensions of the model’s eight-dimensional latent space. By constraining each modulation source to a single latent dimension\, the relationship between gesture and neural response becomes legible: a shift in bow pressure or position translates into a navigable timbral trajectory rather than an opaque transformation. This approach distinguishes the EV from other RAVE-integrated instruments\, which often emphasize loop-based or tabletop interfaces rather than continuous bowed-string control. \nKnitting treats this latent space as a landscape of sonic possibility\, each dimension a potential resonance between physical gesture and synthesized response. The compositional process is less one of arranging fixed materials than of cultivating emergent textures—drawing out sonic filaments\, crossing and interlacing them\, balancing tensions across the tapestry. The neural model functions as a meta-resonator: a parallel pathway that refracts the instrument’s timbral identity through an alternate causal route\, revealing aspects of its sound that remain latent in conventional electroacoustic processing. \nThe work demonstrates how neural synthesis can be embedded within a hybrid instrument ecology\, extending expression beyond pitch and amplitude to make performance descriptors direct agents of timbral transformation. By grounding latent navigation in the acoustic features of bowed-string technique\, Knitting positions machine learning not as a replacement for embodied practice but as an expansion of its expressive range. \nAbout the artist\nBrian Lindgren (1983) is a composer\, researcher\, violist\, and instrument builder whose work explores the convergence of acoustic performance and digital synthesis through the EV\, a hybrid string instrument integrating lutherie and embedded computing. 
\nHis compositions and research have been featured at the International Computer Music Conference (ICMC)\, New Interfaces for Musical Expression (NIME) conference\, Conference on Neural Information Processing Systems (NeurIPS)\, Society for Electro-Acoustic Music in the United States (SEAMUS)\, IRCAM Forum\, and International Conference on Auditory Display (ICAD)\, as well as in the journal Organised Sound. His work has been performed by ensembles including HYPERCUBE\, LINÜ\, Popebama\, and Tokyo Gen’on Project. \nThe EV was a finalist in the 2026 Guthman Musical Instrument Competition and was used to compose ‘two tales from the shadows of the grid’\, which won first place at the IEEE Big Data 2025 3rd Workshop on AI Music Generation Competition. \nLindgren holds an MFA in Sonic Arts from Brooklyn College (Subotnick\, Geers\, Gimbrone)\, a BA from the Eastman School of Music (Graham)\, and is pursuing a PhD at the University of Virginia (Burtner). \n  \nRiccardo Mazza: Sonic Memories: A Live Coding Performance with Machine-Learned Sound Fragments \nDrawing from Henri Bergson’s concept of durée and Deleuze’s rhizomatic models\, “Sonic Memories” reimagines memory not as a linear chronological archive\, but as a stratified field of coexisting planes. In this live coding performance\, autobiographical sound fragments—from mechanical gears to lagoon soundscapes and fragile voices—are liberated from their timeline and reorganized by an autoencoder into a non-hierarchical\, navigable map. \nThe performance begins with the simple act of loading a personal audio file—a field recording from a journey\, a voice memo\, a musical fragment—into a computational system that immediately begins to analyze and reorganize these sonic memories according to its own logic. \nOn stage\, the audience sees everything: the code acting in real time\, a visual map where memories become points in space\, oscilloscopes showing the transformation of sound waves. 
This transparency is essential—there is no mystification of the technological process\, but rather an invitation to witness the negotiation between human remembering and algorithmic interpretation. \nThe performer navigates this latent space using SuperCollider and FluCoMa\, triggering both the original “concrete” traces and their AI-generated “distorted echoes.” The algorithm serves not as an autonomous agent\, but as a refracting lens\, forcing the performer to negotiate between faithful recall and neural hallucination. The result is a fragile dialogue between the fixity of the past and the malleability of the present\, exploring how computational tools can actualize memory as a living\, reconstructive act. \nThe work asks: How do we perform memory in an age of machine learning? Not by having machines remember for us\, but by creating dialogues with computational systems that reorganize our experiences according to their own logic\, forcing us to rediscover our own histories through unfamiliar maps. \nAbout the artist\nRiccardo Mazza (Turin\, 1963) is a composer\, multimedia artist\, and faculty member at the Scuola di Alto Perfezionamento Musicale di Saluzzo. He collaborates with SMET (Electronic Music School) at the Conservatorio di Torino and the Conservatorio Ghedini in Cuneo\, and is internationally recognized for his research in psychoacoustics and spatial audio.\nIn 1997 he began a collaboration with Franco Battiato\, focusing on new technologies for sound. Between 1999 and 2000 he created the Renaissance SFX library\, the first Dolby Surround-encoded collection of spatial effects and field recordings for cinema and television. 
He later developed SoundBuilder\, software for object-based surround design presented at AES 2003 in San Francisco\, which anticipated Dolby Atmos.\nHe founded Interactive Sound in 2001\, a research studio dedicated to multimedia exhibitions and immersive installations\, and in 2003 patented a psychoacoustic model of “sleep waves.” With Laura Pol\, he co-founded Project-TO (2015)\, an electronic and visual project that has released four albums and appeared at major festivals including TFF\, TJF\, Robot\, and Share Festival.\nSince 2018\, he has directed Experimental Studios in Turin\, one of Europe’s leading Dolby Atmos recording facilities. His current project Sonic Earth explores environmental sonification and algorithmic composition\, and has been presented internationally at ICMC 2025 in Boston\, FARM/SPLASH 2026 in Singapore\, SBCM 2025 (Brazil)\, and IEEE 2025 (L’Aquila). \n  \nJohn C.S. Keston: Gradient Noise: Animated Scores with Corresponding Data Streams\nSince 2019 I have been composing animated graphic scores for ensembles and soloists. These generative works are projected for both the performers and audience to experience. Custom software runs during the performance to create the computer graphics and geometric forms. Rules are established for how the forms are read\, but improvisation and the emotional response of the performer still play an integral part in each piece. A fixed-media version of this work would not suffice because it would lack the real-time\, generative\, and participatory aspects that create surprise and challenges for the performers. \nMore recently I began composing scores that not only generate animated visuals\, but also stream corresponding MIDI data that impacts the timbre and signal processing of the electronic instruments used by the performers. The instruments are either hardware-based synthesizers or virtual instruments within a DAW such as Ableton Live. 
One of my recent compositions applies these streams of data to four layers of FM synthesis engines running within the Dirtywave M8\, a technically advanced modern hardware tracker. \nMy newest work in progress\, Gradient Noise\, translates values generated by the Perlin noise algorithm into independent layers of seamless loops repeating at variable intervals. These loops are visualised as geometric forms\, abstract visualisations\, and evolving structures. The generated data is distinctive because\, although aleatoric\, the values can be tuned to range between slowly moving gradients and rapid\, angular forms. When the sound and visuals are synchronized\, the performer responds not only to the animation but also to the changes in the timbre of their instruments. \nThe debut of Gradient Noise will address the themes of Innovation\, Translation\, and Participation by rethinking the relationships between musicians and machines. By translating the properties of n-dimensional Perlin noise into a musical language\, the piece presents a unified ecosystem with coordinated timbres and geometric forms. The innovation lies in generating a living environment that requires active participation and improvisation in contrast to static notation. Ultimately\, the work presents a contemporary model for computer music where the performer does not simply follow a score\, but negotiates a path through a responsive\, multi-sensory experience. \nAbout the artist\nJohn C.S. Keston is an award-winning transdisciplinary artist reimagining how music\, video art\, and computer science intersect. His work both questions and embraces his backgrounds in music technology\, software development\, and improvisation\, leading him toward unconventional compositions that convey a spirit of discovery and exploration through the use of graphic scores\, chance and generative techniques\, analog and digital synthesis\, experimental sound design\, signal processing\, and acoustic piano. 
Performers are empowered to use their phonomnesis\, or sonic imaginations\, while contributing to his collaborative work. Keston founded the sound design resource AudioCookbook.org\, where you will find articles and documentation about his projects and research. \nJohn has spoken\, performed\, or exhibited original work at SEAMUS (2025)\, Radical Futures (2024)\, New Interfaces for Musical Expression (NIME 2022)\, the International Computer Music Conference (ICMC 2022)\, the International Digital Media Arts Conference (iDMAa 2022)\, International Sound in Science Technology and the Arts (ISSTA 2017-2019)\, Northern Spark (2011-2017)\, the Weisman Art Museum\, the Montreal Jazz Festival\, the Walker Art Center\, the Minneapolis Institute of Art\, the Eyeo Festival\, INST-INT\, Echofluxx (Prague)\, and Moogfest. In 2017 he was commissioned by the Walker Art Center to compose music for former Merce Cunningham. He has appeared on more than a dozen albums\, including solo and collaborative works. \nNicola Leonard Hein and Viola Yip: Fluid Ontologies\nIn “Fluid Ontologies”\, Transsonic (Nicola Leonard Hein and Viola Yip) continues to expand their intermedial artistic practice in performances. For this project\, they developed their laser feedback instruments\, using lasers as sound sources and solar panels as microphones. With the incorporation of multichannel spatialization\, Transsonic extends the spatial dimensions\, sonically and visually\, creating a unique audiovisual experience. The project explores and defines new concepts of the instrumentality of light in audio circuits\, bringing together space\, bodies\, and instruments into a dynamic feedback system. \nAbout the artists\nDr. Nicola L. 
Hein is a sound artist\, guitarist\, composer\, researcher\, programmer\, and professor of Sound Arts and Creative Music Technology at the University of Music Lübeck.\nHe works with A.I.-assisted human-machine interaction\, postdigital lutherie\, intermedia\, sound installations\, augmented reality\, network music\, and spatial audio. His works have been realised in more than 30 countries\, at festivals such as MaerzMusik\, Sonica Festival\, and Experimental Intermedia. \nDr. Viola Yip is an experimental performer\, sound artist and instrument builder.\nHer work has been presented and supported by institutions such as Stanford University\, UC Berkeley\, Harvard University\, Cycling ‘74 Expo\, Hong Kong Arts Center\, Academy of Media Arts Cologne\, Academy of the Arts Berlin\, KTH Royal Institute of Technology Sweden\, Elektronmusikstudion EMS Stockholm\, NOTAM Oslo\, Arter Museum Istanbul\, Serralves Museum of Contemporary Arts Porto and Pinakothek der Moderne in Munich. \nviolayip.com \n  \nKasey Pocius: On The Edge \nOn the Edge is an audiovisual work for video\, T-Stick and surround sound. The work explores sounds and images of objects often on the edges of our perception\, as well as processing and results from edge cases in musical algorithms and technology. \nThe piece consists of four interlayered vignettes\, exploring the behaviour and textural qualities of various edge and peak detection algorithms to create the fixed media. These files then serve as the corpus for the granular synthesis controlled by the T-Stick. The gestural data from the T-Stick is sent from Max to Ossia\, where it is used to manipulate the treatment of the video clips in real time. \nThe technical aspects of the work consist of a fixed-media ambisonic file\, with real-time manipulation of video clips (in Ossia Score) and multichannel granular synthesis (in Max) controlled by the T-Stick. 
\nAbout the artist\nKasey Pocius is a gender-fluid intermedia artist and researcher based in Montreal\, teaching at Concordia and active with CIRMMT\, IDMIL\, LePARC\, and GRMS. They create electroacoustic and audiovisual works that explore interactive electronics\, spatial sound and collaborative improvisation\, with pieces programmed globally from DIY spaces to Harvard. \n  \n\n\n\n 
URL:http://icmc2026.ligeti-zentrum.de/event/club-concert-1c/
LOCATION:ligeti center\, Production Lab (10th floor)\, Veritaskai 1\, Hamburg\, 21079\, Germany
CATEGORIES:11-05,Club Concert,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260512T090000
DTEND;TZID=Europe/Amsterdam:20260512T103000
DTSTAMP:20260429T121824
CREATED:20260415T131848Z
LAST-MODIFIED:20260427T135840Z
UID:10000081-1778576400-1778581800@icmc2026.ligeti-zentrum.de
SUMMARY:Paper Session 3a: Music Notation & Representation I
DESCRIPTION:Three papers will be presented and discussed:\n  \nTianze Zhang\, Shingyui He and Lei Xuan\, “CNN-BiLSTM Hybrid Model with Physical Constraints for Automatic Piano Fingering Generation”\nPiano fingering is a pivotal technique that piano learners must master. To address the difficulties in the application and arrangement of fingering during practice and early stages of learning\, this paper proposes an artificial intelligence hybrid model based on a Bidirectional Long Short-Term Memory (BiLSTM) network\, a Convolutional Neural Network (CNN)\, and attention\, with the aim of automatically generating piano fingerings. The model extracts physical features\, including spatial\, temporal\, hand motion\, and fingering information\, and integrates biomechanical constraints during neural network training for the first time. Based on these components\, the model achieves good results. This innovative methodology enhances predictive performance by accurately capturing the complex physical interactions inherent in piano fingering. This paper also compares the model with fingerings generated by other algorithms to verify the reasonableness and effectiveness of the hybrid model in piano fingering prediction. In conclusion\, the model can efficiently and conveniently provide fingering support for piano learners\, and has strong application prospects and practical value. \n\nJuan Carlos Vasquez and Zhonghao Chen: “Recursive Radiance: Multimedia Interpretations of Traditional Chinese Aesthetics”\n\nThis paper presents Recursive Radiance\, a multidisciplinary artwork integrating traditional Chinese practices with contemporary technologies through parallel sonic and visual implementations. The project pairs a four-channel acousmatic composition with an installation of graphic scores inspired by the Jianzi notation system. 
The sonic component transforms improvisations based on the traditional guqin piece “Cai Zhen You” (Wandering in True Essence) through fab synthesis\, spatial diffusion\, and electronic processing. The composition employs chaotic attractors for spatial movement\, creating an immersive soundscape that embodies Daoist principles of fluidity and transformation. The visual component features a series of hanging scrolls and fragments functioning as both notation and artistic extension. These graphic scores emerge from a hybrid methodology combining traditional ink boxes with tensioned strings\, cyanotype printing\, 3D environmental scans\, and AI-generated imagery created through LoRA models trained on interpretative readings of the guqin’s notation system. Recursive Radiance functions as both a hybrid physical-digital installation and a framework for cultural preservation.\nThis paper documents the completed musical composition\, graphic scores\, and conceptual approach to public engagement through immersive multimedia. Our research demonstrates how computational tools and engineering techniques can support artistic expression while preserving cultural heritage\, offering new pathways for audience interaction with traditional art forms through contemporary multimedia experiences.\nRob Canning: “Scores That Run: Graphic Notation with Embedded Performance Semantics”\nThis paper presents an approach to digital graphic notation in which performance semantics are embedded directly into the visual surface of the score. Working in standard SVG and authored entirely in Inkscape\, the score is composed as a graphic–semantic document: visual elements carry lightweight cue structures encoded in their identifiers\, and these cues are executed in real time by a browser-based runtime. 
The score therefore functions simultaneously as image\, temporal structure\, and performative interface\, without reliance on symbolic engraving or external playback systems.\nThe framework supports hybrid formal topologies\, including continuous scrolling trajectories\, page-based local environments\, and patterned navigation between sectional states. Animated motion fields provide shared gestural resources for ensemble coordination and may optionally drive live electronic processes\, enabling a unified grammar of acoustic and electronic gesture. All cue semantics—structural\, temporal\, gestural\, textual\, and media-based—are authored within the same executable layer as the notation\, so behaviour and interpretation arise from a single surface. Because the system is based entirely on open web standards\, it enables a direct draw-and-perform workflow accessible to composers and performers without specialised technical infrastructure. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/paper-session-3a-music-notation-representation-i/
LOCATION:Hamburg University of Technology\, Building H\, Audimax 1\, Am Schwarzenberg-Campus 5\, Hamburg\, 21073\, Germany
CATEGORIES:12-05,Paper Session,Session
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260512T090000
DTEND;TZID=Europe/Amsterdam:20260512T223000
DTSTAMP:20260429T121824
CREATED:20260415T132257Z
LAST-MODIFIED:20260427T140803Z
UID:10000083-1778576400-1778625000@icmc2026.ligeti-zentrum.de
SUMMARY:Paper Session 3b: Physiological and Physical Foundations of Creative Systems I
DESCRIPTION:Three papers will be presented and discussed:\n  \nAmir Abbas Orouji\, Ayoub Banoushi and Gilberto Bernardes: “Vibrational Analysis of Traditional Persian Kamanche Sound Box: Experimental and Computational Investigation of Structural Modifications”\nThe kamanche\, a bowed spike fiddle central to Persian classical music\, features a spherical sound box covered with stretched animal skin and is played vertically on the performer’s lap. Despite acoustic similarities to the violin\, comprehensive research on kamanche acoustics remains limited. This study investigates the acoustic contribution of the sound box to resonance characteristics and tonal quality of the closed-back kamanche\, the most prevalent contemporary variant. The research combines COMSOL Multiphysics vibration simulation with experimental validation through impulse response frequency measurements. Investigated modifications include upper and lower hemisphere thickness variations and sound hole area reduction. Results demonstrate that upper hemisphere changes\, while preserving internal air volume\, substantially affect fundamental resonance patterns\, corroborating traditional luthier observations. This study also suggests that vibration modes 4\, 5\, and especially 7 might be good candidates for maximum contribution to the overall amplification of the string’s resonance and the overall sound of the instrument. \nNikolaus Knop: “Ponticello: An Interactive Conducting System for Mixed Music Performance”\nIn composed music that combines acoustic instruments with electronic processing or fixed media\, synchronizing acoustic and electronic layers remains a persistent challenge. The use of click tracks\, while technically effective\, significantly restricts the performers’ freedom to expressively shape musical time. This paper presents Ponticello\, a system that addresses the synchronization problem by inferring the ensemble’s tempo from a video stream of the conductor in real time. 
Instead of the ensemble being beholden to a fixed digital click track\, the computer follows the flexible pulse indicated by the conductor\, which already functions as a shared temporal reference for the human performers. Although the idea of interactive conducting systems is not new — it has been researched since the 1970s — research has largely focused on applications that simulate instrumental performances based on MIDI scores\, which limits their applicability to the performance of mixed music. To support a broad range of compositional strategies for mixed music\, Ponticello instead models the electronic part as a timeline of temporally extended electronic processes whose playback tempo is continuously controlled by the conductor. The system has proven sufficiently reliable and accurate in rehearsal and concert settings across multiple conductors.\nRuby Crocker\, Lucas Ong and George Fazekas: “Emotion-Based Film Music Retrieval with Handcrafted and Deep Models”\nFilm music powerfully conveys emotion\, yet computational methods for retrieving film tracks that match a target emotional state remain underexplored. This paper presents two approaches for emotion-based film music retrieval using Valence–Arousal (V–A) representations. The models are evaluated on the FME-24 dataset\, which provides time-aligned participant-annotated V–A ratings for film music excerpts. The first approach applies k-Means to handcrafted audio features\, while the second uses a VaDE model with contrastive learning to align audio and V–A embeddings. Results show that both methods capture emotion-related structure\, with the deep model enabling more flexible\, fine-grained selection. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/paper-session-3b-physiological-physical-foundations-creative-solutions/
LOCATION:Hamburg University of Technology\, Building H\, Ditze Hörsaal (H 0.16)\, Am Schwarzenberg-Campus 5\, Hamburg\, 21073\, Germany
CATEGORIES:12-05,Paper Session,Session
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260512T103000
DTEND;TZID=Europe/Amsterdam:20260512T130000
DTSTAMP:20260429T121824
CREATED:20260421T115650Z
LAST-MODIFIED:20260423T174621Z
UID:10000163-1778581800-1778590800@icmc2026.ligeti-zentrum.de
SUMMARY:Workshop | Weiyi Dai: From Objects to Soundscapes: Participatory Spatial Composition through Data-Driven Multimodal Systems
DESCRIPTION:This workshop offers hands-on experience with Full House\, a multimodal soundscape system designed specifically for participatory spatial composition. Through a real-time\, data-driven architecture\, the system transforms the manipulation of physical objects into continuous spatial sound and visual processes.\nRather than teaching a fixed compositional style\, the workshop focuses on methodological thinking: sound is understood as an ongoing systemic behavior shaped by data\, space\, and interaction\, rather than a sequence of triggered musical events. It emphasizes the participatory creative cycle of experience–data–adjustment. Participants will learn how to integrate object-based interaction\, mapping strategies\, and spatial audio technologies to construct intelligible\, inclusive\, and non-repetitive soundscapes. \n  \nRequirements\nBasic computer literacy. To fully experience and understand the structural logic of the Full House system\, participants must bring their own laptop and headphones. Installation of TouchDesigner (TD) in advance is highly recommended; on-site group installation will also be available. \n  \nWorkshop registration\nPlease register via Pretix in order to participate in the workshop. There are no additional costs.  \n  \nAbout the workshop facilitator\nWeiyi Dai is a composer\, sound artist\, and researcher working at the intersection of computer music\, spatial audio\, and multimodal interaction. He is an Associate Professor at the Shanghai Conservatory of Music\, School of Digital Media Art\, where his work focuses on soundscape systems\, participatory sound environments\, and the translation of artistic practice into technological platforms. His recent projects explore object-based interaction\, spatial sound rendering\, and data-driven sound libraries for immersive environments\, with applications in performance\, installation\, and education. 
His research has been presented in academic and artistic contexts related to computer music\, sound art\, and media technology. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/workshop-dai-weiyi-objects-soundscapes-participatory-spatial-composition/
LOCATION:Hamburg University of Technology\, Building H (H 0.02)\, Am Schwarzenberg-Campus 5\, Hamburg\, 21073\, Germany
CATEGORIES:12-05,Workshop
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260512T110000
DTEND;TZID=Europe/Amsterdam:20260512T123000
DTSTAMP:20260429T121824
CREATED:20260415T132858Z
LAST-MODIFIED:20260427T142022Z
UID:10000080-1778583600-1778589000@icmc2026.ligeti-zentrum.de
SUMMARY:Paper Session 4a: Music Notation & Representation II
DESCRIPTION:Three papers will be presented and discussed:\n  \nRodrigo Cadiz: “Composer-in-the-Loop Generation of Motivic Variants Using State-Space Models and Preference Learning”\nMost current approaches to symbolic music generation rely on large-scale deep learning models trained on massive corpora and operate exclusively on pitch and duration\, disregarding the articulations and dynamics that are fundamental to musical expression. We present a composer-in-the-loop system that addresses both limitations. A precomposed motive\, complete with pitch\, rhythm\, articulation\, and dynamics\, is modeled as a reference trajectory in a musically interpretable state space\, and variants are generated by sampling structured stochastic deviations inspired by Kalman filtering. A neural network modulates variance and structural edit probabilities based on composer preference\, learning from the composer’s own selections rather than from external data. Implemented as a simple browser-based application\, the system supports real-time audition and persistent model reuse. The approach represents a first step toward a compositional workflow in which larger musical structures are built by concatenating and varying short\, fully expressive motivic ideas. \nSolomiya Moroz\, Nicolo Merendino and Massimo Sterlino: “Co-Composing with Plants: Early Experiments in Bio-Responsive Score Design”\nThis paper presents a novel compositional system that positions plants as active\, agential collaborators. We developed a custom IoT sensor device to measure a plant’s biophysical state\, including electrical activity\, light\, and humidity\, and stream this data in real-time to a bespoke software environment. Unlike commercial bio-sonification devices that generate ambient sound\, our system translates biophysical fluctuations into the structural elements of a live musical score. The project is grounded in posthumanist and new materialist frameworks\, which embrace interspecies entanglement. 
Here\, collaboration is reconceived as a non-hierarchical network: the plant influences algorithmic score generation\, software mediates the data\, and human performers interpret the live-generated notation\, creating a continuous feedback loop. This approach challenges traditional paradigms of human-environment interaction\, proposing a relational and interdependent creative process. The system also serves as the technical foundation for an in-development chamber opera\, where musicians perform by interpreting a score generated in real time by their more-than-human partners.\nOrm Finnendahl: “DSP and the Metalevel Clamps – an integrated environment for algorithmic composition and interactive realtime performance”\nIntegrating low-level DSP operations and high-level concepts for organizing musical material has been a long-standing topic in the discussion of computer music. Although many capable DSP systems with advanced features concerning the organization\, visualisation and transformation of musical material on a higher level are in widespread use today\, they either suffer from an ongoing separation between the higher level and the DSP level or lack a satisfying infrastructure for the integration of both worlds. Clamps is a Common Lisp package built on top of Incudine for the DSP part\, CLOG for the GUI\, and other music-related CL packages\, creating a unified platform intended to combine realtime performance\, algorithmic composition and notation in a single application language and memory space. It has been successfully used for a wide range of applications\, from traditional compositional work and the production of electroacoustic music to interactive live performances. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/paper-session-4a-music-notation-representation-ii/
LOCATION:Hamburg University of Technology\, Building H\, Audimax 1\, Am Schwarzenberg-Campus 5\, Hamburg\, 21073\, Germany
CATEGORIES:12-05,Paper Session,Session
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260512T110000
DTEND;TZID=Europe/Amsterdam:20260512T123000
DTSTAMP:20260429T121824
CREATED:20260415T133949Z
LAST-MODIFIED:20260427T141403Z
UID:10000129-1778583600-1778589000@icmc2026.ligeti-zentrum.de
SUMMARY:Paper Session 4b: Physiological and Physical Foundations of Creative Systems II
DESCRIPTION:Three papers will be presented and discussed:\n  \nRolf Bader and Simon Linke: “Impulse Pattern Formulation (IPF) Brain and Larynx Model as a Co-Musician Sound Synthesis Method”\nA sound synthesis method is proposed as a variation of the Impulse Pattern Formulation (IPF) sound synthesis introduced before\, now combining a previously proposed IPF Brain model\, driven by a simple IPF of brain input stimulation and acting on a larynx IPF for vocalization. The resulting sounds produce timbre\, rhythm\, articulation\, and large-scale form with a single algorithm\, reminiscent of the complex articulated vocalizations of living beings. A systematic investigation of the Brain IPF with the input IPF shows many kinds of articulations for converging\, bifurcating\, and chaotic IPF input\, but only the chaotic input has a high likelihood of ending in a distinct sound. By varying the amount of excitatory vs. inhibitory neuron relations of the IPF Brain model\, the realistic relations found in humans are found to yield a wider distribution of articulatory possibilities. When varying the adaptation strength of the Brain IPF\, distinct sounds can often only be produced by certain values\, where some sounds can only be produced with no or with strong adaptation\, but not with medium adaptation strength values. Overall\, the relation between the Brain IPF output and its parameters is too complex to easily predict its output\, making this synthesis method a co-composer for a musician or composer\, displaying its ’own will’ – a unique sound synthesis co-musician method. \nTim Ziemer: “Mel-Frequency Cepstral Coefficients and Recording Studio Features for the Analysis of Producer-Driven Music”\nIn music information retrieval\, Mel frequency cepstral coefficients are a ubiquitous set of audio analysis features that has proven its value for practical tasks\, like automatic genre recognition or playlist generation. 
However\, in recording studio practice\, a very different set of audio analysis tools is consulted. In this study\, we utilize audio analysis tools from the recording studio for house and techno music analysis\, and compare their discriminative power and interpretability with Mel frequency cepstral coefficients. In a quantitative style classification task\, recording studio features perform slightly worse than Mel frequency cepstral coefficients. However\, they are much more explanatory when it comes to exploring differences between US and German music. The set of features is a promising tool for the research of producer-driven music.\nSimon Linke\, Rolf Bader and Robert Mores: “Designing responsive rhythms utilizing the Impulse Pattern Formulation (IPF)”\nImpulse Pattern Formulation (IPF) is an analytical modeling approach for synergetic systems motivated by research on musical instruments. It describes the nonlinear coupling of system components as the interaction between individually propagating\, exponentially damped impulse trains. Due to this general approach\, the IPF has been successfully applied to topics other than musical instruments and is hypothesized to be capable of modeling the entire process of musical perception and performance in the future. This work investigates how the IPF can be applied as a compositional tool that reproduces fundamental musical behavior by modelling the synchronization of musicians to an external rhythm. The derived model is systematically examined by analyzing its behavior when coupled to numerically designed and carefully controlled rhythmical beat sequences. Thus\, in the future\, the IPF model can be applied\, e.g.\, to replace drum machines and click tracks with more musical and creative solutions. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/paper-session-4b-physiological-and-physical-foundations-of-creative-systems-ii/
LOCATION:Hamburg University of Technology\, Building H\, Ditze Hörsaal (H 0.16)\, Am Schwarzenberg-Campus 5\, Hamburg\, 21073\, Germany
CATEGORIES:12-05,Paper Session,Session
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260512T110000
DTEND;TZID=Europe/Amsterdam:20260512T130000
DTSTAMP:20260429T121824
CREATED:20260421T122614Z
LAST-MODIFIED:20260423T185305Z
UID:10000168-1778583600-1778590800@icmc2026.ligeti-zentrum.de
SUMMARY:Workshop | Henry Windish\, Tristan Peng and Henrik von Coler: Working with MËSH: Advanced Tools and Strategies for Networked Instruments in Music and Sound Art
DESCRIPTION:This workshop introduces MËSH\, a portable\, wireless system for distributed music performance and interactive installation. Designed as a flexible alternative to infrastructure-heavy networked ensembles\, MËSH enables performers to create spatially distributed musical systems using compact\, self-contained nodes that communicate over a wireless network. \nParticipants will gain hands-on experience with the core concepts behind MËSH\, including distributed sound synthesis\, Open Sound Control (OSC)-based communication\, and decentralized performance design. We will build and perform with a distributed system\, gaining practical insight into the design and implementation of portable networked music environments. In addition to performance-oriented applications\, the workshop will also introduce approaches for audience interaction using computer vision techniques. The workshop is suitable for musicians\, composers\, and researchers interested in electroacoustic performance\, interactive systems\, and networked music practices. \n  \nRequirements\n\nNo prior knowledge\nNo materials needed – all materials will be provided\n\n  \nWorkshop registration\nPlease register via Pretix in order to participate in the workshop. There are no additional costs.  \n  \nAbout the workshop facilitators\nHenry Windish is a graduate student in the School of Music at the Georgia Institute of Technology. His work focuses on computer music systems\, audio software development\, and collaborative tools for performance and education. He contributes to the design and implementation of networked performance platforms and supports student projects involving SuperCollider\, audio networking\, computer vision\, and interactive media. Henry has been with the MËSH project since it launched at Georgia Tech in 2024. Previously\, he studied electrical engineering at Washington University in St. Louis. 
\nTristan Peng (he/him) is a Music Technology PhD student at the Georgia Institute of Technology exploring interaction design\, spatial audio\, and sonification. He previously studied at the Center for Computer Research in Music and Acoustics (CCRMA) in the Department of Music as well as the Department of Computer Science at Stanford University. A creative technologist and musician\, he aims to democratize music through technology\, creating accessible\, artful\, and interactive ways for people to experience sound. His current projects investigate how data can become a medium for participation and how immersive audio spaces can evoke emotion and understanding in ways that traditional visualizations cannot. \nHenrik von Coler is a musician and researcher\, working at the intersection of art\, science and technology. In 2024 he founded the Lab for Interaction and Immersion (L42i) at Georgia Tech’s School of Music. Before that he was the director of the Electronic Music Studio at TU Berlin and head of the Computer Music Team at the Audio Communication Group. In his research and creative work\, Henrik has explored various aspects of electronic music and musical instruments. This includes interface design\, algorithms for sound generation and experimental concepts for composition and performance. Most of his projects treat space as an integral part of music. In 2017 he founded the Electronic Orchestra Charlottenburg – an ensemble of up to 12 electronic musicians – to explore music interaction on immersive loudspeaker systems. He has since worked on ways to enhance how musicians and audiences experience spatial music and sound art. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/workshop-henry-windish-et-al-working-with-mesh-advanced-tools-and-strategies-for-networked-instruments-in-music-and-sound-art/
LOCATION:Stellwerk Hamburg\, Hannoversche Straße 85\, Hamburg\, 21079\, Germany
CATEGORIES:12-05,Workshop
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260512T110000
DTEND;TZID=Europe/Amsterdam:20260512T173000
DTSTAMP:20260429T121824
CREATED:20260421T181755Z
LAST-MODIFIED:20260428T112006Z
UID:10000185-1778583600-1778607000@icmc2026.ligeti-zentrum.de
SUMMARY:Listening Room 2
DESCRIPTION:Fixed Media | Program Overview\nInner Line\nHyewon Kim \nA Portrait of Kwesi Brookins\nRodney Waschka \nBiomimicry\nChun-Han Huang \nDear Beginner\nVadim D. Genin \nEncircled\nAdam Stanovic \nmight have seen\nTakumi Harada \nSilence\nZiyu Pang \nTemporal Shards\nRay Tsai \nWhen An Android Becomes Obsolete\nGiancarlo Alfonso \n\n  \nAbout the pieces & artists\nHyewon Kim: Inner Line\nThe self is not something fixed\, but rather a moving line that is constructed through continuous interaction and emotional attunement with others. However\, in personality disorders\, this system of connection is damaged. Mirror neurons function dimly\, and the ability to share emotions by following another’s gaze or to attune feelings toward the same object becomes dulled. These individuals either fail to perceive the distance between ‘me’ and ‘you’ or become extremely conscious of it\, unable to live together in a shared world. ‘Inner Line’ explores these internal fractures and fragments the audience’s gaze and emotions. The piece is structured around an unstable relationship between live performers\, where accumulated interactions gradually alter the percussionist’s playing rather than producing immediate disruption. Spatially\, performers and sound sources are distributed across the stage and loudspeaker field\, preventing the audience from occupying a single\, optimal listening position. It does not control or design the audience’s emotional responses\, but rather leaves them to interpret and explain for themselves what emotions they experienced. And it reveals that this imperfect sensation itself is the essence of how we connect with others. Through this work\, rather than sharpening the boundaries of the inner self\, I aim to identify where our gaze has been fixed and to unravel that rigid thinking. \nAbout the artist\nHyewon Kim (b. June 1989) is a composer based in Seoul\, South Korea. 
She treats all vibrating objects as equal musical material\, working primarily with percussion and electroacoustic media. She also conceives and produces sound-based exhibitions in which audiences directly experience the inherent vibrations of materials. She earned her B.A. and M.A. in composition from Chugye University for the Arts\, studying with Sungjun Moon\, and is an active member of the Korean Electro-Acoustic Music Society (KEAMS). Her works have been presented at international and domestic festivals\, including ICMC. \n  \nRodney Waschka: A Portrait of Kwesi Brookins\nA Portrait of Kwesi Brookins is one of a series of computer music acousmatic portraits of artists\, composers\, and friends. Dr. Brookins\, a former professor of psychology and Africana Studies at North Carolina State University\, now serves as a Vice Provost at his alma mater\, Michigan State University. This sound portrait makes use of a recording of his rich\, sonorous voice saying his name and (then) position and naming his main area of work – child welfare. The piece also uses a public domain melody from Ghana\, a country Dr. Brookins has visited often as a scholar and pedagogue. \nAbout the artist\nRodney Waschka II is probably best known for his algorithmic compositions and his unusual operas. His music has been called “astonishing” and “strikingly charismatic” by Paris Transatlantic Magazine\, “a milestone in the repertoire” by Computer Music Journal\, “fluent and entertaining” by Musical Opinion of London\, and “oddly moving” by Journal Seamus. His mentors include Larry Austin\, Robert Ashley\, Paul Berg\, Clarence Barlow\, Konrad Boehmer\, Thomas Clark\, Charles Dodge\, and George Lewis. Waschka is Director and Professor of Arts Studies at North Carolina State University. \n  \nChun-Han Huang: Biomimicry\nBiomimicry is an electroacoustic composition constructed entirely from synthetic sound sources. 
Originating as a technical exploration within the Max programming environment\, the work emulates natural sonic phenomena—including weather patterns and animal vocalizations—without the use of field recordings or sampling. By reconstructing organic textures through digital synthesis\, the piece navigates the boundary between the artificial and the natural\, creating immersive soundscapes that range from ambient subtlety to chaotic intensity. \nAbout the artist\nChun-Han Huang (b. 2002) is a composer and sound artist based in Taiwan. He is currently a graduate student majoring in Computer Music at the Institute of Music\, National Yang Ming Chiao Tung University (NYCU). His creative practice focuses on electroacoustic composition and sound design\, exploring the intersection of organic sound sources and digital signal processing. \n  \nVadim D. Genin: Dear Beginner\nThe idea behind the piece is to create a relationship between the live performer and the fixed electronics\, as if the performer were relying on the signals the electronics were sending. Altogether\, it could resemble the process of getting to know the interface of some new\, unfamiliar equipment\, with the task becoming increasingly complex. The electronics are composed of sounds extracted from videos that have been accumulating in the smartphone’s memory for years. Moreover\, the selected samples are moments during which nothing happens in the video\, that is\, the most unnecessary garbage sounds. Finding sounds that are not interesting in their usual form and using them for musical purposes is an engaging challenge for the composer. \nAbout the artist\nVadim D. Genin (born November 14\, 1993) is a composer and sound artist holding a PhD in Physics and Mathematics. He is a graduate of the Saratov State Conservatory and Saratov State University. His major projects are the video game opera “The World of Wondrous Rooms” and the documentary cantata “The Restorer”. 
Participant of festivals such as impuls (Austria)\, IDEA IWYC (Bulgaria)\, ARCo (France)\, CEME (Israel)\, ilSuono (Italy)\, Meridian (Romania)\, Teden Sodobne Glasbe Bled (Slovenia)\, Encontres de Compositors (Spain)\, reMusik.org (Russia)\, ICMC (USA). \n  \nAdam Stanovic: Encircled\nIn early 2025\, staff and students from the Sound and Music Programme\, LCC\, travelled to the London Wetlands Centre (part of the Wildfowl and Wetlands Trust (WWT)) to make recordings of the site and its surrounding areas. The project was inspired by the long-running SoundLapse project at the Universidad Austral de Chile\, where recordings of the wetlands around Valdivia have inspired ecological\, educational\, and creative research. During the course of our project\, we met with our counterparts in Chile and learned about their interests\, methods\, and research goals. We also met with staff at the London Wetlands Centre\, and heard about the rapid decline in global wetland environments and their plans to create 100\,000 hectares of sustainable wetlands in the UK. Arriving at the London Wetland Centre\, on the morning of the 11th Feb 2025\, I was immediately struck by the relative silence. For over an hour I had battled my way through the bustle of central London\, travelling by bus\, tube\, and train. And suddenly\, I was surrounded by stark winter trees. I could hear my feet crunching on the stony paths\, and a distant crow cawing. For a moment\, it felt like an oasis of calm. But as my ears acclimatised\, I realised that London\, the great metropolis\, was ever-present. I could hear it rumbling in the distance\, interrupted only by the roar of overhead planes bound for Heathrow. The more I tried to focus on the sounds of the centre itself\, the more aware I became of the monstrous city beyond… We were\, I felt\, encircled… Initially\, this piece set out to traverse city and Centre\, transitioning from the chaos of one to the calm of the other. 
As the piece developed\, however\, I started to realise that it is not simply the London wetlands that are encircled; although their geographies are vastly different\, most of the world’s wetlands are\, in one way or another\, equally encircled… \nAbout the artist\nAdam Stanović composes music with recorded sound. In recent years\, his music has drawn from both studio and location recordings\, using both digital and analogue technologies. To date\, he has won prizes\, residencies\, and mentions in over 40 international composition competitions\, had his music performed in over 700 international concerts\, and published works on 16 different albums\, including three solo albums (on the Sargasso and Empreintes DIGITALes record labels). Adam is Dean of Screen\, University of the Arts\, London. For more information\, visit: www.adamstanovic.com \n  \nTakumi Harada: might have seen\nThis piece is composed primarily of sounds obtained through field recording. Originally\, these sounds were recorded in a variety of locations and may appear to lack a clear contextual relationship. However\, through processes of transformation and manipulation\, their conventional sonic characteristics are dismantled. As a result\, latent commonalities inherent in the sound materials emerge\, generating connections among them and forming a unified continuity within the work as a whole. \nAbout the artist\nTakumi Harada. Born in 2000. From Tokyo\, Japan. Currently enrolled at Kunitachi College of Music. Began producing works in 2025. \n  \nZiyu Pang: Silence\nIn this age of information overload\, countless voices swirl around us. Yet rather than merely adding our own\, there are times when we simply wish to remain silent. Within that silence\, everything is expressed. This work employs minimal sound material\, dissecting the passage of time into numerous fragments of silence. 
It aims to strip away all superfluity\, reaching the most fundamental tranquillity of sound and spirit\, expressing the Eastern philosophical notion that ‘silence is golden’. \nAbout the artist\nZiyu Pang (b. March 29\, 2005) is a third-year undergraduate student majoring in Music Sound Direction at the Wuhan Conservatory of Music. \n  \nRay Tsai: Temporal Shards\nTemporal Shards is a short electroacoustic work that explores fragmented experiences of time within everyday perception. Through brief\, distorted frequencies shaped by compression\, reversal\, and abrupt interruption\, the piece unfolds fleeting moments emerging from a narrow temporal fissure. Temporal flow becomes unpredictable as acceleration\, suspension\, and sudden collapse coexist\, leaving behind transient perceptual traces that resist linear progression and stable structure. \nAbout the artist\nRay Tsai (Tsai Yi-Jui)\, born in Hsinchu and currently studying at National Yang Ming Chiao Tung University\, is a DJ\, music producer\, and new media artist. His work spans sound art\, electroacoustic music\, and video installation\, using experimental sonic structures to explore the relationship between technology and perception. Under the alias †Egothy†\, he is active in the underground electronic music scene\, performing noise\, deconstructed electronics\, and other avant-garde styles that shape sensory experiences oscillating between chaos and order. \n  \nGiancarlo Alfonso: When An Android Becomes Obsolete\nThis musical composition explores the concept of technological and computational obsolescence\, relating it to a reflection on human obsolescence and human labor\, whether mechanical\, artistic\, or intellectual. In a social context increasingly oriented towards efficiency and automation\, human beings find themselves in a state of constant competitive rivalry with artificial systems designed to be faster\, tireless\, and free from the weaknesses that characterize human labor. 
The composer makes use of the figure of an android that has reached the end of its life cycle\, as a metaphor for this condition. Throughout the composition\, the machine’s final moments of activity are evoked\, during which states of confusion\, anger\, despair\, and acceptance emerge. These states do not represent real emotions\, but rather the result of calculations and assessments of its own operational status\, which contributes to humanizing the android by placing it on the same level as human beings. From a compositional perspective\, the piece is mainly based on granulation techniques\, accompanied to a lesser extent by subtractive and additive synthesis. The sound material consists mainly of concrete mechanical and metallic sounds\, which are then granulated and processed\, along with cold\, more abstract\, unstable\, and unnatural synthetic sounds. The composition develops as a continuous friction between mechanical and rhythmic rigidity and sonic and timbral neuroticism\, suggesting the progressive deterioration of the physical and mental functions of the android protagonist. \nAbout the artist\nGiancarlo Alfonso (born 14 June 2000) is a composer and electroacoustic music student at the Conservatorio \n 
URL:http://icmc2026.ligeti-zentrum.de/event/listening-room-2-2/
LOCATION:Hamburg University of Technology\, Building A (A 0.14)\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:12-05,Listening Room,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260512T110000
DTEND;TZID=Europe/Amsterdam:20260512T173000
DTSTAMP:20260429T121824
CREATED:20260421T184536Z
LAST-MODIFIED:20260428T110819Z
UID:10000180-1778583600-1778607000@icmc2026.ligeti-zentrum.de
SUMMARY:Listening Room 1
DESCRIPTION:Fixed Media | Program Overview\nSeething Field: Imprint\nSam Wells \nContours of Anxiety\nZihan Wang and Wenxin Zhou \nDreams of the Jailed Refugee\nRobert Sazdov \nflusso_sonoro_1\nSebastiano Naturali \nGalactic Railroad\nYunze Mu \nIdeale Landschaft Nr. 6\nClemens von Reusner \nIncarnations\nYoungjae Cho \nInformation Body Horror\nPrimrose Ohling \nJardín de Luz\nIván Ferrer-Orozco \nNon è un atlante di traiettorie algo-siderali\nAndrea Laudante\, Paolo Montella and Giuseppe Pisano \nNor’wester\nTeerath Majumder \nOcean Reflection\nYu Qin \nrain contained\, rain contains…\nWei Yang \nSAW\nGabriel Araújo \nWild Fruits: Epilogue\nJames Harley \nInside the metal plate\nRaul Masu and Francesco Ardan Dal Ri \n\nAbout the pieces & artists\nSam Wells: Seething Field: Imprint\nSeething Field: Imprint is a turbulent interplay of memory and resonance\, written for seventh-order ambisonics fixed media. The work unfolds through the filtering\, modulation\, distortion\, and reverberation of a time-stretched recording of Jack Kerouac. Using ambisonic impulse responses of the Chapel of the Four Chaplains\, located in the basement of Temple Performing Arts Center\, Seething Field: Imprint harnesses the reverberant and sonic characteristics of the Chapel\, a space dedicated to four chaplains who sacrificed their lives on the USS Dorchester—a ship Kerouac once served on but was recalled from before its tragic sinking. The source material for Seething Field is a brief recording of Kerouac speaking “The Ocean\,” time-stretched 512 times from about 1.5 seconds to over 10 minutes. This slow unfolding of speech provides the formal and harmonic structure of the work. The stretched recording was then recursively recorded through the Chapel’s reverb\, a process akin to Alvin Lucier’s I Am Sitting in a Room\, revealing and reinforcing the shared resonant frequencies of Kerouac’s voice and the Chapel dedicated to the Four Chaplains. 
The title draws from the closing lines of Kerouac’s The Sea is My Brother\, where ‘the sea stretched a seething field which grew darker as it merged with the lowering sky.’ Seething Field mirrors this seascape\, embodying the darkness\, expansiveness\, and tension of turbulent times\, past and present. \nAbout the artist\nSam Wells (Philadelphia) is a musician and artist whose work explores breath\, space\, and embodiment across acoustic\, electronic\, and multimedia forms. As trumpeter and improviser\, he has performed internationally with ensembles including Aeroidio\, Miller/Vidiksis/Wells Trio\, and SPLICE Ensemble. His compositions have been performed widely in the U.S. and abroad. Wells is a Cycling ’74 Max Certified Trainer and an Assistant Professor of Music Technology and Composition at Temple University. \n  \nZihan Wang and Wenxin Zhou: Contours of Anxiety\nThis work draws inspiration from my own experience and understanding of anxiety\, attempting to express it through an immersive sonic work. I perceive this emotional state as an ever-shifting form: evoked\, diffused\, released\, and finally calmed. It should be clarified that this is not designed according to rigorous psychological science but rather leans towards an expression of my personal emotional experience. Throughout the piece\, the characteristics of this psychological journey (suppression\, constraint\, and release) are mapped onto frequency density\, timbre\, texture\, and the spatial tension between acoustic elements. From a compositional perspective\, timbre spatialisation functions as the primary technical and expressive strategy. This involves decomposing sound according to its spectral characteristics and distributing these components across distinct spatial locations (Normandeau\, 2009). Within this framework\, ambisonics operates as the spatialisation method\, whereby its manipulable parameters generate distinctive timbral and spatial effects. 
These elements collectively construct the work’s internal emotional narrative\, integrating spatial parameters as essential compositional materials rather than superimposed effects. \nAbout the artists\nZihan Wang (07/12/2000) is an electroacoustic music composer\, film composer\, and sonic artist. He is currently a postgraduate research student at Monash University\, Melbourne\, Australia\, where his work investigates compositional strategies for ambisonics-based environments. His research engages with Robert Normandeau’s concept of timbre spatialisation and Denis Smalley’s theory of spectromorphology\, with a particular emphasis on timbre\, spatial articulation\, and electroacoustic composition. His creative practice includes fixed-media electroacoustic works\, sound installations\, animated score composition\, and film scoring. His work has been presented at venues and conferences including TENOR 2025 and the Melbourne International Film Festival (MIFF). Wenxin Zhou is a composer specializing in electroacoustic and interactive media music. She holds a Bachelor of Composition and Music Production from the Australian Institute of Music and graduated with a Distinction in the Master of Composition for Electroacoustic and Interactive Music from the University of Manchester. Her creative practice focuses on exploring the transformation and fusion between real-world sounds and electronic soundscapes. \n  \nRobert Sazdov: Dreams of the Jailed Refugee\n‘Dreams of the Jailed Refugee’ (2023-25) is the final work of the ‘Dreams of the Jailed’ trilogy. This final instalment extends the trilogy’s engagement with statelessness and incarceration\, attending to the psycho-emotional toll borne by refugees subjected to carceral regimes — both literal and systemic — under global structures of war\, famine\, and economic precarity. 
Composed for fixed media\, the work utilises the acousmatic form to foreground disembodied sonic presences\, suggesting the persistence of memory and agency even under conditions of profound erasure. The absence of visual referents invites the listener into a mediated interiority — a sound world shaped by fragmented dreams\, longing\, and dislocation. Dreams of the Jailed Refugee proposes a mode of listening that is politically charged and ethically attuned. It seeks to destabilise hegemonic narratives of migration by offering a counter-sonic space in which refugee subjectivities are not merely represented\, but sonically enacted. In doing so\, the work aligns with broader decolonial and posthumanist currents in contemporary sonic arts practice\, where listening becomes an act of recognition and resistance. \nAbout the artist\nRobert Sazdov is a composer\, music producer\, and academic. He is currently Associate Professor in Music and Sound Design at the University of Technology Sydney (UTS)\, where he also served as Head of Music and Sound Design (2018-2024). Sazdov’s compositions and productions have received notable prizes and awards from various organizations and institutions\, including: Daegu International Computer Music Festival\, International Composition Competition Città di Udine\, ‘Pierre Schaeffer’ Competition\, Musica Nova Competition\, Sonic Arts Awards\, Bourges International Competition\, Just Plain Folks Music Awards\, and the Audio Engineering Society. Sazdov’s music has been released by Capstone Records\, Vox Novus\, Accademia Musicale Pescarese\, Society for Electroacoustic Music\, Australasian Computer Music Association\, Sonic Arts Awards and SoundLab Channel. He has undertaken residencies at the Erich-Thienhaus-Institut\, Detmold University (2012)\, The Sonic Lab\, Sonic Arts Research Centre\, Queen’s University Belfast (2007)\, and at SPIRAL – University of Huddersfield (2023). 
He was also a Visiting Research Fellow at Applied Psychoacoustics Laboratory – University of Huddersfield (2023)\, Institute of Electronic Music and Acoustics – Graz (2023)\, and The Sonic Lab\, Sonic Arts Research Centre\, Queen’s University Belfast (2023). \n  \nSebastiano Naturali: flusso_sonoro_1\nflusso_sonoro_1 is a fixed-media electroacoustic composition that explores sound as a fluid\, continuously transforming entity\, oscillating between density and transparency. The work invites the listener to perceive sound both as an uninterrupted flow and as a succession of interruptions\, turbulences\, and suspensions that shape its trajectory. The piece reflects on temporal perception and memory through the interaction between microsonic detail and large-scale form. Recorded and synthetic sound materials are intertwined and progressively transformed\, emphasizing the tension between natural sound qualities and electronic abstraction. Rhythmic structures derived from iterative transformations and masking processes generate evolving layers that gradually increase in complexity and density. A central section focuses on sustained textures in the mid–low frequency range\, combining stretched vocal layers and processed high-frequency elements to create a suspended sonic environment. In the final section\, earlier rhythmic materials re-emerge and are subjected to global processing techniques\, including digital silence and spectral degradation. The work was composed using Ableton Live and Max for Live tools. It is presented as a stereo fixed-media piece\, with a quadraphonic version also available. \nAbout the artist\nSebastiano Naturali (born 15 February 2006) is an Italian composer and guitarist working in the field of electroacoustic and electronic music. He is currently pursuing undergraduate studies in Electronic Music and Classical Guitar at the Conservatory of Potenza. 
His work focuses on sound transformation\, spatial audio\, and practice-based artistic research using digital music systems. \n  \nYunze Mu: Galactic Railroad\n“Merry meet\, merry part.” At the end of a 4-year relationship\, I started to think about why people meet if we’ll eventually separate and what the true happiness or the final destination of everyone is. I found no answer. However\, this piece is somehow the record of my thinking during that period. This piece was inspired by the book “Night on the Galactic Railroad” by Japanese author Kenji Miyazawa. In my imagination\, the Milky Way is full of steam trains. They meet\, run together\, and separate eventually. None of them knows where the destination is; they just keep running toward somewhere\, restlessly. Does it matter if you know where the destination is? Maybe not. Are all the experiences more valuable than the end? Maybe so. The only thing I know is\, I’ll always keep running just like those steam trains\, no matter what. \nAbout the artist\nYunze Mu is a composer\, sound artist and music programmer based in Louisville\, Kentucky. He currently teaches at the University of Louisville School of Music as Assistant Professor. He received a DMA (Doctor of Musical Arts) in Composition at the College-Conservatory of Music\, University of Cincinnati\, where he studied computer music with Mara Helmuth\, taught introductory courses in electronic music\, and worked on his web-based music application\, Web RTcmix. Mu holds a bachelor’s degree in music composition from the Central Conservatory of Music\, Beijing\, China. His music\, papers\, and VR installations have been shown and performed at numerous events and conferences\, such as NIME\, ICMC\, SEAMUS\, and the NYC Electronic Music Festival\, and at venues in China\, Poland\, France\, the United States\, and Korea. \n  \nClemens von Reusner: Ideale Landschaft Nr. 6\nThe manifold real (sound-)landscapes have been recurring themes in the arts over the course of time. 
Special approaches can be found in so-called “ideal landscapes”\, namely in European landscape painting of the 17th and 18th centuries. The 8-channel electroacoustic composition “Ideal Landscape No. 6” is inspired by these constructed\, calm but non-real landscapes of European landscape painting\, as well as by an etching by the German artist N.N. It is the 6th sheet of his cycle “Variations in G”\, which has no title of its own. Although the composition is not about the “setting to music” of a graphic model\, there are structural similarities between the two works. The sound material consists of abstract sounds produced with synthesisers as well as sounds calculated with Csound\, a programming language for sound synthesis\, created through additive and subtractive synthesis. \nAbout the artist\nClemens von Reusner is a composer based in Germany. His works of electroacoustic music and radiophonic audio pieces focus equally on purely electronically generated sounds and on sounds found in special places and processed in the studio. He is a member of the “Academy of German Music Authors” and has received national and international awards for his compositions. They are performed worldwide at renowned international festivals of contemporary music. \n  \nYoungjae Cho: Incarnations\nThis work is the first piece in a series that employs higher-order Ambisonics\, focusing on the creation of an immersive environment through 3D audio. The composition is based on field recordings captured using Ambisonic microphones of various orders\, which serve as the primary material for spatial composition. The work explores the relationship between recorded soundscapes and temporal contexts by constructing a narrative structure that connects imagined past events\, present bodily experiences\, and speculative future occurrences. 
Through this approach\, spatial sound is treated not only as an acoustic phenomenon but also as a medium for linking time\, memory\, and place within an immersive listening environment. \nAbout the artist\nYoungjae Cho (1990) is a composer based in Bremen\, Germany\, and Korea. His work includes solo and chamber music\, electroacoustic music\, and live electronics\, focusing on immersive spatial sound through multichannel audio systems. Presented at international festivals such as DEGEM\, ZKM\, ICMC\, and ORF Musikprotokoll\, his music received the Gold Award at the IEM & VDT Student 3D Audio Production Competition\, and he was an Artist in Residence at ICST Zurich. \n  \nPrimrose Ohling: Information Body Horror\nPrivileging patterns of information over their instantiation is a promise first conceived within the field of cybernetics and popularized through science fiction. Is disembodiment truly liberatory? Information Body Horror (IBH) was written to leverage the experience of having a body that is abstracted\, debated\, and legislated. Lived experience is twisted to force rhetoric and justify legislation\, displacing and endangering individuals and communities. IBH started with a claim of self-expression and agency through an improvised recording on modular\, resulting in ‘lived material’. The artist finds that improvisation with any instrument is an embodying experience where their expression is in its purest form. How much can you alter recordings before they lose their original meaning? Should you alter them to begin with? The artist not only manipulates recordings but forces a layer of digital alterations. The use of sampling obscures source material in the meso timescale\, and a form of algorithmic micro-sampling through pitch shifting results in violent granular fractalizations. The act of sound design and composition becomes a reenactment of how external discourse overwrites lived reality. 
The stems are then mixed to 8 channels in Max/MSP\, reflecting the propagation of external discourse through institutional channels. The creative decisions in mixing are to ensure the abstracted material is engineered to outperform and silence the lived material. This stage is where IBH is codified and written to memory. Finally\, abstractions are instantiated through 8 speakers\, a setup largely reserved for professional environments\, hindering public engagement. However\, those who can listen will have differing experiences depending on where they are in the listening environment. The setup surrounds listeners; it sonically reaches out and presses on them\, observing them. To call disembodiment in this case liberatory dismisses the lived and emboldens the abstracted material. I have found through this process that a separate disembodiment\, of self by self\, is impossible. It has only spoken to the complexity of selfhood. In the first section\, let the dense textures sink in as they swirl and oscillate in space. In the second\, synthesized voices will call out to you from their digital void. In the final section\, sounds reflect ocean waves and wind\, natural patterns reclaiming space through digital noise. The textures eventually ease and lighten\, but this is not peace. It is endurance. The tides cycle. The winds change. They continue regardless. \nAbout the artist\nPrimrose Ohling (b. 2002) is a musician\, multimedia artist\, and coder. She is drawn to rhythms\, textures\, and the dichotomy between improvisation and the precision of digital electronics. Her foundation is in jazz saxophone and improvisation. She continues to explore that side of her artistry\, letting it influence her work. She finds inspiration in reconstituting familiar sounds\, creating immersive and evolving soundscapes. Her music explores transformation\, inviting listeners into a fluid auditory experience. 
Recently\, her focus has been on modular synthesis\, where she utilizes digital modules and custom DSP algorithms\, adding further depth to her distinctive style. As a trans artist\, her work often engages with themes of embodiment\, bodily autonomy\, and the violence of abstraction. \n  \nIván Ferrer-Orozco: Jardín de Luz\nJardín de Luz (2021) is based exclusively on the Debris Project sound database\, comprising over 2\,000 samples. An algorithm using musical descriptors selects materials that are further developed through sampling and synthesis\, generating a new category of sounds termed hybrids. Conceived as a form of sonic gardening\, the work organises these materials within the acoustic space. Light operates as a metaphor for the listener’s disposition to listen\, and the garden as a heterotopic sonic space. \nAbout the artist\nIván Ferrer-Orozco (Mexico City\, 1976) is a composer\, electronic media performer and computer music designer. His music has been performed extensively in festivals and by ensembles from Mexico\, Spain\, Canada\, Argentina\, Ecuador\, Chile\, South Korea\, France\, Hong Kong\, Vietnam\, Japan\, USA\, Germany\, Ireland\, Portugal\, Italy\, and Cyprus. He has been artist in residence at the Akademie der Künste Berlin\, Schleswig-Holsteinisches Künstlerhaus\, Residencia de Estudiantes\, Camargo Foundation\, MacDowell Colony\, Djerassi\, CMMAS\, Hooyong Performing Arts Centre\, ARTos Foundation\, Ibermusicas\, I-Portonus\, and Conseil des Arts et Lettres du Québec\, among others. As an electronic media performer\, he performs as a soloist and as a sideman with artists and ensembles from Spain and abroad. In 2019 and 2024\, the Mexican government appointed him to the Sistema Nacional de Creadores de Arte\, a national programme that awards outstanding artists from all disciplines. He was a member of Neopercusion\, a Madrid-based contemporary ensemble; currently he is a member of Vertixe Sonora Ensemble and Synergein Project. 
He has been part of the Forms of Culture Research Group at the Study Programme in Critical Museology\, Artistic Research Practices and Cultural Studies of the National Museum and Arts Centre Reina Sofía. He was awarded the 2021 Best Music Award of the International Computer Music Association. \n  \nAndrea Laudante\, Paolo Montella and Giuseppe Pisano: Non è un atlante di traiettorie algo-siderali\nThis piece is neither an exploration of movement nor a calculated map. Instead\, it invites the listener to experience a sonic drift\, propelled by precise mathematical rules. Large sound masses govern the flow\, turning slowly with heavy inertia\, while sharper\, faster sounds cut through the space\, leaving vivid acoustic traces. The resulting soundscape is a complex web of intersecting paths. Musical fragments pulse rhythmically\, creating a changing geometry of sound that expands and contracts around the audience. Originally composed in Higher-Order Ambisonics\, this work was forged collectively—through shared practices\, exchanged sounds\, and the unpredictable alchemy of collaboration\, allowing the music to evolve in ways no single mind could anticipate. \nAbout the artists\ntotaleee is a trio of composers of acousmatic music and laptop performers consisting of Giuseppe Pisano-Riise (1990)\, Andrea Laudante (1993)\, and Paolo Montella (1986). In their composition work they use immersive audio technologies to create fictional environments of plausible and impossible nature. This is done through the use of multichannel synthesis techniques\, physical modeling of room acoustics\, field recordings\, and feedback loops. The trio debuted with their first piece ‘Non è un compendio di Etologia numerico-digitale’ in 2023\, and since then their works have been played in many different contexts including ICMC (2023 Shenzhen\, 2024 Seoul)\, Ircam (Paris)\, Sonosfera (Pesaro)\, ACMC (Sydney)\, Prix CIME\, WOCMAT (Taipei)\, and many more. 
They have also received awards such as the first prize at ISAC 2024 (International Sonosfera Ambisonics Competition)\, the Teresa Rampazzi Award at CIM XXIV and a Distinction per Category at CIME 2023. totaleee is the first stable project to emerge from Napoli Totale Elettronica [NTE]: an open and fluid composers’ society that embraces the collective electroacoustic works produced by affiliated artists from and/or based in Naples. Connected to the NTE collective are the DIY portable loudspeaker array VOLTA and the festival for multimedia arts Marginale. \n  \nTeerath Majumder: Nor’wester\nThe Bengali new year (mid-April on the Gregorian calendar) invariably brings with it violent storms near the Bay of Bengal. We call them Kal Baishakhi. They can wreak havoc on people’s daily lives\, damage crops\, cause floods\, and displace people. The suffering is considerable. Yet\, somehow\, knowing that these cataclysmic events are inevitable helps generate acceptance. We know what nature has in store for us\, the destruction it will cause; we also know it will pass. It is all part of the cycle. During a particularly difficult time of my life when I was reconciling with several grave losses\, the Kal Baishakhi and our acceptance of it was inspiring. It helped me see the bigger picture beyond the carnage\, the impermanence of everything\, and the strength in us to grieve and overcome loss. Nor’wester is a depiction of not just the dynamics of a storm but also our tumultuous experience of it. It is gradual and sudden\, momentary and eternal\, stationary and chaotic. These are some of the qualities that have been expressed in this piece through electronic timbres and spatialization. The piece was written using a range of generative processes that gave rise to complex timbres. The sounds were modeled using wavetable and granular synthesizers along with careful parameter randomization. No audio samples were used. 
The piece also explores different combinations of polyrhythmic patterns that fit within a fixed cycle length. Moving frequently between these combinations often creates a disorienting effect while maintaining a grid-like rhythmic quality. The sounds then went through first-order ambisonic encoding (mono\, stereo and granular). Ambisonic effects such as delay\, reverb and compression were applied during mixing. The ambisonic master was then decoded for octaphonic playback. \nAbout the artist\nTeerath Majumder is a Bangladeshi composer and technologist who works in interactive and immersive media\, computer music\, and sound design. He questions socio-sonic dynamics that are often taken for granted and reimagines relationships between participants through technological mediation. In 2025\, he created Do Not Feed the Robots\, a participatory concert involving a range of “interactive objects” and automatons. In the same vein\, his 2022 work Space Within fostered collaboration between audience members and featured musicians using his “interactive objects.” His collaboration with Nicole Mitchell resulted in the immersive sound installation Mothership Calling (2021) that was exhibited at the Oakland Museum of California. He composed and designed sound for Qianru Li’s immersive multimedia piece A Shot in the Dark (2023) that explored Asian-American identity in the face of anti-Black police violence with reference to the shooting of Akai Gurley in 2014. His compositions have been performed by Hub New Music\, Transient Canvas\, and London Firebird Orchestra among other ensembles. He often collaborates with dancers and filmmakers in various capacities and produces genre-bending electronic music for his studio projects. \n  \nYu Qin: Ocean Reflection\nOcean Reflection is a 22-minute sonic journey exploring the ocean as a vast system of hidden energy operating on temporal scales far beyond human perception. 
Drawing on field recordings from the North Sea—including both oceanic soundscapes and offshore drilling infrastructure—the electronics function as a structural\, time-based layer that is organically coordinated with the music throughout the piece. Sustained harmonic fields and slow-form processes in the music evoke the ocean’s apparent calm and depth\, while industrial sounds gradually surface\, revealing human intervention not as an immediate rupture but as a long-term disturbance embedded within marine systems. Ocean Reflection invites listeners to reflect on scale\, time\, and the asymmetry between human activity and ecological response. \nAbout the artist\nYu (Hayley) Qin is a composer and improviser\, currently a PhD candidate at UC Irvine\, whose work weaves music\, dance\, and digital technologies into immersive\, interdisciplinary experiences. Drawing inspiration from marine environments\, human psychology\, and neuroscience\, her creations explore hidden energies\, human-nature interplay\, and collective imagination. Her works have been performed across North America and East Asia. \n  \nWei Yang: rain contained\, rain contains…\n“rain contained\, rain contains…” is a fixed-media piece exploring the close relationship between everyday objects and nature. Made from sounds of bottles\, tubes\, and rain\, it invites listeners to discover sonic connections and containment between the profound and the ordinary. Bottles and resonant tubes act as vessels of containment\, symbolically setting boundaries. Yet\, by performing them\, their distinctive sounds—clinks\, taps\, and resonances—uncover hidden textures and melodic fragments that break through the physical. Rain\, a fundamental element\, unifies the soundscape. While shaping the acoustic environment\, the rain also consists of drops\, each of which can be contained and has its own unique sonic profile. 
Through careful transformation and juxtaposition\, the piece highlights the shared granular qualities that allow bottle and tube sounds to seamlessly transform into rain\, and vice versa\, blending the domestic and the natural. This sonic alchemy explores how these elements “find and contain each other\,” fostering an “ecological listening” that reveals the deep interconnectedness of our everyday objects and the natural world. Various signal-processing techniques were employed to achieve a wide range of sonic materials and spatial transformations\, including granular synthesis\, filtering\, spherical-angular decomposition/recomposition\, a custom spherical-cap order upmixer\, a custom reverb with feedback delay networks\, and more. \nAbout the artist\nWei Yang is a composer/sound artist from China. He works with different media\, through which he often contemplates the body’s role in sound production\, sound in space\, as well as the integration of various data from the performance environment (reverberation\, light\, etc.). Wei composes both instrumental and electronic music\, and often incorporates various sensors and physical computing to build performative systems that allow dynamic interaction among different actors within the system. His works have been performed internationally at various events\, including the Darmstadt Summer Festival\, Salzburg Music Festival\, BEAST Festival\, NUNC!\, ICMC\, ISAC Sonosfera\, Tonmeistertagung\, ORF Musikprotokoll\, the San Francisco Tape Music Festival\, SEAMUS\, Espacios Sonoros\, Festival Atemporánea\, Nucleo Música Nova SiMN\, Sound Image Festival\, and Ars Electronica. \n  \nGabriel Araújo: SAW\nA hyperrealistic space of bees\, motors\, and sawtooth waves. The piece focuses on the commonalities of these sounds and explores the constant transformation of materials\, between the natural and the artificial\, the real and the impossible\, the biological\, the mechanical\, and the fantastical. 
\nAbout the artist\nGabriel Araújo is a composer\, multimedia artist\, and educator whose work bridges ecological\, technological\, and cultural models through sound\, video\, and transmedia pieces. He is Assistant Professor of Music Technology at Texas A&M University – Central Texas. Gabriel studied composition with Paulo Guicheney at the Universidade Federal de Goiás (Brazil) and obtained his master’s degree from the CNSMD de Lyon (France)\, where he studied with Michele Tadini and attended the classes of Martin Matalon and François Roux. He completed his DMA at the University of Texas at Austin under Januibe Tejera\, where he served as Assistant Instructor for the Experimental and Electronic Music Studio. He received the Funarte composition prize from the Brazilian Ministry of Culture at the Biennial of Contemporary Brazilian Music\, the Rainwater Innovation Grant\, and was a finalist at the Prix CIME/ICEM and MA/IN Awards. He has collaborated with performers such as PHACE Ensemble\, Vertixe Sonora\, HANATSUmiroir\, Line Upon Line Percussion\, the Orchestra of the National Opera of Lyon\, Soundmap Ensemble\, Atelier xx-21\, Olivier Stankiewicz\, and Alice Belugou\, and has been featured at festivals such as MA/IN Festival (IT)\, Ars Electronica Forum Wallis (SWI)\, MUSLAB (ECU)\, SEAMUS (US)\, Lontano (BR)\, Plurisons (BR)\, CNMAT (US)\, Empreintes (FR)\, Electric LaTex (US)\, and Festival No Conventional (Colombia). \n  \nJames Harley: Wild Fruits: Epilogue\nWild Fruits 5: Epilogue is an electroacoustic soundscape work from the Wild Fruits cycle\, begun in 2003. The piece includes spoken text taken from Wild Fruits by Henry David Thoreau\, recorded by Jim Bartruff\, and Pilgrim at Tinker Creek by Annie Dillard\, recorded by Anne-Marie Donovan. The sounds are all based on field recordings from various locations\, processed in the studio. 
Originally conceived as an 8-channel surround-sound work\, Epilogue uses material from the other works in the cycle\, treated in new ways. \nAbout the artist\nJames Harley is a Canadian composer teaching at the University of Guelph. He obtained his doctorate at McGill University in 1994\, after spending six years (1982-88) composing and studying in Europe (London\, Paris\, Warsaw). His music has been awarded prizes in Canada\, the USA\, the UK\, France\, Austria\, Poland\, and Japan\, and has been performed and broadcast around the world. Recordings include: Neue Bilder (Centrediscs\, 2010)\, ~spin~: Like a ragged flock (ADAPPS DVD\, 2015)\, Experimental Music for Ensembles\, Drums\, and Electronics\, with Philippe Hode-Keyser (ADAPP CD\, 2022)\, and Lithophonica\, with Gayle Young (Farpoint\, 2025). As a researcher\, Harley has written extensively on contemporary music. His books include Xenakis: His Life in Music (Routledge\, 2004) and Iannis Xenakis: Kraanerg (Ashgate\, 2015). As a performer\, Harley has a background in jazz\, and has most recently worked as an interactive computer musician. \n  \nRaul Masu and Francesco Ardan Dal Ri: Inside the metal plate\nThis 5.1 acousmatic work is entirely constructed from the resonant behaviour of a single metal plate\, activated through a set of physical and acoustic excitations. All sound material is generated via controlled feedback processes\, bowing\, mallets\, and additional excitation techniques that probe the material responses and instabilities of the plate. Feedback is not employed as an effect\, but as a generative mechanism\, where the plate\, transducers\, amplification\, and acoustic space form a dynamic system capable of producing emergent sonic behaviours. The resulting sounds do not represent the plate\, but rather make audible its internal activity\, thresholds of stability\, and variations in resonant response. 
The 5.1 spatial distribution places these sounds around the audience with the intention of situating the listener inside the resonant body itself. Through an immersive aural experience\, the work proposes a form of embodied self-perception\, in which listening is no longer external to the sound object but coincides with it: the audience does not listen to the plate\, but listens as the plate\, temporarily adopting its vibrational perspective. \nAbout the artists\nRaul Masu (1992) is Professor of Electroacoustic and Multimedia Composition at the Conservatories of Trento (Italy). He holds a PhD in Digital Media from Universidade Nova de Lisboa and is adjunct faculty in Computational Media and Arts at the Hong Kong University of Science and Technology Guangzhou (China). His compositional practice includes works presented in festivals\, conferences\, concerts\, and performances in 10 countries. He has published approximately 70 papers in international venues in the fields of electronic music (NIME\, TISMIR\, Organised Sound\, Audio Mostly\, Sound and Music Computing) and interactive technologies (CHI\, DIS\, TEI). \nFrancesco Ardan Dal Ri began his musical career as an electric guitarist and thereminist and continues to collaborate with artists on both regional and international scenes\, working in live performance contexts as well as in recording studios. Over time\, his interests have progressively shifted toward contemporary and experimental music\, with a particular focus on the creative possibilities offered by software-based systems and electronic instruments\, both commercial and self-designed. He earned degrees in Electronic Music from the Conservatory of Trento with top marks. This trajectory led him to pursue a PhD at the Department of Information Engineering and Computer Science (DISI)\, University of Trento\, under the supervision of Prof. Nicola Conci\, focusing on artificial intelligence and deep learning applied to audio signals. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/listening-room-1-2/
LOCATION:Hamburg University of Technology\, Building A (A 0.18)\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:12-05,Listening Room,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260512T110000
DTEND;TZID=Europe/Amsterdam:20260512T180000
DTSTAMP:20260429T121824
CREATED:20260421T094037Z
LAST-MODIFIED:20260421T095133Z
UID:10000141-1778583600-1778608800@icmc2026.ligeti-zentrum.de
SUMMARY:Installation | Miles Friday: "Breathwork"
DESCRIPTION:Breathwork is a twelve-channel sound installation where loudspeakers become breathing bodies. Each loudspeaker is encased in an inflatable bag that swells and contracts in response to low-frequency drones\, forming a slow\, ever-shifting breath-like choreography. \nWithin this field of motion\, clouds of layered just intonation partials drift in and out of perception\, while low frequencies create a base of acoustic beating and Shepard tone-esque glissandos. By transforming the loudspeaker into a pneumatic pump\, Breathwork reimagines the loudspeaker as a tool for visual synthesis\, where vibrations in the air animate inflatables as kinetic sculptures—synthetic lungs whose movements create polyrhythms that can be both seen and heard. \nAll audio is generated live in SuperCollider\, running on two Bela Mini Multichannel Expanders. \nAbout the artist\nMiles Jefferson Friday is an artist who focuses on sound as his primary medium. Building new instruments\, composing music\, designing sound sculptures\, and creating immersive installations\, his practice invites us to reconsider how we hear and listen. Miles is currently an Assistant Professor of Digital Music at the University of Texas at San Antonio\, holds a DMA and MFA from Cornell University\, and an MA from the Eastman School of Music. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/installation-miles-friday-breathwork-2/
LOCATION:Hamburg University of Technology\, Building A (Foyer)\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:12-05,Installation
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
END:VCALENDAR