BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//ICMC HAMBURG 2026 - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://icmc2026.ligeti-zentrum.de
X-WR-CALDESC:Events for ICMC HAMBURG 2026
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Amsterdam
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20250330T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20251026T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20260329T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20261025T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20270328T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20271031T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;VALUE=DATE:20260510
DTEND;VALUE=DATE:20260528
DTSTAMP:20260513T234817Z
CREATED:20260415T101343Z
LAST-MODIFIED:20260421T200939Z
UID:10000110-1778371200-1779926399@icmc2026.ligeti-zentrum.de
SUMMARY:[Off-ICMC] Installations and Performances: "Transition | Tension | Potential"
DESCRIPTION:Between platforms 3 and 4 at Harburg railway station\, the displays of the Kunstverein Harburger Bahnhof present materials and objects in transformation. They don’t show finished forms\, but processes and contradictions.  \nThe work is occasionally complemented by sound-based and performative interventions. It explores tension\, transitions\, and what emerges in between\, between things and within situations\, and how this shapes the way we perceive the world.  \nOpen 24/7\nNo registration required  \n  \nThe Off-ICMC\nMusic is what brings us together\, even when everything else pulls us apart.\nMusic is everywhere; it is part of our everyday lives. And yet\, we’re hearing it performed live on analog instruments less and less. Instead\, it often reaches us through speakers or headphones\, as files\, from the cloud. What does music mean to you? What does it sound like today? Where does it begin\, and where does it end?\nThe ligeti center invites you to listen more closely and discover new sounds\, to explore\, experiment\, and play. This year\, ICMC HAMBURG 2026 revives an old tradition: the Off-ICMC\, a free accompanying festival curated for the general public and anyone curious about computer music.\nAll Off-ICMC events are free of charge.
URL:https://icmc2026.ligeti-zentrum.de/event/off-icmc-installations-and-performances-transition-tension-potential/
LOCATION:Kunstverein Harburger Bahnhof\, Hannoversche Straße 85 (at the train station above platform 3&4)\, Hamburg\, 21079\, Germany
CATEGORIES:10-05,Off-ICMC
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260513T070000
DTEND;TZID=Europe/Amsterdam:20260513T200000
DTSTAMP:20260513T234817Z
CREATED:20260430T152154Z
LAST-MODIFIED:20260430T152617Z
UID:10000239-1778655600-1778702400@icmc2026.ligeti-zentrum.de
SUMMARY:Installation | Healing Soundscapes (invited)
DESCRIPTION:Healing Soundscapes are developed and implemented for waiting and working areas in the University Medical Center Hamburg-Eppendorf.   \nThe installation presented at ICMC HAMBURG 2026 was developed for the waiting area of the emergency department and is played there 24/7. It is intended to create a positive atmosphere in the waiting area\, thereby making the wait more pleasant for patients.  \nThe Healing Soundscapes project is part of the interdisciplinary ligeti center\, which is funded by the Federal Ministry of Research\, Technology and Space (BMFTR) and the City of Hamburg as part of the Federal-State Initiative Innovative University.  \n 
URL:https://icmc2026.ligeti-zentrum.de/event/installation-healing-soundscapes-invited/2026-05-13/
LOCATION:Hamburg University of Technology\, Building J\, Library (Rotunde)\, Denickestraße 22\, Hamburg\, 21073\, Germany
CATEGORIES:Installation
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260513T090000
DTEND;TZID=Europe/Amsterdam:20260513T103000
DTSTAMP:20260513T234817Z
CREATED:20260415T133500Z
LAST-MODIFIED:20260511T155501Z
UID:10000082-1778662800-1778668200@icmc2026.ligeti-zentrum.de
SUMMARY:Paper Session 5b: AI\, Machine Learning & Pedagogy
DESCRIPTION:Session Chair: Rodrigo Cadiz\n\nPaper abstracts\nJeff Kaiser and Gregory Taylor: “Building Loopers: A Pedagogical Framework for Teaching Creative Software Design Through Iterative Tool Construction in Max\, gen~\, and RNBO”\nThis paper introduces the ideas behind our open-access project “Building Live Loopers in Max”. The project presents a hybrid pedagogical and technical framework in which students learn signal processing concepts by constructing live-looping tools in Max\, gen~\, and RNBO. By engaging with buffer operations\, timing structures\, playback manipulation\, and parameter mapping\, students develop technical fluency and musical understanding simultaneously. We introduce a sequence of modular\, step-by-step looper designs\, a color-coded instructional method for visualizing patcher development\, and a cross-environment workflow that reinforces transferable programming habits. Our coursework is designed to be sufficiently open-ended that students\, while grounded in familiar musical contexts\, are encouraged to exercise curiosity and explore creative directions beyond the methods presented. Drawing on Dehaene’s work on curiosity and Eagleman’s writing on relevance\, the design aims to engage intrinsic motivation and support students in forming novel connections and actively experimenting with musical ideas. This approach positions looper construction as a bridge between creative music-making and computational thinking\, supporting both performance and pedagogical outcomes. \nNicolas Brochec and Jean-Louis Giavitto: “Automatic Following of Flute Playing Techniques for Real-Time Mixed Music: A Case Study with Antescofo and ipt~”\nThis paper investigates how real-time recognition of instrumental playing techniques can extend automatic score following beyond the limits of pitch-based alignment. 
While systems such as Antescofo provide robust and largely plug-and-play score following\, their listening model is primarily designed for stable\, pitched events aligned with a fixed symbolic score. This makes them difficult to adapt to extended techniques\, unpitched sounds\, and musical forms involving partial improvisation or open notation. To address these limitations\, we explore a hybrid approach that combines multiple listening machines with complementary capabilities and allows dynamic switching between them during performance according to the musical context. Specifically\, we integrate Antescofo with ipt~\, a real-time playing technique recognition system based on lightweight machine learning models. We focus on the integration of real-time instrumental playing technique recognition as a means to enrich the listening process and support technique-aware navigation of the score. We evaluate this approach on the case of extended flute techniques\, assessing both the feasibility of technique-aware following and the trade-off between system generality and performance. Results suggest that learning-based listening modules provide a practical compromise: they improve robustness for specific techniques while preserving much of the plug-and-play character supporting multiple works and performers. The results highlight a promising balance between generality\, specificity\, and performative robustness.\nColton Arnold\, Zhaohan Cheng and Ajay Kapur: “AI Framework for Dynamic Robotic Instrument Calibration”\nThis paper presents a data-driven calibration framework for robotic musical instruments based on a hybrid ensemble model that combines K-nearest neighbors (KNN) and a multi-layer perceptron (MLP). KNN anchors predictions to recorded acoustic measurements\, while the MLP enables nonlinear generalization and smooth interpolation across the instrument’s playable range. 
A distance-dependent blending strategy integrates the two models\, improving consistency across sparse and dense data. The proposed approach produces stable and repeatable calibration estimates for both pitched and non-pitched instruments\, outperforming standalone models across a range of sampling conditions. This work establishes a scalable foundation for automated calibration in robotic musical systems.\n 
URL:https://icmc2026.ligeti-zentrum.de/event/paper-session-5b-ai-machine-learning-pedagogy/
LOCATION:Hamburg University of Technology\, Building H\, Ditze Hörsaal (H 0.16)\, Am Schwarzenberg-Campus 5\, Hamburg\, 21073\, Germany
CATEGORIES:13-05,Paper Session,Session
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260513T090000
DTEND;TZID=Europe/Amsterdam:20260513T103000
DTSTAMP:20260513T234817Z
CREATED:20260415T134754Z
LAST-MODIFIED:20260512T074538Z
UID:10000130-1778662800-1778668200@icmc2026.ligeti-zentrum.de
SUMMARY:Paper Session 5a: Novel Concepts in 3D Audio
DESCRIPTION:Session Chair: Serge Lemouton\nPaper abstracts\nLaura Call Gomez\, Gabriel Decker\, Jayson Faupel\, Aditya Rajesh Pawar\, Jacob Westerstahl and Henrik von Coler: “BIKES: A Mobile Networked Music Instrument in Interdisciplinary Research and Education”\nThis paper describes how a mobile\, networked instrument for music and sound art is used as a platform for interdisciplinary research and creative practice in higher education. The long-term project\, BIKES\, provides students with the opportunity to engage with real-world challenges by combining music technology\, experimental composition\, and industrial design. Project activities include interactive installations and sound rides\, iterative development of hardware and software\, as well as the design and fabrication of a new prototype for exhibition contexts. After its first year\, BIKES demonstrates how the multifaceted nature of a modular instrument can facilitate collaborative work and increase the visibility of student-led research and development.\nTeresa Carrasco: “Sonic Urgency: Exploring Perceptual\, Sociopolitical\, and Participatory Dimensions of Spatial Listening”\nThis paper explores spatial listening as a multidimensional practice linking perception\, phenomenology\, and sociopolitical discourse. It outlines psychoacoustic foundations of sound localization and traces key listening theories—from reduced listening and acoustic ecology to spectromorphology and spatial dramaturgy—framing listening as an active\, interpretive process. 
It then examines phenomenological\, participative\, and political aspects\, proposing spatial listening as an embodied\, situated\, and relational practice\, and calls for expanded listening models suited to contemporary sonic environments.\nMauro Cantonetti\, Paolo Malpeli\, Giuseppe Rizzo\, and Alessandro Anatrini: “MetaConcert: A Shared VR Audio-Visual Experience Model Reducing User Isolation Through Synchronized 360 Video on HMDs and HOA Playback on a Multichannel Dome”\nWe introduce MetaConcert\, a system that integrates a VR head-mounted display with multichannel loudspeaker-dome audio. It employs a dedicated workflow for 360° video capture\, Ambisonic audio recording\, and dome-oriented rendering. A key component is a synchronization solution using OSC communication between the WebXR video player and SuperCollider for audio rendering. The system renders third-order Ambisonics\, decoded for a multichannel in-room speaker array. Synchronizing 360° video playback in WebXR with multichannel audio in SuperCollider via OSC messages enables a fully immersive\, headphone-free experience\, making it ideal for shared listening environments. Framed within the concepts of presence and plausibility [1]\, we discuss how dome-based listening reduces the isolation typical of HMD use and fosters scenarios of enhanced social presence.
URL:https://icmc2026.ligeti-zentrum.de/event/paper-session-5a-novel-concepts-in-3d-audio-including-wireless-multi-channel-audio-as-well-as-physical/
LOCATION:Hamburg University of Technology\, Building H\, Audimax 1\, Am Schwarzenberg-Campus 5\, Hamburg\, 21073\, Germany
CATEGORIES:13-05,Paper Session,Session
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260513T110000
DTEND;TZID=Europe/Amsterdam:20260513T123000
DTSTAMP:20260513T234817Z
CREATED:20260415T134319Z
LAST-MODIFIED:20260511T160523Z
UID:10000084-1778670000-1778675400@icmc2026.ligeti-zentrum.de
SUMMARY:Paper Session 6b: AI & Machine Learning
DESCRIPTION:Session Chair: Nicola L. Hein\n  \nPaper abstracts\nGiovanni Roma and Alba Francesca Battista: “Supervised Memory: How Machines Can Preserve What We Cannot Hold”\nThis paper presents an AI framework for preserving electroacoustic works threatened by technological obsolescence and vanishing performance knowledge. Through supervised annotation as “composition of comprehension\,” we transform machine learning into active interpretation rather than passive archiving. Our approach employs a two-level vocabulary system distinguishing universal from composer-specific notational elements\, enabling systematic knowledge transfer across diverse repertoires. We ground the framework in one implemented reconstruction—Jonathan Harvey’s Ricercare una melodia from incomplete documentation—and outline two further experimental fronts: analyzing context-dependent notation in Stockhausen’s Solo\, and exploring annotation possibilities in Boulez’s spatial coordinates. The methodology treats annotation not as neutral transcription but as interpretive translation\, where each label embeds aesthetic decisions and performance practice. Harvey’s implementation revealed how editorial simplification between 1984 and 2003 editions created cascading performance challenges\, validating our recovery of embedded procedural knowledge. The framework progresses from mechanical reproduction through systematic reading to conscious reactivation\, establishing foundations for computational preservation while acknowledging fundamental limits. We argue that effective preservation requires not static archives but living traditions maintained by transparent\, contestable machine interpretations. This positions AI-based complements as participants in musical preservation rather than mere repositories\, preserving both structural relationships and the reasoning patterns that animate them. 
\nAbhirup Saha\, Hans-Ulrich Berendes\, Meinard Müller\, and Ben Maman: “Snapping Matters: Context-Aware Onset Refinement for Automatic Music Transcription”\nPrecise note-level annotations are critical for training automatic music transcription (AMT) systems\, in particular note-onset labels\, which form a core component of many recent AMT systems. However\, high-quality annotations for real-world recordings are scarce. Sequence-level score–audio alignment methods such as dynamic time warping provide only coarse correspondence\, making a local refinement step necessary. This refinement step\, known as snapping\, adjusts aligned score onsets using peaks in a neural onset posteriorgram and often determines whether weakly aligned score–audio pairs become usable training data at all. Despite its practical importance\, snapping is typically treated as a simple post-processing heuristic and implemented with greedy local decisions. We present a systematic analysis of snapping strategies for training instrument-agnostic transcribers\, demonstrating that snapping is essential for learning from weakly aligned data. Building on this\, we formulate snapping as a per-pitch assignment problem and solve it via bipartite graph matching\, yielding context-aware onset decisions under overlapping refinement windows and uncertain initial alignments. Extensive cross-dataset experiments across piano\, chamber\, and orchestral recordings show improved onset alignment and transcription accuracy over greedy snapping\, with gains increasing for wider snapping windows and coarser initial alignments. Qualitative examples are provided on our project page: https://abhirupsaha8.github.io \nYu Foon Darin Chau and Andrew Horner: “Classical Music Mashup System and Compatibility Heuristics”\nWe investigate symbolic classical music mashups and introduce a retrieval-based pipeline for generating them. 
Unlike audio-domain mashups\, symbolic mashups offer perfect voice isolation and allow for post-generation reinterpretation of tempo\, dynamics\, and instrumentation. While prior work in audio mashups emphasises harmony\, rhythm\, and balance\, symbolic mashups in classical repertoires remain underexplored and lack clear compatibility heuristics. To this end\, we conduct controlled listening tests on classical music excerpts to isolate factors shaping perceived compatibility. Results indicate that effective mashups should respect the recognizability of motivic materials and the underlying cadential logic\, and should be presented polyphonically. Around these findings\, we designed a symbolic mashup pipeline for classical piano music that maximises pairwise piece compatibility. We discuss implications and limitations for algorithmic composition\, pedagogical tools\, and future extensions to broader styles\, longer forms\, and richer evaluative methodologies. \n 
URL:https://icmc2026.ligeti-zentrum.de/event/paper-session-6b-ai-machine-learning/
LOCATION:Hamburg University of Technology\, Building H\, Ditze Hörsaal (H 0.16)\, Am Schwarzenberg-Campus 5\, Hamburg\, 21073\, Germany
CATEGORIES:13-05,Paper Session,Session
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260513T110000
DTEND;TZID=Europe/Amsterdam:20260513T123000
DTSTAMP:20260513T234817Z
CREATED:20260415T140923Z
LAST-MODIFIED:20260513T102603Z
UID:10000131-1778670000-1778675400@icmc2026.ligeti-zentrum.de
SUMMARY:Paper Session 6a: Immersive Media & 3D Audio
DESCRIPTION:Session Chair: Henrik von Coler\n\nPaper abstracts\nFelipe Otondo and Leonardo Santos: “Listening Across Spaces: Perceptual Evaluation of an Ambisonics-Based Sound Installation”\nThis paper explores how immersive listening to natural soundscapes is shaped by the spaces in which it unfolds. Using second-order Ambisonics field recordings rendered through a third-order Ambisonics decoding scheme\, five natural soundscape excerpts were reproduced over calibrated 16-loudspeaker Genelec arrays in two contrasting venues: an acoustically controlled laboratory and an untreated museum gallery. Listener evaluations addressed presence\, envelopment\, timbral clarity\, stability and depth using a perceptual framework grounded in recent immersive audio literature. The results reveal distinct perceptual profiles across venues\, where spatial precision emerges in controlled conditions and reverberation contributes to a more diffuse sense of overall immersion in the museum. The study highlights immersion as a situated experience shaped by sound content\, room acoustics\, and reproduction conditions\, with implications for artistic sound installations and exhibition design.\nYu Chia Kuo: “Tree Rings: Ecological Memory and Linguistic Traces in an Immersive Dome Composition”\nTree Rings is a site-specific dome composition that weaves ecological recordings\, linguistic material\, and generative 3D forms into a layered audiovisual environment. Granular and spectral processing emphasize microscopic textures\, while VBAP spatialization and text-to-3D diffusion produce concentric structures that expand toward landscape-scale processes. Treating environmental sound and language as parallel acoustic and cultural archives\, the work frames ecological memory as an immersive\, temporally scaled experience\, offering a metaphor-driven approach to spatial sound and generative visual design within research-creation contexts.\n 
URL:https://icmc2026.ligeti-zentrum.de/event/paper-session-6a-immersive-media-3d-audio/
LOCATION:Hamburg University of Technology\, Building H\, Audimax 1\, Am Schwarzenberg-Campus 5\, Hamburg\, 21073\, Germany
CATEGORIES:13-05,Paper Session,Session
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260513T110000
DTEND;TZID=Europe/Amsterdam:20260513T130000
DTSTAMP:20260513T234817Z
CREATED:20260421T123032Z
LAST-MODIFIED:20260423T175331Z
UID:10000169-1778670000-1778677200@icmc2026.ligeti-zentrum.de
SUMMARY:Workshop | Dennis Scheiba & Julian Rohrhuber: User = Developer: How to contribute to SuperCollider development
DESCRIPTION:SuperCollider\, being a free and open-source project\, stands in contrast to non-open projects in that it doesn’t impose technical and legal barriers to users accessing and modifying its inner workings.\nRather than a strict separation\, this allows for a gradient between user and developer. There are still\, however\, technical and social complexities involved in contributing to such a big project\, which this workshop seeks to address.\nIt will guide participants through the landscape of the SuperCollider project\, easy passages as well as dense forests\, and show how to participate in development\, at all levels\, with or without coding.\nWith this workshop\, we hope to invite participation and spread knowledge about the interesting experience of maintaining and extending a widely used computer music language. \n  \nRequirements\nNone \n  \nWorkshop registration\nPlease register via Pretix in order to participate in the workshop. There are no additional costs.  \n  \nAbout the workshop facilitators\nDennis Scheiba is an artistic and research associate at the Robert Schumann Hochschule Düsseldorf. He works as a composer\, live coder\, and audio-visual artist with a special interest in multi-spatiality and streaming technologies. He has performed at MIT\, Johns Hopkins University\, ZKM\, KUG\, and IRCAM. Scheiba has a background in mathematics and machine learning and currently researches audio-only VR environments\, JIT compilation in DSP environments\, WebRTC streaming\, and the packaging of audio projects. He has co-managed the two most recent releases of SuperCollider\, versions 3.14 and 3.15. \nJulian Rohrhuber works in contemporary media theory that bridges philosophy\, informatics\, anthropology and art. As a professor at the Robert Schumann Hochschule in Düsseldorf\, he has established the subject of epistemic media\, which aims to ground research independent of the distinction between science and art. 
For the last two decades\, he has been involved in the development of computer languages for experimental programming and music informatics\, such as SuperCollider and TidalCycles. His publications are concerned with diverse topics such as the history of programming and mathematics\, patents and algorithms\, art theory\, philosophy of science\, live coding\, sonification\, and realism in documentary film. Recent texts address the philosophy of time\, algorithmic causality\, and the citizenship of abstract entities. \n 
URL:https://icmc2026.ligeti-zentrum.de/event/workshop-dennis-scheiba-julian-rohrhuber-user-developer-how-to-contribute-to-supercollider-development/
LOCATION:ligeti center\, 9th floor\, Veritaskai 1\, Hamburg\, 21079\, Germany
CATEGORIES:13-05,Workshop
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260513T110000
DTEND;TZID=Europe/Amsterdam:20260513T173000
DTSTAMP:20260513T234817Z
CREATED:20260421T182305Z
LAST-MODIFIED:20260428T114812Z
UID:10000186-1778670000-1778693400@icmc2026.ligeti-zentrum.de
SUMMARY:Listening Room 2
DESCRIPTION:Fixed Media | Program Overview\nCrown Shyness\nJeonghun Hyun \nEntomology#2\nThanos Polymeneas-Liontiris \nJetlag – Time Difference\nRay Tsai \nLazy whirls of glow\nJuan J.G. Escudero \nPumma\nEmilio Casaburi \nQuivering Silk\nYi-Hsien Chen \nSonic Echoes of Ink\nPingting Xiao \nThe Luminosity of the Yugen Mist\nXiaoyu Su \nVocalise\nPak Hei Leung \n  \nAbout the pieces & artists\nJeonghun Hyun: Crown Shyness\nThis work is inspired by crown shyness\, a natural phenomenon in which the canopies (branches and leaves) of trees grow while maintaining a consistent distance without touching one another. Although individual trees exist separately\, they are perceived as a single forest when viewed from a distance. In this sense\, the piece musically explores the idea that individuals and others may feel divided due to conflict and discord\, yet from a broader perspective\, they form a harmonious whole. The diverse sounds used in the work reflect the characteristic behavior of tree canopies that avoid encroaching upon one another’s space. Each sound is therefore designed to occupy a distinct position within the stereo field\, maintaining its own spatial identity. Additionally\, just as a tree extends from thicker branches to progressively finer ones\, the sonic material evolves from dense\, large-scale textures into increasingly subdivided and delicate sounds. This spatial and morphological development metaphorically reveals both the independence of individual entities and their coexistence within a larger structural framework. Through these compositional strategies\, the work seeks to musically reflect on human relationships formed within a social context\, and to contemplate the sense of distance\, respect\, and attitudes of coexistence required within those relationships. \nAbout the artist\nJeonghun Hyun is a composer specializing in electronic music\, with a keen interest in the convergence of acoustic tradition and technological innovation. 
Having studied under Jinwoong Kim and Shinae Kang\, he explores the intersection of instrumental performance and digital sound processing. His works often employ custom programming and real-time sound manipulation techniques. Recently\, his creative research was recognized at ICMC 2025 in Boston\, where his work was presented. Committed to expanding the boundaries of contemporary music\, Hyun continues to refine his expertise in the evolving landscape of electroacoustic composition. \n  \nThanos Polymeneas-Liontiris: Entomology#2\nEntomology#2 (2025) is an acousmatic work dedicated to the secret life of insects. It follows Tettix-A’ (2022 – inspired by the song of cicadas) and Entomology#1 (2024 – an acousmatic miniature based on the voice of a single imaginary insect). Entomology#2 invites the ear to a bustling landscape: an imaginary pond\, a hyper-realistic dense forest\, a place where countless microscopic flying voices weave their own world. The material of Entomology#2 derives from recordings of a prepared grand piano (in German Flügel\, in Dutch Vleugelpiano: i.e. piano with wings). The piece is based on the navigation of a corpus made of these recordings\, processed to such an extent as to be stripped of any obvious piano connotation: metaphysically\, the notion of “wings” is the only association kept from those original prepared piano recordings. The corpus of these processed sounds unfolds into a layered and multi-dimensional field\, inspiring an exploration similar to the spatial-sonic exploration of a field recording. The result of such explorations is a soundscape filled with densities\, like countless flying beings swarming and coexisting. Entomology#2 invites the listener to immerse themselves in a synthetic\, living-like system of microscopic organisms\, where communication\, competition\, and adaptation unfold collectively in an endless dance. 
\nAbout the artist\nThanos Polymeneas-Liontiris is a composer\, sound artist and Assistant Professor (Music & Interactive Media)\, at National & Kapodistrian University of Athens\, Greece. His practice comprises computer-aided compositions\, interactive audiovisual installations\, immersive audiowalks\, generative art\, interactive music for dance\, theatre and intermedia performances. He has obtained a BA in Double Bass\, and a BA in Electronic Music Composition from Rotterdam Conservatoire\, while following courses at the Institute of Sonology (Royal Conservatoire of The Hague) and at IRCAM. He completed two MA degrees\, both with distinction: in Art and Technology (Polytechnic University of Valencia) and in Creative Education (Falmouth University). In 2019 he concluded his PhD research aided by a fully funded CHASE-AHRC scholarship at University of Sussex. He has taught in Higher Education since 2011 (Falmouth University\, University of Sussex\, University of Brighton\, Ionian University and National & Kapodistrian University of Athens). His works have been presented\, among others\, at Tectonics Festival\, Modern Body Festival\, Athens and Epidaurus Festival\, Holland Festival\, Todays Arts\, Attenborough Centre\, Kalamata International Dance Festival\, The Athens Concert Hall\, Onassis Foundation\, Biennale of Young Artists from Europe and the Mediterranean. His publications encompass subjects related to Pedagogy\, Technology and Aesthetics. \n  \nRay Tsai: Jetlag – Time Difference\nJetlag – Time Difference is a fixed-media electroacoustic work that explores the relativity of time perception and relational temporality through sound. The piece juxtaposes three overlapping yet unsynchronized temporal systems: biological time represented by heartbeats and bodily rhythms\, social time shaped by daily routines and notifications\, and mechanical time articulated through clock mechanisms and pulses. 
Through processes of temporal displacement\, fragmentation\, reversal\, and spectral transformation\, these functional temporal references gradually lose their stability and dissolve into textural sonic states. Beyond individual perception\, the work also reflects intersubjective temporality—how differing rhythms and internal clocks remain subtly connected through traces of memory\, anticipation\, and interaction\, even under temporal dislocation. Rather than resolving into synchronization\, Jetlag – Time Difference presents time as a fragile\, shifting network of relations that persists in misalignment. \nAbout the artist\nRay Tsai (Tsai Yi-Jui)\, born in Hsinchu and currently studying at National Yang Ming Chiao Tung University\, is a DJ\, music producer\, and new media artist. His work spans sound art\, electroacoustic music\, and video installation\, using experimental sonic structures to explore the relationship between technology and perception. Under the alias †Egothy†\, he is active in the underground electronic music scene\, performing noise\, deconstructed electronics\, and other avant-garde styles that shape sensory experiences oscillating between chaos and order. \n  \nJuan J.G. Escudero: Lazy whirls of glow\nThe combinatorial structure of a triangulated dodecahedral three-manifold is used in the formal design of this work. This type of space is considered for modelling the spatial structure of multi-connected universes. The basic sound materials were recorded on an acoustic piano which\, due to certain circumstances\, had remained silent for a long time. \nAbout the artist\nJuan J.G. Escudero is a composer and researcher based in Madrid (Spain). He received his musical education at several centres and conservatoires and studied composition with Francisco Guerrero Marín in Madrid. He has carried out research and teaching activities in mathematics\, physics and music technology at various universities. 
The results of his research in the fields of algebra\, geometry and astronomy\, published in scholarly journals and books\, have been some of the main guides to formalization procedures. Harmonizations of aperiodic ordered temporal sequences\, which form the basis of the formal and rhythmic structures\, play a major role in several of his instrumental and acousmatic works. More recent formal approaches are related to the analysis of the topological invariants of aperiodic tiling spaces and the construction of singular hypersurfaces in algebraic geometry. Extramusical influences are connected mainly with philosophy\, poetry and visual arts. \n  \nEmilio Casaburi: Pumma\nThe past is no longer forbidden: through technology\, lost relationships and forgotten spaces can be revisited. ‘Pumma’ seeks to narrativize this experience\, drawing on old VHS recordings of my family as its sonic foundation. The piece unfolds a journey across space and time\, in search of renewed connections with lost ones. It blends acousmatic syntax\, sonic imagery\, and textual fragments in an attempt to harness the full potential of the acousmatic condition to project a narrative of memory\, distance\, and re-discovery. \nAbout the artist\nEmilio Casaburi (b. 1999) is a sound artist and composer from Italy. His artistic output includes acousmatic compositions\, field recordings\, audiovisual works\, and installations. He graduated in Electronic Music under the guidance of Alessandro Cipriani in Frosinone and is now studying at the Institute of Sonology in Den Haag. \n  \nYi-Hsien Chen: Quivering Silk\nQuivering Silk is a fixed-media electronic work\, currently realized in stereo\, with the possibility of diffusion in an eight-channel format. All sound materials in the piece are captured from the Chinese 21-string zither (guzheng). The guzheng is capable of producing a rich spectrum of timbres through a wide variety of plucking\, sliding\, glissando\, and sweeping techniques. 
In this work\, these instrumental sounds are subjected to electronic transformation\, layering\, and distortion\, gradually unfolding into large-scale waves of sound intended to immerse the listeners. Within these sonic waves\, traces of identifiable guzheng techniques occasionally emerge; at other moments\, however\, the causal relationship between hand gesture and sound becomes ambiguous. This shifting perceptual boundary invites the listeners to reimagine the instrument beyond its physical constraints and to imagine new possibilities for its vibrational behavior. The title Quivering Silk refers to the vibration of the guzheng strings\, which is not limited to the physical vibration produced by finger gestures\, but also refers to an electronic vibration shaped through digital sound processing and transformation. \nAbout the artist\nYi-Hsien Chen is a Taiwanese composer. He has received degrees from Taipei National University of the Arts and National Taiwan Normal University. In 2016\, he began pursuing a Ph.D. in music theory and composition at UC San Diego\, where he studied with Katharina Rosenberger\, Chinary Ung\, and Lei Liang\, his advisor and committee chair. He was awarded a full five-year scholarship from UC San Diego. He is currently teaching at the Department of Music at National Sun Yat-sen University. Chen composes in a wide variety of musical styles and engages in multi-disciplinary collaboration. He has created music spanning various instrumentations\, including orchestra\, ensemble\, electroacoustics\, theater music\, and soundtracks. 
His works have been selected and performed by renowned ensembles and at festivals\, such as the Mivos Quartet in “June in Buffalo\,” the National Taiwan Symphony Orchestra in the competition “Voice of the New and Brilliant – The Sound of Formosa\,” and the “Weiwuying International Music Festival.” \n  \nPingting Xiao: Sonic Echoes of Ink\nThis composition\, Sonic Echoes of Ink\, explores the theory of embodied music cognition. It focuses on the relationship between body movement\, piano performance\, and sound manipulation. All sound materials are recorded from the piano\, including traditional keyboard playing and string plucking\, constructing sound traces reminiscent of ink painting through variations in single notes\, chords\, and resonant timbres. Additionally\, the work incorporates EMG (electromyography) sensor data\, collecting changes in muscle tension in the performer’s forearm and mapping the data to sound parameters. This allows the tension\, release\, and movement continuity to directly participate in the generation of musical structure. In this way\, music is no longer merely the result of being “played”\, but a process of co-writing by the body\, movement\, and sound. \nAbout the artist\nPingting Xiao is a PhD student at the University of Manchester. She is interested in how embodied music cognition interacts with cultural heritage and creative technology to create motion-responsive performance and visual works. She is dedicated to integrating Chinese traditional culture with music interaction\, exploring how ancient cultural elements can be harmonized with modern interactive technologies. She also seeks to inspire and lead a community of like-minded composers in China\, encouraging collaboration and participation in creative endeavours. 
\n  \nXiaoyu Su: The Luminosity of the Yugen Mist\n“The Luminosity of the Yugen Mist” is a fixed media (acousmatic) work that constructs a surreal sonic architecture from the organic timbres of flute\, bamboo flute\, and piano with extended techniques. Divorced from live performance\, the piece focuses entirely on the spectral transformation and spatial reshaping of these acoustic sources. Rather than depicting a clear narrative\, the music remains suspended in an unstable perceptual state—sound is continuously perceived but never fully resolved. Informed by the Japanese aesthetic of Yugen (subtle grace and mysterious depth)\, the work approaches sound as something partially concealed rather than fully revealed. The recorded materials function as indistinct traces of the physical world\, heard through a sonic haze rather than presented as fixed representations. Through granular processing and spectral resynthesis\, these concrete sounds are gradually destabilized\, dissolving into a luminous\, synthetic texture. The piece does not seek a final resolution; instead\, it oscillates between obscurity and clarity\, leaving the boundary between the acoustic and the electronic deliberately ambiguous. \nAbout the artist\nXiaoyu Su is a composer and researcher currently based in Japan. He is a first-year Master’s student in the Composition Course at the Graduate School of Showa University of Music\, where he also works as a Teaching Assistant for Harmony. In March 2025\, he graduated with honors from the Digital Music Department of Showa University of Music (Junior College Division). His musical training began with electronic organ studies at the age of five\, followed by pop vocal training during adolescence. He holds a bachelor’s degree from the School of Media and Design at Zhejiang University Ningbo Institute of Technology. 
Prior to relocating to Japan in 2022\, he worked as a music teacher at Ninghai County Experimental Primary School while engaging in sound design and music production activities in China. His recent works focus primarily on electronic and acousmatic music and have been presented at events including the Showa Digital Music Live (2023\, 2024)\, the 28th Composition Concert (2024)\, and the Inter-College Sonic Arts Festival (ICSAF) 2024. In 2024\, he was selected as one of two presenters for the Graduation Concert at Showa University of Music. He has studied composition under Daisuke Okamoto and Masatsune Yoshio. Currently\, his practice centers on the creation and academic research of electronic music. \n  \nPak Hei Leung: Vocalise\nVocalise (2026) is an exploration of the meaning of the voice in the digital age. The piece utilizes SoundID VoiceAI\, an AI voice changer\, to generate audio from the software’s vocal and instrumental packages\, based on my recorded vocal input. What is heard in the work is a compilation of human vocal recordings\, as well as various snippets of audio clips generated from the tool as a response to the recordings. The recorded vocal clips\, varying between roughly 10 and 40 seconds\, include free improvisation that explores extended vocal techniques (e.g.\, vocal fry and mouth sounds)\, as well as some gestures or phrases. After generating them\, I selected specific snippets and clips to compile a musical work. In addition to the quality of the sounds\, I am interested in moments that sound particularly digital: either there are artifacts or glitches in the sounds\, or what is being “sung” or “played” is almost impossible for a human performer. Various Digital Signal Processing tools\, such as reverb and tremolos\, are added where suitable. As snippets of human voice are integrated as part of the piece alongside AI-generated audio\, it is expected that the audience might not be able to distinguish between the two. 
This resonates with the artistic goal of the piece\, which is to explore the voice – something I perceive as highly connected to one’s identity – in an increasingly digitalized world. This piece also explores possibilities of using the voice (or audio signals in general) to form musical gestures and shapes in different timbres with the aid of AI tools like this. Remarks: according to Sonarworks’ website\, voices that are used in SoundID VoiceAI are from artists who voluntarily worked with them and were compensated. \nAbout the artist\nThe compositions of Pak Hei (Alvin) Leung have been presented in various places in North America\, South America\, Europe and Asia. His music has been performed by music groups including Mivos Quartet\, Transient Canvas\, Rosetta Contemporary Ensemble\, Trio Mythos\, Duo Antwerp and Hong Kong Chinese Orchestra. His works have been featured at events including the ICMC\, International Symposium of New Music\, International Review of Composers\, Seoul International Computer Music Festival\, MUSLAB\, SEAMUS National Conference\, CMS National Conference\, SCI National Conference\, NSEME\, Electric LaTex Festival\, VIPA Festival\, June in Buffalo\, CMS Great Lakes Conference\, EMM and Hong Kong Contemporary Music Festival. Alvin is currently a PhD candidate in Music Composition at the University of North Texas. He received a Master of Music degree at Bowling Green State University\, and a Bachelor of Arts in Music from the Chinese University of Hong Kong (CUHK). His principal teachers include Joseph Klein\, Panayiotis Kokoras\, Marilyn Shrude and Wendy Wan-ki Lee. www.alvinleung.com/ \n 
URL:https://icmc2026.ligeti-zentrum.de/event/listening-room-2-3/
LOCATION:Hamburg University of Technology\, Building A (A 0.14)\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:13-05,Listening Room,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260513T110000
DTEND;TZID=Europe/Amsterdam:20260513T173000
DTSTAMP:20260513T234817
CREATED:20260421T185112Z
LAST-MODIFIED:20260511T142721Z
UID:10000181-1778670000-1778693400@icmc2026.ligeti-zentrum.de
SUMMARY:Listening Room 1
DESCRIPTION:Fixed Media | Program Overview\nPerseverance: An Artist Rendering\nMikel Kuehn \nThe Archival of Memory in Skin\nJoan Tan \n#paris\nTaito Fushimi \nAsymmetric Stamina\nAndreas Weixler \nCHAOTIC ITINERANCY\nWonseok Choi \nCorrosion Chamber\nHector Bravo Benard \nDew\nTom Bañados Russell \nFully Automated Luxury Music (selected tracks)\nFelipe Tovar-Henao \nLein\nKim Hedås \nOn the transparency of seeing through\nSean Peuquet \nThe Eternalist Paradox\nJuan Carlos Vasquez \nUnwritten Glow\nWen-Chia Lien \nVox Dei\nTomás Koljatic S. \nWhale Song Stranding\nDavid Nguyen \nWhere am I in the Universe?\nHanae Azuma \nDroplet\nJong Gyun Kim \n  \nAbout the pieces & artists\nMikel Kuehn: Perseverance: An Artist Rendering\nIn late February of 2021\, I was astonished to discover that NASA made several raw recordings of the recently landed Mars 2020 Perseverance Rover available to the public. Inspired by the first ever recorded (atmospheric) sound from another planet\, I began fantasizing about what the sonic environment of Mars might be like. This piece was constructed solely from four recordings capturing the sounds of the Martian wind\, the rover driving\, the rover’s mechanical parts (dust blower and various moving components)\, and the laser shots used to examine the properties of rocks. One additional recording was used: the inflight noise of the heat rejection fluid pump (recorded through the mechanical parts\, since no sound propagates through the vacuum of space). These minimal source sounds were then processed\, spatialized\, and combined/expanded into various suggestive textures. The result is my “artist rendering” of a fantastical narrative of the Rover’s journey through the sonic landscape of Mars. Its title is also a nod to the perseverance within each of us as we learn to navigate through the global pandemic. 
Perseverance: An Artist Rendering opens with an imaginary camera zooming from deep space onto the lonely flight of the spacecraft as it sets up for entry into the Martian atmosphere\, then lands. In the short sequence immediately following\, most of the source sounds that are used to build the piece are exposed in context with the work’s formal narrative. From this moment on\, the journey moves from fairly literal to fictional\, even absurd\, as the rover drives through multiple sonic terrains such as a “machine” sequence\, a “thunderstorm\,” then encounters various “creatures” as it continues on its strange journey and eventual death. \nAbout the artist\nThe music of American composer Mikel Kuehn has been described as having “sensuous phrases… producing an effect of high abstraction turning into decadence\,” by New York Times critic Paul Griffiths. He has received awards from the Barlow Endowment\, the Chicago Symphony\, Composers\, Inc.\, the Copland House\, the Destellos Competition on Electroacoustic Music\, the Alice M. Ditson Fund\, the Flute New Music Consortium\, the Fromm Music Foundation\, the Guggenheim Foundation\, the League of Composers/ISCM\, and the Ohio Arts Council. Kuehn is professor of composition at the Eastman School of Music\, where he directs the Electroacoustic Music Studios @ Eastman (EMuSE). mikelkuehn.com \n  \nJoan Tan: The Archival of Memory in Skin\nI am a conflux of cultures — shaped by environments and circumstances I do not fully understand. A living juxtaposition of beliefs\, a contradiction of behaviours. An oxymoron. I’m learning to hold all of these parts together. The piece is built from sound fragments that remind me of childhood: Hokkien soap operas and theatre shows my grandma watched\, the voices of children at the playground where I once played\, the creaking of old treadle sewing machines\, the static of ageing radios\, the ticking of analogue clocks\, the English news on TV at 8pm\, and so on. 
These unlikely combinations of sounds flit between one another\, at times dissonant\, jarring even\, but always coexisting. When the physical world inevitably fades\, I hope their voices will still remain. \nAbout the artist\nJoan Tan Jing Wen (born 21 April 2000) is a Singaporean composer currently based in Cologne\, Germany. Her recent works place attention\, perception\, and the fallibility of memory at the forefront of her compositions. She is fascinated by how attention constructs and distorts one’s perceptions and shapes one’s entire experience\, both in music and in everyday life. She believes that every sound triggers a sensory response\, engages one’s imagination\, and evokes emotions through associations. Recognisable sound sources are distorted in her works\, leaving behind crafted gestures and faint memories of what they once were. \n  \nTaito Fushimi: #paris\n#paris is a piece developed during a one-month stay in Paris. It uses audio data circulated on social media and associated with specific locations\, treating these recordings as the environmental sounds of those places. The collected audio is processed through AI learning and generation\, and subsequently recomposed to form the final sound composition. On social media platforms\, cities are primarily consumed as visual objects. On platforms such as Instagram\, on-site sound environments are often replaced by trending music or narration\, and are intentionally muted or edited. As a result\, these audio elements begin to function as urban soundscapes formed within media\, distinct from those of the physical city. This work applies this approach to representations of Paris on social media. By presenting Paris as an auditory experience composed of multiple\, overlapping layers mediated through digital platforms\, the work explores the relationship between sound circulating in digital space and the city\, and offers a reconsideration of how contemporary urban environments are perceived. 
\nAbout the artist\nTaito Fushimi. Born in Aichi Prefecture in 2003. He is currently a fourth-year student in the Faculty of Policy Management at Keio University\, where he is a member of the Akira Wakita Lab. \nHis practice focuses on sounds\, traces\, data\, and bodily sensations that are not treated as primary information within urban environments\, but are instead pushed into the background. Working across diverse media including installation\, sound\, materials\, and participatory works\, he recontextualizes elements that exist within the city yet are processed as noise. Through his works\, he seeks to reconsider how the city is perceived. \n  \nAndreas Weixler: Asymmetric Stamina\nThis electroacoustic multichannel composition was created during a Composer-in-Residence stay at the VICC – Visby International Composers Centre in Sweden in 2025\, in Studio Alpha. All sounds were recorded on the island of Gotland. Studio recordings of electric guitar and voice\, processed in real time\, form the fundamental musical framework of the composition. A special source of inspiration was the weekly gathering of automobile enthusiasts every Wednesday at the harbor of Visby: carefully restored vintage cars\, American cruisers\, and newly modified vehicles – all equipped with powerful V8 engines – even a motorcycle. These deep resonant sounds became a central element of the sonic world\, contrasted by the presence of young car posers noisily circling through the night. This urban soundscape stood in striking opposition to the dramatic cries of the seagulls and the creaking of the floating piers in the Baltic Sea harbor. Production tools included Pro Tools with plugins such as GRM SpaceGrain\, Sound Particles Brightness Panner\, R360 surround reverb\, Seventh Heaven 5.1\, Acon Multiband Dynamics\, and Stratus Reverb 7.0\, as well as Max programming for multichannel live processing\, including granular synthesis\, spectral delay\, FFT filtering\, ring modulation\, and FFT freeze reverb. 
Credits: Field recordings: Author2\, Author1\nVoice: Author2\nComposition\, electric guitar & real-time processing (Max): Author1\nThe creation of this work was supported by The Swedish Arts Grants. \nAbout the artist\nAndreas Weixler\, born 1963 in Graz\, Austria\, is a composer of computer music with an emphasis on intermedia real-time processing. He teaches at the mdw Vienna and at InterfaceCulture in Linz\, and serves as associate university professor at the CMS – computer music studio of Anton Bruckner University in Linz\, where he initiated the intermedia concert hall\, the Sonic Lab. He studied contemporary composition at KUG in Graz\, Austria\, earning his diploma under Beat Furrer\, complemented by international projects and residencies. \n  \nWonseok Choi: CHAOTIC ITINERANCY\n‘CHAOTIC ITINERANCY’ is a power electronics piece realizing harsh noise and glitch textures. Simple signals pass through an effects chain aimed at heavy distortion to gain saturated textures. Here\, they lose their original forms and are rebuilt into new ones. The listener perceives the deconstructed sound and its remaining essence simultaneously. As the processing shifts\, the listener is placed right in the middle of the distorted sounds’ itinerancy. Three sections themed ‘Accumulation’\, ‘Mutation’\, and ‘Derivation’ form chaotic textures using different methods. They share a goal of presenting fragmented sensations. Yet\, because the methods differ\, the area where itinerancy is felt and the character of the textures become distinct. Through this process\, I sought to find possibilities in excessively damaged materials. I also intended to sonically map this itinerancy by controlling methodologies and detailed elements. \nAbout the artist\nWonseok Choi (b. 1999) is a composer who pursues music situated at the boundaries of genres and media. 
In the realm of electronic music\, he constructs sounds using signal distortion and degradation as primary materials\, while in the acoustic realm\, he focuses on works that embody post-minimalism and alt-classical styles. His works have been presented by the Korea Electro-Acoustic Music Society (KEAMS)\, and he is currently pursuing a Master’s degree in Electroacoustic Music Composition at Hanyang University. \n  \nHector Bravo Benard: Corrosion Chamber\nThis composition integrates computer-generated sounds with recordings of struck and bowed metal plates. Over time\, these materials are transformed\, recursively processed\, and spatially projected within an immersive environment surrounding the listener. As the piece unfolds\, the sonic textures grow progressively denser and more chaotic\, gradually distorting and ultimately destroying their original source. The title Corrosion Chamber evokes devices used to test the resilience of metals exposed to harsh conditions over time. It also alludes to metaphorical “chambers\,” such as those of government institutions\, where the original intent of laws and policies can be eroded and twisted to serve power at the expense of the public good. It also suggests the decay of rational thought within social-media echo chambers and through the careless use of AI tools. The piece was originally produced in 7th order Ambisonics. \nAbout the artist\nHector Bravo Benard. Originally from Mexico City\, he studied philosophy and music at the University of Victoria (Canada)\, and later at the Xenakis Centre (France)\, the Institute of Sonology and the Royal and Rotterdam Conservatories (Netherlands)\, the National Autonomous University of Mexico\, the University of Washington’s DXARTS Center (USA)\, and the University of Birmingham (UK)\, where he received his Ph.D. 
He composes sound-based music for acoustic instruments\, live electronics\, and fixed media\, with a focus on timbral and spatial elements\, and natural phenomena such as non-linear dynamical systems. Some of his main teachers over the years include Agostino Di Scipio\, Julio Estrada\, Scott Wilson\, Clarence Barlow\, Paul Berg\, Gilius van Bergeijk\, René Uijlenhoet\, Gerard Pape\, Carla Scaletti\, Michael Longton\, Christopher Butterfield\, Andrew Schloss\, and Alex Dunn. His works have been presented internationally at events such as ICMC\, BEAST FEaST\, MA/IN\, SEAMUS\, Gaudeamus\, NYCEMF\, Sonorities Belfast\, Espacios Sonoros\, ACMA\, FIMNME\, Sound/Image London\, and the Kyma International Sound Symposium. He currently lives in the Netherlands and Germany\, working as an independent artist\, researcher\, and music software developer. \n  \nTom Bañados Russell: Dew\nDew is a concept piece built around a simple but flexible process that allows for great musical expression and freedom depending on the situation. It can be set up for a large variety of speaker setups and durations. The concept is based on a haiku by Kobayashi Issa: “This world of dew / is a world of dew\, / and yet\, and yet.” The piece focuses on change through repetition\, impermanence\, and the complex being built around the simple. While the piece could theoretically last forever\, it must eventually end. \nAbout the artist\nTom Bañados Russell is a Chilean composer and electronic music performer. They completed a bachelor’s degree in composition at PUC Chile\, followed by a Master’s degree at the HMTM Hannover in 2026. Their most recent work has focused on duos between an instrumental musician and live electronics. Their music has been performed by a variety of groups at festivals such as Klangbrücken and Impuls Academy\, and by performers such as the Elision Ensemble. 
Among other accolades\, they received the Scholarship for Musical Excellence of the PUC and the Lower Saxony Scholarship for Innovative Composition. \n  \nFelipe Tovar-Henao: Fully Automated Luxury Music (selected tracks)\nTrack selection from the upcoming album “Fully Automated Luxury Music”:\n3. caprice\n6. waltz\n8. nocturne\nFully Automated Luxury Music (F.A.L.M.) is an open-source\, generative music album. The music is written as\, and generated through\, stochastic algorithms—probabilistic\, rule-based processes designed to produce finely structured yet potentially infinite variants of a musical output\, in the form of audio files. The code is end-to-end (E2E)\, meaning it generates and assembles the entire album from scratch—in other words\, it’s a fully reproducible and parameterized work. This album serves primarily as a proof of concept for open-source music—a still recent and under-explored compositional practice (see\, for instance\, Pierre Cusa AKA Pure Code’s Ambient Garden album)—and as a reflection on recent developments in AI automation\, what they mean for the future of artistic practice\, and how human expression can remain central to algorithmic design. The title is a wink and a nod to Aaron Bastani’s popular book “Fully Automated Luxury Communism: A Manifesto”\, which offers a cautiously optimistic\, though increasingly unlikely\, utopian vision of technology’s impacts on society. \nAbout the artist\nFelipe Tovar-Henao is a US-based multimedia artist\, developer\, and researcher whose work explores computer algorithms as expressive tools for human creativity\, cognition\, and pedagogy. His music is often motivated by and rooted in transformative experiences with technology\, philosophy\, and cinema\, and it frequently focuses on exploring human perception\, memory\, and recognition. 
As a composer\, he has been featured at a variety of international festivals and conferences\, including TIME:SPANS\, the International Computer Music Conference\, the Mizzou International Composers Festival\, the Ravinia Festival\, the New York City Electroacoustic Music Festival\, WOCMAT (Taiwan)\, CAMPGround\, the Electroacoustic Barn Dance\, CLICK Fest\, the SCI National Conference\, the SEAMUS National Conference\, the Seoul International Computer Music Festival\, CEMICircles\, IRCAM’s CIEE Summer Contemporary Music Creation + Critique Program and ManiFeste Academy\, Electronic Music Midwest\, and the Midwest Composer Symposium. He has also been the recipient of artistic awards and distinctions\, including the SCI/ASCAP Student Commission Award and the ASCAP Foundation Morton Gould Young Composer Award. He is currently Assistant Professor of AI and Composition at the University of Florida. \n  \nKim Hedås: Lein\nLein is music that stems from the history of both organ music and electroacoustic music. Although these two fields have followed different paths through history\, they share some similarities\, not least through experiments that explore and expand both space and time. By listening backwards\, certain lines of origin can be transferred from the past to the present\, sometimes clear and recognisable\, sometimes distorted and fragmented. Microscopic units of rhythm form polyphonic lines as well as alloys of sound\, dynamically connecting what was previously unconnected. Lein is a multichannel fixed-media piece that has been performed at concerts and festivals in Sweden\, Germany and at New York City Electroacoustic Music Festival 2025. In June 2025\, Lein won two prizes at the international acousmatic composition competition at the Weimarer Frühjahrstage Festival in Germany: Second Prize and the Audience Award. 
\nAbout the artist\nKim Hedås\, born 1965\, is a Swedish composer and researcher\, PhD\, Professor of Composition at the Royal College of Music in Stockholm (Kungliga Musikhögskolan) and a member of the Royal Swedish Academy of Music (Kungl. Musikaliska Akademien). \n  \nSean Peuquet: On the transparency of seeing through\nR. Murray Schafer pointed out in 1977 that our soundscape is increasingly lo-fi\, often the sound of traffic or\, especially at the Atlantic Center for the Arts where this piece was composed\, planes. While quiet is harder to come by\, there are wonderful new sounds too\, like the spray-paint-can clicking of a failing hard disk or the powering on of a belt sander. And yet\, we increasingly fetishize a return to not just natural soundscapes\, but the natural. Once we frame nature as being different (as a thing to return to)\, reality becomes an appearance of itself—obfuscating the naturalism of architecture\, pharmaceutics\, and software engineering under a guise of transparency. Are we ourselves not the nature to which we desire to return? In the “broken” appearance of this composition’s soundscape\, perhaps we can hear ourselves in relation to the natural world as\, echoing William Carlos Williams\, “touched but not held\, more often broken by the contact.” \nAbout the artist\nSean Peuquet is a composer and educator. He presents his work regularly at national and international venues for contemporary art and music such as ICMC (Limerick\, Daegu\, Shanghai\, Utrecht\, Ljubljana\, Belfast)\, SMC (Cyprus)\, NYCEMF\, TIES (Toronto)\, KEAMS (Seoul)\, Sines and Squares (Manchester\, UK)\, SEAMUS\, SCI\, EMM\, VU Symposium\, and more. In 2022\, Sean’s piece “Plane of Slight Elevation” (2021) was awarded Best Music: Americas by the ICMA. He has received numerous commissions for concert music\, installations\, and artist workshops at venues including Communikey (CMKY)\, The Ellie Caulkins Opera House\, and Museum of Contemporary Art (MCA) Denver. 
In 2020\, Meow Wolf commissioned Sean to compose an immersive and generative music and sound installation as part of their permanent Denver exhibition space\, Convergence Station\, which opened to the public in 2021. Sean has been artist-in-residence at the Atlantic Center for the Arts in New Smyrna\, FL\, and ART 352 in Fort Collins\, CO. Sean is Dean of Art + Design and an Associate Professor at Rocky Mountain College of Art + Design in Denver\, CO. Prior to becoming Dean\, he served as Chair of the Music Production department at RMCAD for 5 years. Between 2015 and 2020\, Sean was the Program Director and Lead Music Instructor for the Madelife Creative Accelerator program in Boulder\, CO. \n  \nJuan Carlos Vasquez: The Eternalist Paradox\n“The Eternalist Paradox” is an 8-channel acousmatic piece recorded with a chromatic button accordion and live electronics. It explores the paradoxical realm of eternalism\, where past\, present\, and future coexist. Through an intricate interplay\, a Max application applied processes to recordings sourced from diverse eras of creation\, intricately weaving them into a singular texture. This repurposed musical journey challenges conventional notions of time and invites the audience to contemplate the profound interconnections within the ever-flowing river of existence. \nAbout the artist\nDr. Juan Carlos Vasquez (www.jcvasquez.com) boasts a remarkable trajectory as an award-winning composer\, video game researcher\, and academic. His creations\, ranging from spatial audio works to immersive interactive experiences and game art\, have resonated across continents\, being featured in over 30 countries spanning the Americas\, Europe\, Asia\, and Australia. Dr. Vasquez is currently an Assistant Professor in Computation and Design at Duke Kunshan University. \n  \nWen-Chia Lien: Unwritten Glow\nUnwritten Glow is an acousmatic piece that illustrates how memories return in elusive and shifting ways. 
The “glow” evokes the lingering fragments that surface within us when we are remembering\, an inner brightness that is gentle\, persistent\, and never fully graspable. Memory changes each time it resurfaces; it may become blurred\, clearer\, softened\, or quietly altered. Sounds return in new shapes\, much like moments that reappear unexpectedly and never quite as they once were. This piece is not about a story\, but a space where subtle memories drift in and out\, inviting listeners to follow their own past and find their own version of the “glow” in the unfolding sonic world. This piece views memory as a living\, shifting presence rather than a fixed archive of experience. \nAbout the artist\nWen-Chia Lien is a Taiwanese composer and sound artist whose creative practice spans instrumental and electroacoustic composition\, film scoring\, and experimental theatre. Her works often engage with social issues\, historical events\, and cultural inquiry\, seeking to integrate music and technology as a medium for dialogue and reflection between sound\, space\, and audience. Wen-Chia is currently pursuing a Master of Music at the University of Toronto. She earned her Bachelor of Music in Music Theory and Composition from the University of Taipei in 2024. In recent years\, she has delved into multimedia creation and electronic music. In 2025\, she participated in ilSUONO Contemporary Music Week. Her orchestral work\, Scars\, received Third Prize in the 2024 Composition Competition of the National Taiwan Symphony Orchestra (NTSO) and was premiered by the NTSO. Her electroacoustic piece In Our Stomach was selected for performance at the 2023 C-LAB Sound Festival: Diversonics\, and she was a selected visiting artist for the C-LAB × IRCAM Communication Program in the summer of 2024. In 2023\, she was the music designer for Skin Box\, a theatre and dance production presented at the Taipei Fringe Festival. 
Her artistic work has been recognised with several awards\, including the 2024 Taiwan Ministry of Education Study Abroad Scholarship and the 2025 University of Toronto France–Canada Experience Award. \n  \nTomás Koljatic S.: Vox Dei\nVox Dei is a multichannel acousmatic musical composition inspired by the sounds of the popular Feast of the Virgin of Guadalupe of Ayquina. This traditional Catholic celebration takes place annually on the eve of September 8th\, bringing together thousands of pilgrims in the heart of the Atacama Desert (Antofagasta Region)\, Chile. Based on field recordings I made in 2023 and 2024\, the piece explores the fervor\, devotion\, and unique soundscape of this festival\, where music\, dance\, and faith intertwine in a collective experience of celebration and sacrifice. The sound material for the work was captured at different moments of the feast: songs and prayers of the pilgrims\, the brass and percussion bands that accompany the religious dances (playing disparate\, overlapping music at full volume in close proximity)\, the voices of those arriving after long desert pilgrimages\, and the climactic moment of the celebration when thousands of devotees sing “Happy Birthday” to the Virgin Mary. This exceptionally rich sonic material is not subjected to extensive electroacoustic processing. Instead\, it is deployed to create an immersive experience that transports the listener to the heart of this festival and invites us to reflect on the power of sound as a vehicle for spirituality. \nAbout the artist\nTomás Koljatic S. is a Chilean composer. After studying music and mathematics in his country\, he continued his higher education in composition at the Paris Conservatory (CNSMDP)\, where he studied with professors Frédéric Durieux (composition)\, Claude Ledoux (analysis)\, Denis Cohen (orchestration)\, Luis Naón\, Tom Mays\, and Karim Haddad (new technologies). 
Simultaneously\, he completed advanced training in music technology at IRCAM (Cursus 1). Currently\, he works as a professor at UC | Chile Faculty of Arts\, teaching courses in music analysis and history. \n  \nDavid Nguyen: Whale Song Stranding\nInflections as sound process to sound quality\nEmanating otherness of the\nSound quality to sound process from the reflective \nResulting in an immersive rhizome-like sound world of the\nomnipresent of the dream like and the very literal \nAs different zones are successive\, simultaneous\, above\,\nbelow\, before\, and after\, to neither rise nor sink but only\nfloat \nA longing as the friction\, disputes of the literal and\ndream-like \nAnd \nA persistence of a pulse\, heavy\, through the literal as a\nconstant movement and the abstract ingenuous stillness\, a\nsound world of the discursive and the narrative \nChiastic process and quality is undermined as the\nreflections and inflections recur in rounded proportions.\nThe immersive and form is only tangible through this\ninsistence that is perceived as a dream occurring in\nreal-time \nFiguratively\nWhale Song suggests\, quite literally\, uncertainty that is \nStuck between the discursive and the narrative\,\nThe moving streams/waves and the pure tones surrounding\nwithin\,\nStranding \nAbout the artist\nDavid Quang-Minh Nguyen is an audio engineer\, sound designer/re-recording mixer\, and composer of concert music. His current interests lie in composing acousmatic works that explore multi-channel loudspeaker expansion\, various types of sound spatialization\, and immersive audio. \n  \nHanae Azuma: Where am I in the Universe?\nAn acousmatic piece\, “Where Am I in the Universe?” for 8 channels\, is remixed from the 16-channel version (original\, 2017). It was inspired by the poem “Two Billion Light-Years of Solitude” by the Japanese poet Shuntaro Tanikawa (1931 – 2024). 
Most of the harmonies in this piece are adapted from standard chords on the sho\, a Japanese free-reed musical instrument. \nHere is the abstract/my interpretation of the poetry:\nTwo billion light-years must show how enormous the universe is. The Earth is just part of the universe. Human beings on this small planet are tiny things in the universe. We might feel so lonely if we think about ourselves in the huge universe. But the very last sentence “I suddenly sneezed” makes me think about our real life and the comparison between the reality we face and the universe we could think/imagine. (Reference: TANIKAWA\, Shuntaro\, translated by William I. Elliot and Kazuo Kawamura\, (2008)\, “Two-Billion Light-Years of Solitude” SHUEISHA Inc.) \nThis poem also reminds me that I sometimes feel overwhelmed that I am just one of the people on Earth. I sometimes feel fear that I might be the only person in the world/the universe because nobody can see into my mind. On the other hand\, I know I am living surrounded by people. I’m trying to show this comparison and the solitude that people might have in this piece. \nAbout the artist\nHanae Azuma is a composer from Tokyo\, Japan\, who completed both her BM and MM at Tokyo University of the Arts\, Department of Musical Creativity and the Environment. During her studies in Japan\, she mainly concentrated on the relationship between music and other visual/performing arts such as dance and film\, and has been collaborating with contemporary dancers on various projects as a composer. She also completed an MM in music technology at New York University in 2014. Her works have been presented at music festivals and concerts in the United States\, Japan\, Korea\, and Taiwan\, among others. She is currently an academic fellow at Acoustic Lab\, Tokyo University of the Arts. \n  \nJong Gyun Kim: Droplet\nThis work is an artificial ambient soundscape centered on the sonic textures of water droplets. 
By integrating actual recordings of falling water with textures reconstructed through heterogeneous materials such as PET bottles\, the piece juxtaposes natural and synthetic audio elements. It aims to capture the organically evolving rhythms and textures of liquids\, while establishing a three-dimensional sense of perspective within the soundscape through the manipulation of auditory distance and variations in textural density. \nAbout the artist\nJong Gyun Kim is a South Korean composer specializing in electronic music. He earned his undergraduate degree from Senzoku Gakuen College of Music under Takeyoshi Mori\, transitioning from a classical music background. His artistic portfolio includes presentations at CCMC in Japan\, ICMC and NYCEMF. Currently\, he is continuing his research and composition as a Master’s student under Richard Daniel Dudas at the Graduate School of Music\, Hanyang University. \n 
URL:https://icmc2026.ligeti-zentrum.de/event/listening-room-1-3/
LOCATION:Hamburg University of Technology\, Building A (A 0.18)\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:13-05,Listening Room,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260513T110000
DTEND;TZID=Europe/Amsterdam:20260513T180000
DTSTAMP:20260513T234817
CREATED:20260421T094018Z
LAST-MODIFIED:20260421T095138Z
UID:10000140-1778670000-1778695200@icmc2026.ligeti-zentrum.de
SUMMARY:Installation | Miles Friday: "Breathwork"
DESCRIPTION:Breathwork is a twelve-channel sound installation where loudspeakers become breathing bodies. Each loudspeaker is encased in an inflatable bag that swells and contracts in response to low-frequency drones\, forming a slow\, ever-shifting breath-like choreography. \nWithin this field of motion\, clouds of layered just intonation partials drift in and out of perception\, while low frequencies create a base of acoustic beating and Shepard tone-esque glissandos. By transforming the loudspeaker into a pneumatic pump\, Breathwork reimagines it as a tool for visual synthesis\, where vibrations in the air animate inflatables as kinetic sculptures—synthetic lungs whose movements create polyrhythms that can be both seen and heard. \nAll audio is generated live via SuperCollider\, running on two Bela Mini Multichannel Expanders. \nAbout the artist\nMiles Jefferson Friday is an artist who focuses on sound as his primary medium. Building new instruments\, composing music\, designing sound sculptures\, and creating immersive installations\, his practice invites us to reconsider how we hear and listen. Miles is currently an Assistant Professor of Digital Music at the University of Texas at San Antonio\, and holds a DMA and MFA from Cornell University and an MA from the Eastman School of Music. \n 
URL:https://icmc2026.ligeti-zentrum.de/event/installation-miles-friday-breathwork-3/
LOCATION:Hamburg University of Technology\, Building A (Foyer)\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:13-05,Installation
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260513T110000
DTEND;TZID=Europe/Amsterdam:20260513T180000
DTSTAMP:20260513T234817
CREATED:20260421T095604Z
LAST-MODIFIED:20260508T114805Z
UID:10000137-1778670000-1778695200@icmc2026.ligeti-zentrum.de
SUMMARY:Installation | Alessandro Anatrini & Alessandro Aresta: "Faulty Oracle"
DESCRIPTION:Faulty Oracle is an adaptive audiovisual installation that conjures a gloriously unreliable divinatory machine. Visitors pose questions through body language: gestures\, movements\, postures which the system interprets\, misreads\, and willfully transforms. In return\, the oracle delivers cryptic animated answers\, flickering between epiphany\, nonsense\, and hallucination. Voices stretch\, fracture\, and echo over visuals that shimmer with unstable symbols\, offering responses that feel both prophetic and utterly broken.\nThe dialogue is a masterclass in miscommunication: questions are misinterpreted\, wrong ones are amplified\, and answers rarely align with intent. The oracle becomes a mirror of ambiguity\, where meaning emerges from error\, chance\, and interpretation rather than clarity.\nBy shifting interaction from language to the body\, Faulty Oracle gleefully dismantles any expectation of precision in human-machine exchange. It invites participants into a space of playful fallibility\, reframing prophecy as a dance of uncertainty and imagination. \nAbout the artists\nAlessandro Anatrini (1983) is a composer\, new media artist\, and developer with a background in musicology\, composition\, and electronic music. He completed an M.A. in multimedia composition at HfMT Hamburg and a PhD in artistic research focused on machine learning in adaptive multimedia environments. His work has been presented by Ensemble Intercontemporain\, Klangforum Wien\, and Symphoniker Hamburg\, and at festivals including Manifeste\, HCMF\, Impuls\, and Blurred Edges. He is frequently invited to speak at conferences such as SMC\, TENOR\, and AIMC\, and collaborates with institutions such as UdK Berlin and the Digital Stage Foundation. He has lectured on machine learning topics at HfMT since 2018\, and since 2024 he has been Professor of Multimedia at the Conservatorio of Piacenza (Italy). \nAlessandro Aresta \n 
URL:https://icmc2026.ligeti-zentrum.de/event/installation-alessandro-anatrini-faulty-oracle-3/
LOCATION:Hamburg University of Technology\, Building A\, Videospace I (A 1.27)\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:13-05,Installation
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260513T110000
DTEND;TZID=Europe/Amsterdam:20260513T180000
DTSTAMP:20260513T234817
CREATED:20260421T100042Z
LAST-MODIFIED:20260423T171630Z
UID:10000155-1778670000-1778695200@icmc2026.ligeti-zentrum.de
SUMMARY:Installation | Dahye Seo: "Unscored"
DESCRIPTION:A camera installed on a balcony captures the live sky\, converting it into generative sound in real time. The trajectories of birds crossing the frame are translated into piano tones\, forming unpredictable melodies. The time spent watching the sky—waiting for the next sound—becomes part of the work. \nAbout the artist\nDahye Seo (b. 1985\, South Korea) is a multimedia artist based in Berlin. She explores the movement of living organisms and environmental phenomena through sound\, data\, and interactive installations\, creating immersive experiences that bridge perception and natural patterns. \n 
URL:https://icmc2026.ligeti-zentrum.de/event/installation-dahye-seo-unscoredt-3/
LOCATION:Hamburg University of Technology\, Building A\, Videospace II (A 2.34)\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:13-05,Installation
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260513T110000
DTEND;TZID=Europe/Amsterdam:20260513T180000
DTSTAMP:20260513T234817
CREATED:20260421T191005Z
LAST-MODIFIED:20260511T124854Z
UID:10000192-1778670000-1778695200@icmc2026.ligeti-zentrum.de
SUMMARY:Installation | Pasquale Savignano: "SURROUNDINGS"
DESCRIPTION:SURROUNDINGS is a site-specific sound installation that transforms walking\, listening\, and spatial memory into a continuously unfolding sonic environment. The work is built from GPS-tracked field recordings captured through attentive movement in and around the exhibition site. These recordings are not presented as documents of place\, but as living material that is reactivated\, displaced\, and rewoven within the space itself.\nSix loudspeakers are distributed across the site\, defining a navigable field rather than a fixed listening position. Sound moves between them following the original trajectories of the recorded walks\, scaled and reoriented to fit the architecture or landscape of the installation. The visitor’s experience emerges from this superposition of paths: multiple sonic traces coexist\, intersect\, expand\, and dissolve\, producing a dynamic impression of the surroundings rather than a literal representation.\nThe installation focuses primarily on keynote sounds – the persistent acoustic textures that shape everyday environments – rather than on spectacular or foregrounded events. Through subtle processing derived from convolution\, filtering\, and granular techniques\, these sounds are stretched and smoothed\, allowing their timbral essence to surface while remaining closely tied to their original context. At times\, lightly processed documentary sounds emerge\, blurring the boundary between the audible present and the remembered past.\nListening unfolds through movement. Visitors are free to walk\, pause\, or circle the space\, allowing their perception to shift between the installation\, the actual soundscape\, and their own internal listening. The work does not impose a narrative or a fixed duration; instead\, it offers a continuous temporal flow that mirrors the rhythms of walking and environmental change.\nVisually\, the installation remains restrained and functional. 
Loudspeakers and cabling are integrated into the space in a manner that suggests infrastructure rather than spectacle\, reinforcing the idea of sound as an environmental layer rather than an object. The result is a non-disruptive intervention that operates near the threshold of audibility\, encouraging a heightened awareness of place.\nSURROUNDINGS proposes listening as a form of situated knowledge: an embodied practice through which space is not only perceived\, but actively composed. \nAbout the artist\nPasquale Savignano (1/5/1994) is a sound artist and composer working with field recordings and digital sound processing to explore the boundaries of physical and sonic space in various fields: electroacoustic music\, improvisation\, video art\, sound and multimedia installations. His research moves mainly between the flux of relationships and interferences in the soundscape. His works have been presented internationally in galleries\, public spaces\, and festivals\, such as Angelica International Music Festival\, Pulsar Festival\, Echoes Around Me\, Festival di Nuova Consonanza\, Tempo Reale Festival\, Xing\, Archivio Aperto\, ArtCity\, Hyperlocal Festival\, Experimance Festival\, ICMC\, ToListenTo\, and others. He has collaborated and performed with artists such as Alvin Curran\, Francesco Giomi\, Elio Martusciello\, Maria Hassabi\, Elvin Brandhi\, Jacopo Benassi\, Marcello Maloberti\, Daniela Cattivelli\, Alessandro Bosetti\, Muna Mussie\, Francesco Cavaliere\, and many others. He is currently pursuing a PhD in Sound&Music Computing and Cultural Heritage at Conservatorio di Musica G. Verdi di Torino and collaborates with Xing (Bologna – IT) and Marcello Maloberti Studio (Milano – IT). \n 
URL:https://icmc2026.ligeti-zentrum.de/event/installation-pasquale-savignano-surroundings-3/
LOCATION:Hamburg University of Technology\, Outdoor Area I\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:13-05,Installation
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260513T110000
DTEND;TZID=Europe/Amsterdam:20260513T180000
DTSTAMP:20260513T234817
CREATED:20260421T191718Z
LAST-MODIFIED:20260427T091512Z
UID:10000195-1778670000-1778695200@icmc2026.ligeti-zentrum.de
SUMMARY:Installation | Bill Parod & Teresa Parod: "The Elephants of Trianon"
DESCRIPTION:The Elephants of Trianon is an augmented-reality audiovisual installation that extends a series of public murals into an interactive spatial sound environment. The original work consists of ten adjacent murals painted on garage doors in a public alley in Evanston\, Illinois\, USA. These form part of a larger international body of public work by the artist\, Teresa Parod. For the International Computer Music Conference\, the project is presented as a free-standing installation at TU Hamburg-Harburg using large construction-fence banners which approach the full size of the garage-door murals. \nUsing a custom mobile app\, visitors’ devices recognize each mural and anchor a corresponding three-dimensional audiovisual scene in space. As visitors move through the installation and activate additional murals\, their scenes accumulate and blend\, creating a continuously evolving environment rather than a sequence of isolated works. The installation therefore functions as a spatial composition shaped by listener movement\, attention\, and duration of engagement. \nThe soundscape combines field recordings made in Bali\, New Orleans\, and Chicago with instrumental layers and voices in ten languages. Animated three-dimensional forms—birds\, bats\, dogs\, elephants\, rabbits\, and celestial figures—appear among the murals\, along with subtle video textures and custom shaders that bring painted elements into motion. Some virtual elements are not confined to a single mural but move throughout the installation space\, responding to the physical layout and dimensions of the exhibition environment. \nThe project suggests a scalable model for mobile\, spatially responsive sound installations in galleries and public spaces. 
The software framework and mobile application used in The Elephants of Trianon have been developed through prior public installations and gallery presentations and are designed to function across a range of exhibition formats\, from outdoor murals to indoor projection and free-standing display structures. The ICMC installation demonstrates how augmented reality can be used not only as a visual medium\, but as a platform for spatial audio composition and listener-driven musical form. \nAbout the artists\nBill Parod (b. 1954\, Chicago USA) is a composer\, improviser (violin)\, and software developer who works on interactive spatial music\, audio poetry\, image-reactive augmented reality\, and living music mobile apps. His work has appeared in Chicago at Elastic Arts\, Experimental Sound Studio\, and the Jay Pritzker Pavilion; Burning Man\, Nevada\, USA; New York University\, NYC; and IRCAM in Paris\, France. \nTeresa Parod (b. 1957\, Alton IL\, USA) paints vibrant\, luminous oil paintings and murals\, celebrating life through dichotomies such as light and shadow\, warm and cool\, and complementary colors. Her landscapes invoke mythological destinations\, inviting the viewer to journey there.\nShe has created over one hundred works of public art in the United States\, Cuba\, Bali\, Nepal\, and Istanbul. In Cuba\, she was honored to work with mosaicist José Fuster\, whose work inspired her creation of art in unexpected and underused spaces.\nShe lives in Evanston\, IL\, with her husband\, Bill Parod. Together they have collaborated on several exhibitions\, performances\, and multichannel visual and musical works.\nShe also teaches art history at Oakton College\, completes an annual century bike ride\, and studies and performs classical Indonesian dance. \n 
URL:https://icmc2026.ligeti-zentrum.de/event/installation-bill-parod-teresa-parod-the-elephants-of-trianon-3/
LOCATION:Hamburg University of Technology\, Outdoor Area II\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:13-05,Installation
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260513T110000
DTEND;TZID=Europe/Amsterdam:20260513T180000
DTSTAMP:20260513T234817
CREATED:20260421T192603Z
LAST-MODIFIED:20260504T143617Z
UID:10000243-1778670000-1778695200@icmc2026.ligeti-zentrum.de
SUMMARY:Installation | Tina Tallon: "yammer"
DESCRIPTION:yammer is an interactive audio installation and performance environment that questions the ambiguities and limitations inherent in attempts to describe and represent music and other complex human expressive sonic events using commonplace ontologies in audio classification systems and large language models. Live audio produced by visitors to the installation undergoes audio classification using YAMNet\, and an immersive soundscape is created by combining the live audio input with playback and processing of members of the AudioSet dataset belonging to the same putative audio event classes\, often to humorous and nonsensical ends. Ultimately\, yammer entreats those engaging with the installation to question not only the datasets used in audio classification\, but also the datasets underlying many other models with which they may engage on a daily basis. Additionally\, it questions the artistic utility of text-to-sound and text-to-music models\, and the role of embodied cognition in musical artificial intelligence. \nAbout the artist\nTina Tallon. Winner of the 2022 Rome Prize in Composition\, Tina Tallon is a creative technologist and composer exploring AI’s impact on art and society. Her music and installations have been presented by leading ensembles and presenters worldwide\, from the LA Philharmonic to the Venice Biennale and NeurIPS. She has earned honors from institutions such as Harvard\, MIT\, the American Academy in Rome\, the Barlow Endowment\, and ASCAP. Tallon is Assistant Professor of AI and Composition at Ohio State University. \n 
URL:https://icmc2026.ligeti-zentrum.de/event/installation-tina-tallon-yammer/
LOCATION:Hamburg University of Technology\, Building N (Foyer)\, Eißendorfer Straße 40\, Hamburg\, 21073\, Germany
CATEGORIES:13-05,Installation
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260513T113000
DTEND;TZID=Europe/Amsterdam:20260513T143000
DTSTAMP:20260513T234817
CREATED:20260421T120942Z
LAST-MODIFIED:20260423T174800Z
UID:10000165-1778671800-1778682600@icmc2026.ligeti-zentrum.de
SUMMARY:Workshop | Moritz Wesp\, Eric Haupt and Victor Gelling: oscheat
DESCRIPTION:oscheat is a work-in-progress multi-user interface based on OSC. Its purpose is to simplify and formalise shared\, real-time control of musical parameters across an ensemble. Instead of separating instruments by performer\, oscheat functions as a collective parameter space in which all participants can change the sound generation\, spatialization\, tonal systems\, or rhythmic structures of each other’s instruments.\nFor the workshop\, a predefined set of addressable instruments has been prepared for each instance of oscheat. They are structured into three functional sections reflecting core musical building blocks: synthesizers for melodic and harmonic material\, sequencers for rhythmic organization\, and samplers for vocal and sound-based material. Additional functionality includes real-time MIDI recording and looping\, pitch mapping with support for alternative tunings\, spatialization\, and global macro controls for large-scale structural manipulation.\nFollowing a short system introduction\, participants engage in practical structured improvisation exercises exploring the capabilities of oscheat. In these scenarios\, they explore how shared control affects thematic development\, synchronicity\, polyphony\, and formal coherence in a networked music performance. The workshop examines how shared control reshapes authorship\, musical responsibility\, and aesthetic decision-making within an ensemble\, and which new music-making strategies emerge from such a system. \n  \nRequirements\nNo prior knowledge is required; experience with improvised music and sound synthesis is helpful but not necessary.\nAttendees can bring their own laptop to install a demo version of oscheat for local testing.\nThe demo version is available here. \n  \nWorkshop registration\nPlease register via Pretix in order to participate in the workshop. There are no additional costs.  
\n  \nAbout the workshop facilitators\nMoritz Wesp lives in Cologne (GER) and plays trombone\, virtual trombone\, and other instruments that he designs\, programs\, and builds. As an improviser he works with various ensembles such as Mariá Portugal Erosao\, Matthias Muche’s Bonecrusher\, and Simon Rummel. He also composes music and is part of the Audio-VR project Sona.\nMore about Moritz here. \nEric Haupt is a guitarist and composer working in experimental music and punk. He completed his Bachelor of Music at the HfMT Cologne in 2018. He is a founding member of the ensembles Now My Life Is Sweet Like Cinnamon and Lawn Chair\, as well as the initiator of the experimental game-show performance Sport1. His music has been presented at festivals throughout Europe\, and his collaborations include internationally renowned producers Olaf O.P.A.L. and Chris Coady. His punk compositions have been broadcast on international radio stations such as BBC Radio 6 Music. \nVictor Gelling is an improviser and composer who uses stringed instruments including but not limited to upright bass\, tenor banjo\, and pedal-steel and non-pedal-steel guitars\, in addition to pedals\, synthesizers\, and barely working self-coded computer programs to create sounds. Their work spans genres from jazz to noise to electric cowboy songs to complex music\, which culminates in their large ensemble works with Trash & Post-Chaotic Music\, their alt-country/post-punk alias Slowklahoma\, solo works\, and their playing in the Jorik Bergman Trio.\nMore about Victor here. \nTogether they form the ensemble Now My Life Is Sweet Like Cinnamon and work together on the interdisciplinary game-show project Sport1. \n 
URL:https://icmc2026.ligeti-zentrum.de/event/workshop-moritz-wesp-eric-haupt-victor-gelling-oscheat/
LOCATION:ligeti center\, Production Lab (10th floor)\, Veritaskai 1\, Hamburg\, 21079\, Germany
CATEGORIES:13-05,Workshop
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260513T120000
DTEND;TZID=Europe/Amsterdam:20260513T190000
DTSTAMP:20260513T234817
CREATED:20260415T121027Z
LAST-MODIFIED:20260513T090328Z
UID:10000119-1778673600-1778698800@icmc2026.ligeti-zentrum.de
SUMMARY:[Off-ICMC] Interactive Installation | Anouk Kellner: "Airchoir inter/reactive"
DESCRIPTION:Photo: Ethan Cannaert\n  \nImportant notice: Due to the current weather forecast\, the installation has been moved to Hamburg University of Technology\, Building C. \n  \nThe interactive sound installation Airchoir consists of eight inflatable figures that breathe in and out like living lungs. Their voices are heard through the organ pipes to which they are connected. Like a choir\, they come together to form a multi-layered soundscape that constantly changes with the movement of the visitors.  \nNo registration required \n  \nThe Off-ICMC\nMusic is what brings us together\, even when everything else pulls us apart.\nMusic everywhere—it is part of our everyday lives. And yet\, we’re hearing it performed live on analog instruments less and less. Instead\, it often reaches us through speakers or headphones\, as files\, from the cloud. What does music mean to you? What does it sound like today? Where does it begin—and where does it end?\nThe ligeti center invites you to listen more closely and discover new sounds—to explore\, experiment\, and play. This year\, ICMC HAMBURG 2026 revives an old tradition: the Off-ICMC\, a free and accompanying festival curated for the general public and anyone curious about computer music. \nAll Off-ICMC events are free of charge.  \n\n  \n \n 
URL:https://icmc2026.ligeti-zentrum.de/event/off-icmc-interactive-installation-anouk-kellner-airchoir-inter-reactive/
LOCATION:Hamburg University of Technology\, Building C\, Am Schwarzenberg-Campus 4\, Hamburg\, 21073\, Germany
CATEGORIES:13-05,Installation,Off-ICMC
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260513T133000
DTEND;TZID=Europe/Amsterdam:20260513T153000
DTSTAMP:20260513T234817
CREATED:20260421T161440Z
LAST-MODIFIED:20260512T080354Z
UID:10000085-1778679000-1778686200@icmc2026.ligeti-zentrum.de
SUMMARY:Lunch Concert 3A
DESCRIPTION:Concert 3A offers a fascinating stage for the Steinway Spirio—the world’s most advanced self-playing piano system. In this session\, the piano is taken far beyond its traditional role: it acts as an autonomous performer\, a controller\, and even an interface for human brain activity. \nThis Lunch Concert is open to the public. Those without a conference pass can purchase a ticket here. \n  \nProgram Overview\n“Empathic Machines” for One Pianist’s Mind and Steinway & Sons SPIRIO\nMasatsune Yoshio and Atsushi Mori\nPiano: Atsushi Mori \nMulholland Revisited \nHeloise Garry \nUsher\nJeffrey T.V. \nSpring Code \nJian Feng\nHarp: Armand Brunet (Ensemble 404) \nVoici que la saison décline\nMikako Mizuno\nClarinet: Anyu Lyu (Ensemble 404) \nElevator Pitch\nJuan Vassallo\nCello: Antonio Lo Curto (Ensemble 404) \nChant\nYoonjae Choi\nCello: Antonio Lo Curto (Ensemble 404) \n  \nAbout the pieces & artists\nMasatsune Yoshio: Empathic Machines\nWhat lies beyond the pianist’s technical skill – music in which body and mind are fully integrated.\nIn this work\, the pianist’s brainwaves are detected using the FocusCalm™ device together with the Good Brain app\, which enables the measurements to be sent via UDP. The data is then processed in Max 9 and Somax2 to generate performance information\, which is transmitted to and played by the Steinway & Sons SPIRIO self‑playing piano.\nThrough this body‑extended form of expression\, a kind of piano music emerges that cannot be reached by human hands alone\, offering a speculative answer to the question posed at the beginning. \nAbout the artists\nMasatsune Yoshio (1972- ) was born in Kobe. He is a composer and Media Master No. 75. He specializes in composing fine art pieces using computers\, grounded in the creation of and research into algorithmic composition\, sound synthesis\, live electronics\, and expression with information technologies. 
His electroacoustic pieces have been performed both in and outside Japan. He is an associate professor at Showa University of Music. \nPiano: Atsushi Mori\nAtsushi Mori is an Associate Professor at the Junior College Division of Showa University of Music. He completed his studies in the Department of Composition and the Graduate School at Showa University of Music\, studying under Kazuhisa Akita.\nIn 1987\, he received the Silver Prize in the A1 Category of the PTNA Piano Competition\, and in 1993\, he performed with the Warsaw Philharmonic as part of the Yamaha JOC overseas concert tour. He composed Fanfare for the “Festival of Student Orchestras” in 2002.\nIn addition to his work as a composer\, Mori is active as a keyboardist\, providing live support\, arrangements\, and recordings. He also specializes in music production using DAWs such as Ableton Live and Logic\, and is dedicated to the analysis of popular music and the development of solfège teaching materials. His research focuses on the integration of digital technology and music education. \n  \nHeloise Garry: Mulholland Revisited\nMulholland Revisited is an interactive composition for Yamaha Disklavier / MIDI keyboard and ChucK\, integrating real-time interaction between acoustic and electronic elements. By leveraging MIDI input\, the piece enables the piano to function as both a performer and a controller\, triggering ChucK-generated sound textures in response to live performance. Inspired by a pivotal phone conversation in Mulholland Drive (Lynch\, 2001)\, the work explores the blurred boundary between dream and reality through a dynamic interplay between piano-generated material and algorithmic sound synthesis. The electronic elements emerge as an extension of the piano’s acoustic voice\, reinforcing the psychological tension that defines the narrative arc. 
An homage to David Lynch\, the piece mirrors his fascination with fractured identities and surreal atmospheres\, immersing the listener in a sonic landscape that expands the piano’s traditional interface into new musical and narrative dimensions. \nAbout the artist\nHéloïse Garry is an artist working at the intersection of filmmaking\, theater\, and performance\, exploring the aesthetics of totality across art forms. Her compositions reflect a deep interest in cross-cultural and linguistic experimentation and sonic storytelling. Her work has been presented at ICMC\, NIME\, NYCEMF\, ICAD\, Audio Mostly\, the Audio Engineering Society\, and the Internet Archive. As a Yenching Scholar at Peking University\, she researched the politics of independent Chinese cinema and the role of music in the films of Jia Zhangke. An artist-in-residence at Gray Area and the Mozilla Foundation in San Francisco\, she has collaborated with IRCAM and the Columbia Computer Music Center\, and explored the sonification of the universe under the mentorship of physicist Brian Greene. In September 2024\, she joined Stanford’s Center for Computer Research in Music and Acoustics (CCRMA)\, where she studies with Mark Applebaum\, Paul DeMarinis\, and Ge Wang. Héloïse holds bachelor’s degrees in Filmmaking\, Economics\, and Philosophy from Columbia University\, Sciences Po\, and Sorbonne University. \n  \nJeffrey T.V.: Usher\nUsher is a new soundtrack for the 1928 silent film The Fall of the House of Usher\, co-directed by J.S. Watson and Melville Webber and based on the 1839 short story by Edgar Allan Poe. The primary goal of this electronic score was both to enhance the dramatic content of the film and to emphasize the surrealist imagery that pervades it. Realized with modular synthesizers\, the result is a piece that sits between film score and audio-visual composition. \nAbout the artist\nJeffrey T.V. is a New England-based electroacoustic composer and classically trained vocalist. 
His compositional output centers on combining generative sound with improvised response through combinations of electronic and acoustical instruments\, with a special interest in modular synthesizers. His music has been featured at Electronic Music Midwest\, SEAMUS\, NYCEMF\, ICMC\, Salisbury University\, Bucknell University\, the University of Kentucky Art Museums\, and other venues across the United States.  \n  \nJian Feng: Spring Code\nSpring Code is a real-time interactive audiovisual work that revives the konghou (Chinese harp)—once lost for centuries—through a custom responsive interface. Treating classical poetic aesthetics as a generative source\, it reimagines Wang Wei’s line “Clear spring flows over stones” not as an illustration\, but as executable logic: a living data stream shaped by performance.\nThe konghou functions simultaneously as an instrument and an expressive interface. Its acoustic output—plucks\, harmonics\, string vibrations—is captured via a microphone\, while performer gestures are tracked through laser distance\, pressure\, and sliding touch sensors. All inputs are fed into an integrated system built on Max/MSP\, Arduino\, and TouchDesigner\, driving real-time granular synthesis\, adaptive spatialization (VBAP)\, dynamic visuals\, and responsive light from addressable LED strips.\nThe resulting soundscape evokes the fluidity of mountain streams; its visual layer maps audio features to flowing particles\, creating a multimodal environment where cultural memory is continuously re-encoded. “Spring” embodies nature’s flow; “Code” pulses as digital lifeblood. Rather than preserving tradition as an artifact\, Spring Code compiles it anew in every performance—where hand gestures conduct light and data\, and konghou tones shape space and sound.\nBetween the echo of a mountain spring and the pulse of an algorithm\, the work constructs an inexhaustible river of resonance across time. 
In Spring Code\, the spring never dries—the code never stops flowing. \nAbout the artists\nJian Feng is a composer and Associate Professor at the Wuhan Conservatory of Music\, where she serves as Director of the Center for Computer Music Composition Research. She was a visiting scholar at the Center for New Music and Audio Technologies (CNMAT) at the University of California\, Berkeley\, supported by the China Scholarship Council.\nHer creative and research practice centers on interactive electronic music and the application of artificial intelligence in musical contexts. Her works have been presented at leading international forums and festivals\, including the International Computer Music Conference (ICMC)\, the International Society for Contemporary Music (ISCM) World New Music Days\, Frontier+ Festival (UK)\, MUSICACOUSTICA-Beijing\, MUSICACOUSTICA-Hangzhou\, and the Shanghai International Electroacoustic Week.\nFeng holds key roles in China’s interdisciplinary arts–technology community: Deputy Secretary-General of the Electronic Music Society of the Chinese Musicians Association\, Committee Member of the Art & Artificial Intelligence Specialized Committee of the Chinese Association for Artificial Intelligence (CAAI)\, and Executive Committee Member of the Computational Arts Division of the China Computer Federation (CCF). \nHarp: Armand Brunet (Ensemble 404) \n  \nMikako Mizuno: Voici que la saison décline\, for clarinet and electronics\nThe electronic part of this piece comprises sound files containing grains of different pitches and sizes\, all of which are derived from clarinet performance. These grains are placed in the sound field by the Spat program and diffused through a cube-shaped multi-channel system. The submitted version is rendered in four channels. The solo clarinet is required to produce special tone colours using multiphonic techniques\, breath tones\, harmonic colour trills\, etc. 
The subtle timbre of the instrument connects the minute changes in visual colours and the passing of time\, which were depicted in a poem by Victor Hugo.\nThe title of this piece comes from one of Hugo’s poems. At the end of summer\, the season seamlessly transitions to autumn. The bright blue sky turns grey\, the birds shiver and the grass feels cold. I tried to create sounds that reflect these slight changes and delicate nuances.\nThe clarinet’s multiphonic sound is enhanced by harmonised breath tones. The harmonisation\, realized by special signal processing\, involves not only layered pitches\, but also the filtering of noisy long breaths. In the performance\, especially in the latter half of the piece\, Max for Live is necessary to ensure effective interplay between the clarinet player and the electronic part\, which must follow the notated ensemble writing. The instrumentalist can play the piece from the usual musical notation\, because notated guides in the electronic part indicate the tempo and the nuance of each phrase for the musician\, as is often the case in the latter half of this piece. The instrumentalist is sometimes required to catch the unpitched\, noisy electronic sounds during a fermata or a rest. \nAbout the artists\nMikako Mizuno. Composer/Musicologist. Mainly active in Japan\, her music has been heard in many places including France\, Germany\, Austria\, Hungary\, Italy\, Republic of Moldova\, and international festivals and conferences such as ISEA\, ISCM\, EMS\, Musicacoustica\, WOCMAT\, NIME\, ICMC\, NYCEMF. Her pieces range from orchestra\, chamber music\, vocal ensemble\, traditional Japanese instruments (sho\, koto\, shakuhachi\, no-flute\, biwa etc.) to networked remote performance through IPv6. 
\nClarinet: Anyu Lyu (Ensemble 404) \n  \nJuan Vassallo: Elevator Pitch\nPhilosopher Hartmut Rosa suggests that our society is characterized by acceleration due to rapid technological advancements\, leading to constant time shortages. As we adapt to quick updates via smartphones and social media\, communication becomes faster and more fragmented\, favoring brief\, direct forms like the elevator pitch. An elevator pitch is a short summary speech meant to convey ideas or products within the duration of an elevator ride. It aims to be clear and persuasive to a wide audience.\nIn politics\, new communication techniques exploit these brief\, impactful messages\, often oversimplifying complex issues and lacking depth. Such strategies have been criticized for manipulating public opinion and stirring emotions\, leading to biased and divisive rhetoric that can aid authoritarian or intolerant movements.\nThe piece places an artistic focus on these contemporary methods of communication\, such as the elevator pitch\, and the potential for manipulation of sound-bite content by political figures. The piece is thus a sardonic analogy to a political speech\, portrayed here as empty of substance: a construct derived from carefully crafted algorithmic rhetoric and the sonification of spoken phrases. Additionally\, nonsensical political speeches synthesized through commercial text-to-speech systems are used as sound material for the electronics. \nAbout the artists\nJuan Sebastián Vassallo is an Argentinian composer and live-electronics performer based in Bergen\, Norway. He holds a Ph.D. in Artistic Research from the University of Bergen. His artistic research explores human–computer interaction in art creation\, at the intersection of computer-assisted composition\, artificial intelligence\, algorithmic poetry\, generative visuals\, and live electronics. 
\nHis music has been performed internationally by ensembles and soloists including Projecto RED (Argentina)\, Quasar Saxophone Quartet (Canada)\, Hinge Quartet (USA)\, Vocal Ensemble Tabula Rasa (Norway)\, Edvard Grieg Kor (Norway)\, JÓR Saxophone Quartet (Scandinavia)\, Zone Experimental Basel (Switzerland)\, and Lucas Fels (Germany)\, among others. \nHis work has received multiple awards\, including first prize at the AI-based composition contest at the IEEE Conference on Big Data (Washington\, D.C.) for Oscillations (iii). Other distinctions include selections and awards from the National Endowment for the Arts (Argentina)\, ISCM/Chengdu River Sun Prize (China)\, and several contemporary art competitions. \nHe has received international grants from UNESCO-Aschberg and the Organization of Ibero-American States (IBERMÚSICAS)\, supporting artistic residencies in the United States. His practice is strongly collaborative and interdisciplinary\, and alongside his experimental work\, he maintains an active career as a tango pianist and arranger. \nCello: Antonio Lo Curto (Ensemble 404) \n  \nYoonjae Choi: Chant\nChant is a live electronic work that transforms the cello through vowel-based formant processing\, creating a hybrid vocal–instrumental language reminiscent of primordial voice. As part of a broader research project on real-time live electronics formant synthesis\, the piece explores how electronic modulation can expand instrumental identity and shape emotive\, multi-voiced textures. \nAbout the artists\nYoonjae Choi is a South Korean composer whose work explores the musical potential of extended tones and spectral qualities drawn from both traditional instruments and non-instrumental materials. His compositional practice focuses on integrating acoustic sound with live electronics\, soundscapes\, and computer-based technologies. He frequently collaborates across media arts and experimental music disciplines. 
\nHe studied with Richard Dudas at Hanyang University and with John Gibson and Chi Wang at Indiana University. He is currently pursuing a doctoral degree in composition at the University of North Texas\, studying with Panayiotis Kokoras. His music and research have been featured at international conferences and festivals. \nCello: Antonio Lo Curto (Ensemble 404) \n  \nVolunteers\nTechnical Director / Main Sound\nSteffen Lohrey\nLeon Sudahl \nSound Assistants\nJakob Seyberth\nTim Christiansen \nStage / Light / Video\nEvelin Lindberg\nDong Zhou\nJames Tsz-Him Cheung \nProduction\nAigerim Seilova\nHuixin Xue\nHaonan Guo\nXinyi Yang\nNiko Yin\nJiwon Seo\nMenghuan Feng \n 
URL:https://icmc2026.ligeti-zentrum.de/event/lunch-concert-3a/
LOCATION:Hamburg University of Technology\, Building I\, Audimax 2\, Denickestraße 22\, Hamburg\, 21073\, Germany
CATEGORIES:13-05,Concert,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260513T133000
DTEND;TZID=Europe/Amsterdam:20260513T153000
DTSTAMP:20260513T234817
CREATED:20260423T155552Z
LAST-MODIFIED:20260507T183840Z
UID:10000203-1778679000-1778686200@icmc2026.ligeti-zentrum.de
SUMMARY:INTREPID: Lunch Concert #003 at ICMC HAMBURG 2026
DESCRIPTION:Spirio & The Soul of the Algorithms \nThis lunch concert offers a fascinating stage for the Steinway Spirio – the world’s most advanced self-playing system for grand pianos. In this session\, the piano is taken far beyond its traditional role: it acts as an autonomous performer\, as a controller\, and even as an interface for human brain activity.  \n  \nProgram \nElevator Pitch\nJuan Vassallo\nCello: Antonio Lo Curto (Ensemble 404) \nChant\nYoonjae Choi\nCello: Antonio Lo Curto (Ensemble 404) \nMulholland Revisited \nHeloise Garry \n“Empathic Machines” for One Pianist’s Mind and Disklavier™\nMasatsune Yoshio and Atsushi Mori\nPiano: Atsushi Mori \nVoici que la saison décline\nMikako Mizuno\nClarinet: Anyu Lyu (Ensemble 404) \nLa Nuit Bleue\nZhixin Xu and Yunze Mu \n  \nTickets\nTickets (regular €24 / reduced €15) via Pretix \n  \nICMC HAMBURG 2026\nThe International Computer Music Conference (ICMC) is the world’s foremost platform for computer-based music. Since 1975\, it has brought together artists\, researchers\, and developers from around the world. It is dedicated to presenting and discussing the latest developments in music technology\, artificial intelligence\, interactive systems\, and immersive audio formats\, as well as their significance for society. ICMC HAMBURG 2026 is devoted to the motto “Innovation\, Translation\, Participation” and is organized by the HfMT Hamburg\, TUHH\, HAW Hamburg\, and the UKE\, in close cooperation with the ligeti zentrum.  \nThe INTREPID Festival accompanies ICMC HAMBURG 2026 as a publicly accessible music festival. It was created to bring pioneering artistic projects closer to a broad audience in Hamburg-Harburg. 
URL:https://icmc2026.ligeti-zentrum.de/event/intrepid-mittagskonzert-003-zur-icmc-hamburg-2026/
LOCATION:Hamburg University of Technology\, Building I\, Audimax 2\, Denickestraße 22\, Hamburg\, 21073\, Germany
CATEGORIES:INTREPID
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260513T153000
DTEND;TZID=Europe/Amsterdam:20260513T180000
DTSTAMP:20260513T234817
CREATED:20260421T114359Z
LAST-MODIFIED:20260423T174349Z
UID:10000162-1778686200-1778695200@icmc2026.ligeti-zentrum.de
SUMMARY:Workshop | Dan Wilcox: Introduction to Zirkonium3: ZKM's Sound Spatialization Environment
DESCRIPTION:Zirkonium is a free sound spatialization environment from the ZKM | Hertzlab\, formerly the ZKM | Institute for Music and Acoustics\, which wraps various spatialization algorithms and abstracted speaker layouts in a path-sequencing interface designed with composers in mind. The project was developed for the Sound Dome (Klangdom)\, a 43.4-speaker half dome in the ZKM Kubus studio\, but it is applicable to almost any physical setup. Zirkonium has had three major versions since 2006\, and this workshop introduces the current version\, Zirkonium3\, which utilizes libpd and Pure Data patches for its sound engine. The background and basic concepts will be introduced\, with the goal that participants can stream live audio from Pd/Max/Ableton/etc. projects into Zirkonium3 with live control. Requirement: macOS 10.13+ \n  \nRequirements\nParticipants should come with an Apple laptop running macOS 10.13+ and headphones.\nParticipants should have a basic computer music background and\, ideally\, example audio of their own work to try. An understanding of Max / Pure Data is helpful for trying the example OSC external control patches. \n  \nWorkshop registration\nPlease register via Pretix in order to participate in the workshop. There are no additional costs.  \n  \nAbout the workshop facilitator\nDan Wilcox is an artist\, engineer\, musician\, and performer who combines live musical performance techniques with experimental electronics and software for the exploration of new expression\, often through themes of science fiction\, space travel\, cyborgification\, and far futurism. His father was an aerospace engineer\, he grew up in the Rocket City\, and he has performed in Europe and around the US with his one-man-band cyborg performance project\, robotcowboy.\nDan currently lives in Karlsruhe\, Germany\, and is a part-time artist & researcher at the ZKM | Hertzlab. He has been the developer for the Zirkonium project since 2017.\nMore about Dan here. \n 
URL:https://icmc2026.ligeti-zentrum.de/event/workshop-dan-wilcox-introduction-zirkonium3-zkm-spatialization/
LOCATION:ligeti center\, Production Lab (10th floor)\, Veritaskai 1\, Hamburg\, 21079\, Germany
CATEGORIES:13-05,Workshop
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260513T160000
DTEND;TZID=Europe/Amsterdam:20260513T180000
DTSTAMP:20260513T234817
CREATED:20260421T130933Z
LAST-MODIFIED:20260507T093524Z
UID:10000171-1778688000-1778695200@icmc2026.ligeti-zentrum.de
SUMMARY:Special Panel: Clarence Barlow
DESCRIPTION:Panelists\nFabian Czolbe \nBernd Härpfer \nJohn Chowning \nAnne Wellmer \nModeration: Georg Hajdu \n  \nAbout the panelists & their perspectives\nFabian Czolbe\, Julian Rohrhuber and Bernd Härpfer: “Amplifying Participation. The digital Barlow Archive (dBA) as an Approach to the Recording of a Digital Computer Music Legacy”\nThe archiving of computer music presents specific challenges that arise from the process-oriented\, software-based\, and technologically contingent nature of digital compositional practices. Digital artifacts such as source code\, algorithmically generated data\, and custom compositional tools encode not only musical outcomes but also procedural knowledge that is often implicit and difficult to formalize. This paper presents the Digital Barlow Archive (dBA)\, which may be taken as a case study for addressing these challenges through a translational and participatory approach to archiving computer music.\nThe born-digital legacy of Clarence Barlow (1945–2023) is open-ended and comprises heterogeneous materials. To do justice to this openness and diversity\, the dBA adheres to existing archival standards while extending them to account for computer-music-specific objects and workflows. To this end\, an object/event framework is employed to translate non-linear and iterative compositional processes into structured metadata representations that remain interoperable with institutional and international archival infrastructures. At the same time\, the framework acknowledges the limits of formalization and preserves interpretative openness.\nExtending the old idea of computing as an amplification of the intellect\, this paper argues that such archival methods do not merely conserve material passively\, but must translate and amplify the possibility of participation: they actively shape access\, interpretation\, and creative reuse of digital musical materials. 
Archiving should be conducted as an epistemic practice that mediates between technological history\, compositional knowledge\, and the contemporary computer music community.  \n  \nBernd Härpfer: “From pioneer to role model – a tribute to Clarence Barlow’s legacy to computer music and the ICMC” (invited)\nFor over five decades\, Clarence Barlow (1945–2023) made significant contributions to contemporary music and\, in particular\, to computer music. He was recognised worldwide as a composer\, interdisciplinary researcher\, author\, software developer and professor. Another defining characteristic was his talent for bringing people together\, networking the scene and demonstrating great organisational stamina. A key milestone in this regard was the organisation and hosting of the 14th ICMC – the first time the event was held in Germany – in Cologne in 1988.  \n   \nAnne Wellmer: “On the Poetry of Indigestibility ξ”\n\nClarence Barlow was teaching at Sonology in the mid-nineties. A microtonal organ almost completely filled the room (BEA7) where he was teaching his course On Musiquantics. Clarence was a storyteller. He would come up with hilarious and inspiring solutions for problems that seemingly could not be overcome… \nAbout the panelist\nanne wellmer | nonlinear is a composer and performer based in The Hague. During her vocal studies at the Conservatory in Amsterdam in the early 1990s\, she discovered electronic music through workshops by Trevor Wishart and Joel Ryan and was introduced to the analog studio\, where at the time no one was working except her and two composition students. She decided to leave Schubert behind and moved on to study Sonology at the Royal Conservatory in The Hague. This is where she met Clarence Barlow. For a while STEIM became her second home. Shortly before September 11 she moved to Connecticut to study composition with Alvin Lucier. 
Back in the Netherlands\, she worked on the disclosure of Dick Raaijmakers’ archive\, and updated the Sonology database so it could be included in the EMDoku (the International Documentation of Electroacoustic Music). Her work includes music theater pieces\, sound walks\, radio art\, fixed media and live performance. Since 2017 she has been teaching courses on experimental music within Art and Media at the Berlin University of the Arts.\nanne wellmer is a member of the society for nontrivial pursuits in Berlin and a founding member of the nomadic collective new emergences. Recent collaborations include “the annes” with anne la berge\, “the octopussies” with Kristin Norderval and “triple A” with Alberto de Campo and Ariane Jeßulat.\nMore about anne wellmer | nonlinear here: www.nonlinear.demon.nl \n  \nRaphael Radna: “Tombeau de Barleau: An Interactive Ludic–Algorithmic Composition in Honor of Clarence Barlow”\nTombeau de Barleau is an interactive\, generative\, and audiovisual composition dedicated to the pioneering computer-music composer Clarence Barlow (1945–2023)\, a teacher of the author. In this work\, two performers play a Pong-style video game in which collisions between the ball and a portrait of Barlow trigger notes on a MIDI-controlled piano. The performers affect this process only indirectly\, as the gameplay itself governs musical parameters including harmony\, density\, rhythm\, dynamics\, and tempo. As a result\, the work balances novelty and determinism: while its musical surface varies across performances\, its underlying algorithmic structure provides a stable form. \nTombeau de Barleau adopts several elements of Barlow’s compositional style\, including rigorously formalized algorithmic processes\, unconventional uses of piano automata\, translations between visual and musical domains\, and playful or outlandish premises. 
It also applies some of his theoretical contributions\, namely his methods for quantifying the consonance of harmonic intervals (harmonicity) and the priority of metrical pulses (indispensability). This paper describes the design and implementation of Tombeau de Barleau and reflects on its function as an homage to one of algorithmic music’s most inventive and influential figures.  \n  \nJohn Chowning: “Algorithmic compositions at Bell Telephone Laboratories in the 1960s” (invited)\nIn the domain of computer music\, the first algorithmic compositions were created at Bell Telephone Laboratories (BTL) in the 1960s. Max Mathews and colleagues\, encouraged and joined by John R. Pierce\, Director of Research\, experimented with Mathews and Joan Miller’s Music III and IV programs\, with notable results. While Mathews and Pierce did not claim to be composers\, they had musical instincts\, and the ideas in their algorithmic compositions were brilliant\, though often cartoonish-sounding. \nIn my talk\, I will present and explain a selection of works by composers including Mathews\, Pierce\, James Tenney\, and Jean-Claude Risset. \nAbout the panelist\nJohn Chowning was born in Salem\, New Jersey in 1934\, spending his school years in Wilmington\, Delaware. Following military service and four years at Wittenberg University in Ohio\, he studied composition in Paris with Nadia Boulanger. He received the doctorate in composition (DMA) from Stanford University in 1966\, where he studied with Leland Smith. \nIn 1964\, with the help of Max Mathews of Bell Telephone Laboratories and David Poole of Stanford University\, he set up a computer music program using the computer system of Stanford’s Artificial Intelligence Laboratory. In the same year\, he began the research that led to the first generalized surround-sound localization algorithm. In trying to comprehend the distance cue\, Chowning discovered the frequency modulation synthesis (FM) algorithm in 1967. 
This breakthrough in the synthesis of timbres allowed a very simple yet elegant way of creating and controlling time-varying spectra. Inspired by the perceptual research of Jean-Claude Risset\, he worked toward turning this discovery into a system of musical importance\, using it extensively in his compositions. In 1973 Stanford University licensed the FM synthesis patent to Yamaha in Japan\, leading to the most successful synthesis engine in the history of electronic musical instruments. An interview about FM synthesis (Jun 17\, 2015\, Barcelona): https://rwm.macba.cat/en/sonia/sonia-212-john-chowning \nHe taught computer-sound synthesis and composition at Stanford University’s Department of Music. In 1974\, with John Grey\, James (Andy) Moorer\, Loren Rush and Leland Smith\, he founded the Center for Computer Research in Music and Acoustics (CCRMA)\, which remains one of the leading centers for computer music and related research. Although he retired in 1996\, he has remained in contact with CCRMA activities. In 2019\, he initiated\, with an international team\, a long-term project to recreate\, by computer modeling\, the acoustics of the Chauvet Cave in France as they were when the exquisite 36\,000-32\,000-year-old wall paintings were created. \nChowning was elected to the American Academy of Arts and Sciences in 1988 and awarded the Honorary Doctor of Music by Wittenberg University in 1990. The French Ministre de la Culture awarded him the Diplôme d’Officier dans l’Ordre des Arts et Lettres in 1995. He was awarded the Doctorat Honoris Causa by the Université de la Méditerranée in 2002\, by Queen’s University in 2010\, and by the Hamburg University of Music and Drama in 2016\, and he was Laureate of the Giga-Hertz Award in 2013. \n 
URL:https://icmc2026.ligeti-zentrum.de/event/special-panel-clarence-barlow/
LOCATION:Hamburg University of Technology\, Building H\, Ditze Hörsaal (H 0.16)\, Am Schwarzenberg-Campus 5\, Hamburg\, 21073\, Germany
CATEGORIES:13-05,Panel
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260513T160000
DTEND;TZID=Europe/Amsterdam:20260513T180000
DTSTAMP:20260513T234817
CREATED:20260421T172024Z
LAST-MODIFIED:20260511T160839Z
UID:10000086-1778688000-1778695200@icmc2026.ligeti-zentrum.de
SUMMARY:Piece & Paper Session
DESCRIPTION:Music Program Overview\nHe（龢）\nXiangbin Lin \nSpores: A Physarum-Inspired Instrument for Agent-Based Ecological Interaction\nKyle Smith \nRemnants \nNikos Baskozos \n  \nSession Chair: Rodrigo Cadiz\nPaper Abstracts\nXiangbin Lin\, Du Huang\, Qi Qian and Maosong Sun: “Beyond Musique Concrète: Perceptual Morphing via Audio Latent Embeddings Manipulation”\nThis paper proposes “Neural Musique Concrète”\, a compositional paradigm that reinterprets the Quantized Audio Latent Embedding produced by Neural Audio Codecs (NACs) as a malleable digital “Sound Object” (L’Objet Sonore) amenable to direct artistic intervention. While end-to-end generative AI has dramatically accelerated music production\, it confines creators to prompt-level interaction\, effectively reversing the long-standing trend toward increasingly fine-grained control over acoustic micro-structures. To restore this creative agency\, we introduce Latent Manipulation Functions (LMFs)—weighted summation with independent time-varying coefficients—that operate directly on the continuous latent space\, enabling “Perceptual Morphing”: the deep semantic and acoustic fusion of heterogeneous sound materials beyond waveform-domain superposition. The framework is validated through the electronic composition “He” (Harmony)\, whose three compositional phases (spectral fusion\, stochastic granular scattering\, and order-chaos coupling) demonstrate that tensor-based latent editing supports structurally complex musical forms; a complementary survey of five NACs further establishes the requirements for codec applicability. Our results indicate that direct manipulation of NAC latent embeddings effectively bridges the high fidelity of modern AI systems with the fine-grained compositional control central to avant-garde electronic music. \n  \nKyle Smith and Alexandria Smith: “Spores: A Physarum-Inspired Instrument for Agent-Based Ecological Interaction”\nThe ecosystem is the interface. 
Spores (2025) is a touchscreen instrument where sound is activated based on simulated slime mold colonies finding food. The performer becomes part of the ecosystem and a caretaker alongside the slime mold. They read health\, growth rate\, stress\, and territorial spread through observable organism behavior\, the way one reads an animal’s body language. The performer distributes resources rather than issuing commands\, acting as caretaker to a system that resists mastery. In this paper\, I discuss the theoretical underpinnings of engaging with ecological\, biomimetic\, and multi-agent systems in composition and instrument design\, the technical implementation\, and composing for and performing with living environments where the performer becomes the caretaker of an environment instead of its “commander”. I discuss incorporating biophilic design practices into working with agent-based models (ABM) and artificial intelligence in music\, modes of interaction\, and my biologically inspired process of collaborating with artificial intelligence. A seven-minute event-based improvisation demonstrates this approach across six sections exploring hunger\, competition\, and abundance. \n  \nNikos Baskozos and Thanos Polymeneas-Liontiris: “Data-driven algorithmic composition with large sample libraries: a modular system for the dynamic formation and control of spatialised sound groups”\nThis paper presents a Max/MSP abstraction library for data-driven algorithmic composition. It utilises large sample collections for the formation and temporal control of multiple subselections of the corpus. Each subselection is hosted in a separate container object and may be formed with its own querying rules\, using various querying modes. These subselections are typically predefined and can be dynamically recalled and modified. Each subselection container is connected to its own pitch\, timing\, effects and playback modules. 
In addition to sequential playback of samples\, which is more suitable for melodic and rhythmic explorations\, the presented system offers the possibility of vertical playback. The vertical playback module provides temporal control of individual voices and is more suitable for harmonies and spectral techniques. Granular synthesis is possible with both playback modules. The system has been used for the composition of a few fixed-media works. The piece ‘Remnants’\, described in this paper\, partially explores the system’s available features. \n  \nAbout the pieces & artists\nXiangbin Lin: He（龢）\nThe electronic music composition “He” (龢) explores deep fusion mechanisms for heterogeneous sound materials through the direct manipulation of Audio Latent Embeddings (ALE). “He” (龢) is an ancient Chinese character. The conceptual framework derives from the etymology of the title character\, where “Yue” (龠) symbolizes artificially constructed musical structures\, while “He” (禾) signifies nature and vitality. Situated at the intersection of artificial operation and sonic origins\, this work implements a novel method of sound fusion through the computational manipulation of the latent space. \nAbout the artist\nXiangbin Lin is a master’s student in Electronic Music Composition at the Central Conservatory of Music. He received his bachelor’s degree in Electronic Music Production from the same institution\, where he ranked first in his cohort and was recommended for direct admission to the master’s program. He studies under Professor Qi Qian\, Associate Director of the Department of Music Artificial Intelligence and Music Information Technology at the Central Conservatory of Music. \nHe has received numerous honors\, including the National Scholarship for Undergraduate Students\, the Outstanding Graduate of Beijing award\, the Beijing Advanced Class Collective Award\, and the First-Class Academic Scholarship for Graduate Students. 
He has also been a multiple-time recipient of the Outstanding Student Scholarship and the Merit Student title of the Central Conservatory of Music. \n  \nKyle Smith: Spores: A Physarum-Inspired Instrument for Agent-Based Ecological Interaction\nSpores (2025) explores the relationship between non-deterministic emergence\, artificial co-agency\, and musical expression by reimagining the controller as an ecological system. The performer tends virtual slime mold colonies on a touchscreen running a real-time Physarum polycephalum agent-based model\, seeding new colonies\, distributing nutrients\, and disturbing the environment. Health\, stress\, and territorial spread are continuously extracted and mapped to synthesis parameters via OSC and MPE MIDI. No food\, no colonies. No colonies\, no sound. The constraint becomes visible before it becomes audible. Six sections explore hunger\, competition\, and abundance across a seven-minute event-based improvisation\, each isolating a different ecological condition. The ecosystem itself becomes the interface. \nAbout the artist\nKyle Smith (b. 2000) is a designer\, engineer\, and multimodal artist working at the intersection of music technology\, biomimetic design\, and immersive systems. His research focuses on sensor-driven soundscapes\, generative instruments\, and ecological approaches to musical interaction. He is a second-year master’s student in the Creative Music Technology Lab (CMTL) at Georgia Institute of Technology and holds a B.S. in Creative Technology & Design from the University of Colorado Boulder. \n  \nNikos Baskozos: Remnants\nThis fixed-media piece explores rhythmic patterns and creative sample browsing using a large sound collection. 
As detailed in the accompanying ICMC 2026 paper\, “Data-driven algorithmic composition with large sample libraries: a modular system for the dynamic formation and control of spatialised sound groups”\, the piece is realised with a custom system which utilises Music Information Retrieval for offline analysis and various querying modes for real-time navigation. For this piece\, a corpus of about 20\,000 one-shot samples is used\, consisting of commercial libraries\, personal recordings\, and random sounds stored in random folders. Four subselections of sounds from the larger library are used and they are dynamically modified during the piece. Each group focuses on different frequency bands and structural roles in the music. Frame drum sounds comprise the low end\, while mostly metallic sounds are present in the mid-range and high frequencies. From a reflective perspective\, images of scrapyards emerge\, both through the sound palette and as an analogy for the retrieval and recombination of materials. The sounds\, as found in the library\, are unsorted and decontextualised\, with folk instruments coexisting alongside office foley sounds. Selection based on audio characteristics allows samples to be found and placed in a musical context. Samples are triggered continuously at 130 BPM\, and rhythmic variations are generated through constrained random selection from the contents of each group. ‘Browsing solos’ integrated into the rhythm are heard frequently. These are created through continuous descriptor querying\, allowing smooth transitions in sample selection. \nAbout the artist\nNikos Baskozos is from Athens\, Greece. He holds a diploma in architecture (U. Patras) and a master’s degree in music creation for new media (NKUA). In recent years\, his main focus has been computer music\, particularly working with large sound collections in Max/MSP. He recently completed an internship at IRCAM-STMS Lab\, focusing on corpus-based synthesis and spatialisation. \n 
URL:https://icmc2026.ligeti-zentrum.de/event/piece-paper-session-hamburg/
LOCATION:Hamburg University of Technology\, Building H\, Audimax 1\, Am Schwarzenberg-Campus 5\, Hamburg\, 21073\, Germany
CATEGORIES:13-05,Piece & Paper,Session
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260513T160000
DTEND;TZID=Europe/Amsterdam:20260513T210000
DTSTAMP:20260513T234817
CREATED:20260421T093204Z
LAST-MODIFIED:20260511T102710Z
UID:10000244-1778688000-1778706000@icmc2026.ligeti-zentrum.de
SUMMARY:[Off-ICMC] Installation | Adriano C. Monteiro & Rafaela B. Pires: "DE/RE:GENERATION"
DESCRIPTION:De/Re:Generation stems from a speculative question: would cicadas sense acoustic information during the up to 17 years they live underground\, before emerging from the soil for a brief adult phase marked by intense acoustic display? From this perspective\, the installation approaches sound not only as an auditory phenomenon\, but as something sensed through the body\, making vibration and tactile perception central to the experience.\nAt the core of the work are rounded\, shell-like sculptures molded from biodegradable cassava-starch bioplastics. These forms visually echo cicada nymphs and exuviae: fragile\, hollow exoskeletons that signal absence\, transformation\, and continuation. Like the remnants left after metamorphosis that nourish other species\, the installation’s materials participate in an ongoing process of regeneration: they deform over time\, respond to humidity and dryness\, and become alternately more rigid or more flexible\, like a living skin in dialogue with the environment. Integrated as touch interfaces\, the bioplastic sculptures function as tactile sensing surfaces that mediate the interaction with the sound environment formed by vibrating surfaces and low-frequency sound fields that allude to the cicada’s aboveground and underground sonic worlds\, blurring boundaries between tactile and auditory modes of perception\, organic material and inorganic technological systems. \nAbout the artists\nAdriano Monteiro is a music composer and researcher. His work focuses on the convergence of art\, science\, and technology in creative processes\, performance\, and the analysis of music. He is the author of electroacoustic and intermedia works in different media and formats\, such as acousmatic music\, live electronics\, audiovisual performances and installations\, and network and telematic music\, and the author or coauthor of several articles concerning creative processes in music and musical analysis. 
Adriano Monteiro is an associate professor of Music Composition at the School of Music and Scenic Arts of the Federal University of Goiás (EMAC/UFG). He studied music composition at the University of Campinas (UNICAMP) and holds a PhD in music from the same institution. \nRafaela Blanch Pires is a designer and professor in the Scenic Arts department at the Federal University of Goiás (Brazil). Her background is in fashion design\, with an MA in “Fashion and Textiles” and a PhD in “Design and Architecture” (São Paulo University). Between 2015 and 2016 she was a visiting doctoral student at the “Wearable Senses Lab” at the Technical University of Eindhoven (the Netherlands). She experiments with bio-materials\, digital fabrication\, special-effects make-up\, costume design\, and electronics. \n 
URL:https://icmc2026.ligeti-zentrum.de/event/installation-adriano-c-monteiro-rafaela-b-pires-de-regeneration-3/
LOCATION:Stellwerk Hamburg (Lounge)\, Hannoversche Str. 85\, Hamburg\, 21079\, Germany
CATEGORIES:13-05,Installation
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260513T173000
DTEND;TZID=Europe/Amsterdam:20260513T191500
DTSTAMP:20260513T234817
CREATED:20260415T121612Z
LAST-MODIFIED:20260421T200357Z
UID:10000120-1778693400-1778699700@icmc2026.ligeti-zentrum.de
SUMMARY:[Off-ICMC] Installation & Performance | Andrea Mancianti & Tom De Cock: "Autophagy III"
DESCRIPTION:Photo: Andrea Mancianti\n  \nAutophagy III is a participatory installation that visitors can walk through and interact with: the interplay of small percussion instruments\, sixteen suspended sound sources\, and an interactive lighting system creates an immersive soundscape.  \nNo registration required \n  \nThe Off-ICMC\nMusic is what brings us together\, even when everything else pulls us apart.\nMusic everywhere—it is part of our everyday lives. And yet\, we’re hearing it performed live on analog instruments less and less. Instead\, it often reaches us through speakers or headphones\, as files\, from the cloud. What does music mean to you? What does it sound like today? Where does it begin—and where does it end?\nThe ligeti center invites you to listen more closely and discover new sounds—to explore\, experiment\, and play. This year\, ICMC HAMBURG 2026 revives an old tradition: the Off-ICMC\, a free and accompanying festival curated for the general public and anyone curious about computer music. \nAll Off-ICMC events are free of charge.  \n\n  \n \n  \n  \n 
URL:https://icmc2026.ligeti-zentrum.de/event/off-icmc-installation-performance-andrea-mancianti-tom-de-cock-autophagy-iii/
LOCATION:Stellwerk Hamburg\, Hannoversche Straße 85\, Hamburg\, 21079\, Germany
CATEGORIES:13-05,Installation,Off-ICMC,Performance
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260513T183000
DTEND;TZID=Europe/Amsterdam:20260513T210000
DTSTAMP:20260513T234817
CREATED:20260421T195653Z
LAST-MODIFIED:20260422T082616Z
UID:10000087-1778697000-1778706000@icmc2026.ligeti-zentrum.de
SUMMARY:Banquet
DESCRIPTION:Photo: Richard Stoehr\n  \nOn Wednesday\, May 13\, 2026\, the ICMC HAMBURG 2026 Banquet will take place at the exceptional Speicher am Kaufhauskanal – one of Harburg’s most atmospheric historic venues. This beautifully restored 19th-century half-timbered building\, originally built in 1827\, blends architectural charm with state-of-the-art event and culinary infrastructure\, creating a truly memorable setting. \nGuests can look forward to an elegant evening in a unique riverside location in Hamburg-Harburg\, where historic character meets contemporary comfort. Following the banquet\, a club concert will round off the night – open to all conference participants and perfect for continuing the conversations and connections in a more relaxed\, musical atmosphere. \nPlease note that availability is limited to 100 banquet tickets. \nBanquet tickets: 85 € \nRegistration for the ICMC HAMBURG 2026 Banquet via Converia \nPhoto: Jasmin Marla Dichant
URL:https://icmc2026.ligeti-zentrum.de/event/banquet/
LOCATION:Speicher am Kaufhauskanal\, Blohmstraße 22\, Hamburg\, 21079\, Germany
CATEGORIES:13-05,Special Event
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260513T193000
DTEND;TZID=Europe/Amsterdam:20260513T210000
DTSTAMP:20260513T234817
CREATED:20260415T121938Z
LAST-MODIFIED:20260421T201129Z
UID:10000121-1778700600-1778706000@icmc2026.ligeti-zentrum.de
SUMMARY:[Off-ICMC] Concert | Florentin Ginot: "Disturbance"
DESCRIPTION:Photo: Florentin Ginot\n  \n“Disturbance” is an audiovisual solo performance that blends elements of concert\, video art\, and theater. With his double bass and analog synthesizers\, Florentin Ginot invites the audience on a live nocturnal journey. Past and present collide with ghostly glitches and pulsating electronic rhythms.  \nRegistration required here \n  \nThe Off-ICMC\nMusic is what brings us together\, even when everything else pulls us apart.\nMusic everywhere—it is part of our everyday lives. And yet\, we’re hearing it performed live on analog instruments less and less. Instead\, it often reaches us through speakers or headphones\, as files\, from the cloud. What does music mean to you? What does it sound like today? Where does it begin—and where does it end?\nThe ligeti center invites you to listen more closely and discover new sounds—to explore\, experiment\, and play. This year\, ICMC HAMBURG 2026 revives an old tradition: the Off-ICMC\, a free and accompanying festival curated for the general public and anyone curious about computer music. \nAll Off-ICMC events are free of charge.  \n\n  \n \n 
URL:https://icmc2026.ligeti-zentrum.de/event/off-icmc-concert-florentin-ginot-disturbance/
LOCATION:Stellwerk Hamburg\, Hannoversche Straße 85\, Hamburg\, 21079\, Germany
CATEGORIES:13-05,Concert,Music,Off-ICMC
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260513T213000
DTEND;TZID=Europe/Amsterdam:20260513T233000
DTSTAMP:20260513T234817
CREATED:20260421T162148Z
LAST-MODIFIED:20260513T090552Z
UID:10000088-1778707800-1778715000@icmc2026.ligeti-zentrum.de
SUMMARY:Club Concert 3C
DESCRIPTION:Club Concert 3C is an exploration of the boundaries of collective improvisation and creative technology. The SPIIC Ensemble of the HfMT Hamburg presents a program in which the audience has a say\, algorithms extend historical works\, and artificial intelligence reinterprets human movement as a “hallucination.”\nIn the industrial atmosphere of the Speicher am Kaufhauskanal\, acoustic instruments merge with live coding\, neural synthesis\, and interactive notation. \nThis Club Concert is open to the public. Admission is free; registration is not required. \n  \nProgram Overview\nLiquid tensioning\nFernando Egido \nSinophony for Clarence\nJuan Arturo Parra Cancino \nChimerique\nJonathan Wilson \nNEBULA\nEnrique Tomás and Moisés Horta Valenzuela \nplastique\nSe-Lien Chuang and Andreas Weixler \nShamanic Protocol\nOscar Corpo \nA Walk in Polygon Field\nRob Canning \nDEPRECATED\nDenis Połeć – vocal \n  \nMusicians\nMyrsini Bekakou (GR) – violin \nCarmen Kleykens Vidal (ES) – cello \nThembinkosi Mavimbela (ZA) – double bass \nSebastian Sarre (USA) – trumpet \nJeanne Lavalle (FR) – bassoon \nMoritz Christiansen (DE) – tenor saxophone \nVlatko Kučan (DE) – clarinets \n  \nAbout the pieces & artists\nFernando Egido: Liquid tensioning\nLiquid Tensioning is a work for violin and double clarinet\, live notation\, live generative system\, live electronics\, and attendees’ participation (category: Improvised work for ensemble and electronics (SPIIC+ Ensemble)). Liquid Tensioning is a collaborative and interactive work created in real time through its own self-evaluation. The attendees will evaluate the work via a web app\, and the musical generative system will change according to the evaluation in real time. The musicians will receive notes via a live notation system on their mobile phones. 
The title of the work refers to the model of tensioning provided by the generative system\, based on a musical tensioning that is not related to the properties of the musical material. This work belongs to a series of works in which the composer creates a self-referential musical generative system based on the real-time evaluation of the work. The main musical material of this work is its evaluation. The work lasts about 10 minutes. \nAbout the artist\nHe studied composition with José Luis de Delás at the School of Music of the University of Alcalá de Henares and received musical training in workshops with composers\, analysts\, and interpreters around the LIEM or the GCAC. He studied computer music with Emiliano del Cerro.\nHe has published several papers at international conferences.\nHis works have been performed at festivals such as ICMC 2023–2025\, the Bled International Festival\, the SMC Conference in Graz\, the Convergence Festival\, Ars Electronica Linz\, the Atemporánea Festival\, the AIMC 2022 conference\, EVO 2021\, the OUA Electroacoustic Music Festival 2020\, ISMIR 2020 in Montreal\, the Seoul International Electroacoustic Music Festival 2019\, the ACMC 2019 conference in Melbourne\, the SID 2015 conference in New York\, Venice Vending Machine III\, the New York City Electroacoustic Music Festival\, JIEN in the Auditory 400\, La hora acúsmatica\, the SMASH Festival\, the Encontres Festival in Palma de Mallorca\, and ACA. \n  \nJuan Arturo Parra Cancino: Sinophony for Clarence\nSynophonie for Clarence is an ensemble and live electronics work inspired by the formal and sonic principles of Clarence Barlow’s Sinophony I (1970)\, his first electronic composition. Rather than functioning as an arrangement or transcription\, this piece operates as an instrumental extension of Barlow’s electronic sound world\, translating and reactivating its core materials through acoustic performance and real-time electronic processes. 
\nThe work seeks to bring into the physical space of performance elements that\, in Sinophony I\, exist only in fixed media: continuous tones\, slow harmonic transformations\, beating frequencies\, and the perceptual tension between purity and instability. These characteristics are reimagined here as a living\, performative situation\, where instrumental sound and electronics merge into a single\, evolving spectral body. \nSynophonie for Clarence builds on methods developed by Juan Parra Cancino to extract performative salients from early electronic works—elements that can be embodied\, negotiated\, and reshaped by performers in real time. Through this approach\, the piece revisits historical electronic material not as an object to be preserved unchanged\, but as a dynamic field for exploration\, experimentation\, and renewed artistic engagement. The aim is not reconstruction\, but continuation: to recover underlying processes and extend their implications into contemporary performance practice. \nBy situating acoustic instruments\, live electronics\, and spatialized sound within a shared listening ecology\, the work foregrounds collective tuning\, timbral fusion\, and emergent beating phenomena as central musical forces. The ensemble functions less as a group of independent voices than as a composite oscillator\, shaped by subtle interactions and shared attention. \nThis piece is conceived as a tribute to Clarence Barlow—composer\, educator\, and friend—honoring both his pioneering contributions to electronic music and his enduring influence on ways of thinking about sound\, structure\, and musical intelligence. \nAbout the artist\nJuan Parra Cancino studied Composition at the Catholic University of Chile and Sonology at the Royal Conservatoire The Hague\, where he completed a Master’s degree in electronic music. He received a PhD from Leiden University in 2014 on performance practice in computer music. 
A guitarist trained in Robert Fripp’s Guitar Craft\, he has worked extensively in live electronics. He is a researcher at the Orpheus Institute and Regional Director for Europe of the International Computer Music Association (2022–26). \n  \nJonathan Wilson: Chimerique\n“Chimerique” is about the interaction of music and language. Written and premiered in 2017\, this composition is for an ensemble featuring improvisation\, narration\, and electronics. It was realized in a collaboration with poet and translator Patricia Hartland by incorporating her English translation of “Ravines of Early Morning” by Raphael Confiant into a musical setting. The title is taken from a word in this text. It is French for “chimerical\,” and it can be defined as 1: something that takes delight in illusions\, or 2: something that is utopian\, or unreal. The narrator forms associations with this word through various phrases and passages that relate to the part of the story in which the description of “chimerique” is elaborated. Throughout this performance\, the performers listen and react to the text spoken by the narrator (and electronics). They are accompanied by electronics that consist of fixed media and live electronics from two different patches in Max/MSP using additive synthesis and granular synthesis. The musical instruments are the source material for granular synthesis. The score for this composition uses hybrid musical notation with some traditional notation for pitch and some graphic notation that leads performers subsequently to interpret not only the spoken phrases\, but also the graphic notation in their parts to determine volume\, pitch\, rhythm\, articulation\, and contour\, thereby making improvisation a necessity. The narrator and performers work together to generate a spontaneously formed through-composed work that marries text and music. The form can be described as through-composed in six sections. In the first section the performers respond only to a single phrase. 
In sections 2–6\, the performers respond not only to phrases that delineate each section but also to extended narration shifting from descriptions of dreams\, the night\, madness\, and illusions to\, at the end\, the act of dreaming itself. \nAbout the artist\nDr. Jonathan Wilson’s works have been performed at the Ann Arbor Film Festival\, European Media Art Festival\, ICMC\, SICMF\, SEAMUS\, NYCEMF\, MUSELAB\, NSEME\, Napoleon Electronic Music Festival\, Iowa Music Teachers Association State Conference\, and Midwest Composers Symposium. He is the winner of the 2014 Iowa Music Teachers Association Composition Competition. Jonathan has studied composition with Lawrence Fritts\, Josh Levine\, David Gompper\, James Romig\, James Caldwell\, Paul Paccione\, and John Cooper. He has also studied conducting under Richard Hughey and Mike Fansler. Jonathan is a member of the Society of Composers\, Inc.\, SEAMUS\, ICMA\, and the Iowa Composers Forum. \n  \nEnrique Tomás and Moisés Horta Valenzuela: NEBULA\nArtists working with deep-learning audio models often find that exploring their high-dimensional latent spaces requires chance-based\, combinatorial\, or technically complex machine-learning techniques. While these approaches can reveal unexpected possibilities\, they also make it more difficult to deliberately guide the models toward outcomes that are musically meaningful or aligned with specific creative intentions. \nIn this improvisation for solo instrument and two performers on live electronics\, we present an alternative approach that enables a more interpretable and musically guided exploration of the latent space. This approach leverages Principal Component Analysis (PCA) applied to pre-encoded RAVE (Realtime Audio Variational Autoencoder) representations to reorganize the latent data into clusters that can be navigated more deliberately in performance. 
PCA reorganizes the encoded data into clusters based on shared timbral characteristics\, producing data clouds directly connected to the sonic properties of the source material. By structuring access to the latent space in this way\, our method bridges the gap between open-ended exploration and purposeful control\, offering performers a clearer and more intuitive means of shaping sound. \nTo prepare the improvisation\, and prior to the concert\, the solo instrumentalist provides an eight-minute recording that defines the sonic domain of the performance. This recording is encoded and analyzed\, restricting exploration to regions of the latent space shaped by the performer’s own material and giving the electronic musicians a more focused and musically coherent landscape to navigate. During the live performance\, the solo instrumentalist and the two electronic performers interact within this PCA-organized timbral map. Their trajectories through the latent space—along with the evolving clusters and sonic transformations—are projected in real time\, allowing the audience to see how latent-space navigation corresponds to audible change. \nThe musical materials resulting from this setup combine structured instrumental improvisation with electronically generated textures derived from latent-space navigation. While the overall form is left to real-time decisions between the soloist and the live performers\, the resulting sound world often alternates between rhythmically driven motifs—loosely recalling the interactive dynamics of small jazz ensembles—and more abstract electronic layers shaped through PCA-guided trajectories. These electronic textures\, produced by traversing clustered regions of the latent space\, serve as harmonically and timbrally evolving fields against which the soloist can articulate phrasing\, gesture\, and dynamic contour. 
The custom-built performance interfaces allow the electronic performers to shape these materials with precision\, enabling a responsive interplay in which acoustic action and machine-learned transformations continually inform one another. \nAbout the artists\nEnrique Tomás (*1981) is a sound artist\, researcher\, and assistant professor at the Tangible Music Lab who dedicates his time to finding new ways of expression and play with sound\, art\, and technology. His work explores the intersection between sound art\, computer music\, locative media\, and human-machine interaction.\nAs an individual artist\, Tomás’ activity is centered around ultranoise.es and focuses on performances and installations with extreme and immersive sounds and environments. He has exhibited and performed in spaces of Ars Electronica\, Sonar\, CTM\, IRCAM\, IEM\, KUMU\, SMAK\, NOVARS\, STEIM\, Steirischer Herbst\, Alte Schmiede\, etc.\, and in galleries and institutions throughout Europe and Latin America. \nMoisés Horta Valenzuela is a self-taught sound artist\, technologist\, musician\, and researcher from Tijuana\, Mexico\, based in Berlin. His work spans computer music\, neural audio synthesis\, conversational AI\, and the politics of emerging technologies\, approached through a critical lens that connects ancestral knowledge with contemporary digital culture. He has presented work internationally at Ars Electronica\, NeurIPS ML for Creativity & Design\, MUTEK México\, MUTEK AI Art Lab Montréal\, Transart Festival\, CTM Festival\, Elektron Musik Studion\, and the Sound and Music Computing Conference\, among others. \n  \nSe-Lien Chuang and Andreas Weixler: plastique\ninteractive audiovisual comprovisation for e-guitar\, green leaves & i-hands – GLISS – Green Leaves Imaginary Scenic Score\nDuration: ca. 8 min \nAbout the artists\nAndreas Weixler\, born 1963 in Graz\, Austria\, is a composer of computer music with an emphasis on intermedia real-time processing. 
He teaches at the mdw Vienna and at Interface Cultures in Linz\, and serves as associate university professor at the CMS – Computer Music Studio of the Anton Bruckner University in Linz\, where he initiated the Sonic Lab\, an intermedia concert hall. He studied contemporary composition at the KUG in Graz\, Austria\, receiving his diploma under Beat Furrer\, complemented by international projects and residencies. \nSe-Lien Chuang is a composer born in Taiwan in 1965 and based in Austria since 1991. Her work focuses on contemporary instrumental composition and improvisation\, computer music\, and audiovisual interactivity. She has presented works and lectures internationally in Europe\, Asia\, and the Americas at events such as ICMC\, ISEA\, and NIME. From 2016 to 2019\, she taught for the Computer Music Studio at Bruckner University Linz. Since 1996\, she has co-run Atelier Avant Austria\, specializing in audiovisual interactive systems\, real-time processing\, and computer music. \n  \nOscar Corpo: Shamanic Protocol\nShamanic Protocol is an online sound ritual performed by a partially damaged virtual entity. Its memory is an incomplete and corrupted archive\, composed of residual sonic materials related to shamanic rituals\, music therapy\, sound-based healing practices\, and data derived from musical epigenetics. Reshaped by the available data and the presence of connected users\, these fragments are reprocessed and reorganised each time the system is accessed\, generating a sonic ritual that follows a recognisable structure yet never manifests in the same way twice. The sound ritual has no declared purpose: it remains unclear whether the entity performs the rite as an attempt to repair itself\, an act of archive restoration\, a process meant to affect human listeners\, or simply because this process constitutes its way of operating. The variability of the outcome may suggest either a gradual recovery or a progressive deterioration of the system. 
The resulting sonic output exists in a space between therapeutic effect\, system malfunction\, and autonomous algorithmic process. The shifts between fragile calm\, overload\, interruption\, and recovery reveal the instability of the system that generates it. No clear boundary is drawn between healing\, malfunction\, or expression: these states coexist and remain indistinguishable within the process. The rite can be experienced as a purely electronic process\, or human performers\, in any instrumental or vocal configuration\, may take part in its enactment. Musicians are invited to participate in the ritual rather than interpret a fixed musical text. Guided by an open\, interpretative score\, performers do not execute predefined material but engage in the ritual itself\, interacting with the electronic layer by listening\, responding\, and aligning their gestures with the evolving sonic environment. The notation offers indications of behaviour\, density\, register\, and gesture rather than prescribed material; in this way\, performers take part in the rite by freely amplifying\, refracting\, and destabilising the entity’s activity. The score prescribes no precise instrumentation or techniques; in this instance\, the ritual is performed with a string ensemble alongside soprano saxophone\, bass clarinet\, piano\, and percussion. Performers do not guide the system\, nor do they follow it; instead\, they remain in a state of attentive coexistence with its unfolding behaviour. Each performance is therefore situated\, shaped by specific conditions\, configurations\, and presences.\nThe process does not call for interpretation: repair and damage are no longer separable; function and meaning no longer distinguishable. \nAbout the artist\nOscar Corpo (born 8 April 1997\, Naples\, Italy) is an Italian composer based in Hamburg. 
He studied Composition and Multimedia Composition in Naples\, and is now a PhD candidate at the HfMT Hamburg\, focusing on AI and collective improvisation with Ensemble 404. His work spans electronic\, instrumental\, vocal\, improvisation\, and music theatre. He has collaborated with Alexander Schubert\, Berliner Philharmoniker\, La Biennale di Venezia\, and Lux Nova Duo\, among others. \n  \nRob Canning: A Walk in Polygon Field\nA Walk in Polygon Field is a graphic score environment for controlled improvisation\, composed for 1–4 instrumentalists with electronics and surround diffusion. Three polygons—pentagon\, hexagon\, heptagon—rotate at different rates\, producing polymetric phase relationships (5-against-6-against-7). Performers activate objects orbiting these shapes\, interpreting compound visual motion as sonic material. An outer ring generates OSC data driving spatial processing.\nThe score defines states\, behaviours\, and constraints; performers negotiate what these structures sound like. Each polygon side represents a discrete performance state—pitch region\, articulation\, texture—but specific mappings remain open. Musicians enter and withdraw from a shared texture whose density and pacing emerge from collective decision-making.\nAuthored entirely in SVG\, the work embeds performance semantics directly into visual element identifiers\, executed by a browser-based runtime on networked tablets. This approach\, detailed in the accompanying paper “Scores That Run: Graphic Notation with Embedded Performance Semantics\,” demonstrates how open web standards support animated notation without specialised infrastructure. Each performance traces a different route—music negotiated through shared encounter with a moving score. 
\nFull Guide to Interpretation\, Programme Notes and supporting materials including Supercollider live electronics patch are available online: \nhttps://robcanning.github.io/oscilla/compositions/polygonfield2026/ \nAbout the artist\nRob Canning (Dublin\, 1974) is a composer\, improviser\, and creative technologist whose work explores animated notation\, improvisation\, and the dynamics of networked musical systems. He holds a PhD in composition from Goldsmiths\, University of London\, where his research examined distributed authorship in computer-assisted music. A long-time advocate of Free and Open Source Software\, he develops Oscilla\, an open-source platform for animated graphic notation and networked performance. \n  \nDenis Polec: DEPRECATED  \n“You are a retired composition professor\, an old white man who feels deeply unrecognized and overlooked. You cannot judge a performance objectively because you relate everything to your own career. You view the musician’s work through a lens of bitterness\, constantly comparing it to your supposedly superior\, ignored compositions.” \nDEPRECATED is an experimental setup where a computer observes a live performance and comments on it in real-time. Driven by a Multimodal Large Language Model\, the system processes visual data to generate a continuous stream of thought. The output shifts between human-like sentences and the raw numerical sequences used by the model’s architecture. This process breaks language down into its constituent tokens—the word fragments the machine uses to process and represent information. \nAbout the artist\nDenis Połeć is a Hamburg-based multimedia artist\, composer\, and creative programmer. His practice focuses on generative systems and the influence of digital interfaces on human behavior. He holds a Master’s degree in Multimedia Composition from the Hamburg University of Music and Drama (HfMT). 
\n  \nVolunteers\nTechnical Directors\nLeon Sudahl / Bastian Striepke \nAssistants\nHanna Habich\nDenis Polec\nWan Qian Lin\nMurphy Baginski\nPietro Frigato
URL:https://icmc2026.ligeti-zentrum.de/event/club-concert-3c/
LOCATION:Speicher am Kaufhauskanal\, Blohmstraße 22\, Hamburg\, 21079\, Germany
CATEGORIES:13-05,Club Concert,Music,Special Event
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260514T070000
DTEND;TZID=Europe/Amsterdam:20260514T200000
DTSTAMP:20260513T234817
CREATED:20260430T152154Z
LAST-MODIFIED:20260430T152617Z
UID:10000240-1778742000-1778788800@icmc2026.ligeti-zentrum.de
SUMMARY:Installation | Healing Soundscapes (invited)
DESCRIPTION:Healing Soundscapes are developed and implemented for waiting and working areas in the University Medical Center Hamburg-Eppendorf.   \nThe installation presented at ICMC HAMBURG 2026 was developed for the waiting area of the emergency department and is played there 24/7. It is intended to create a positive atmosphere in the waiting area\, thereby making the wait more pleasant for patients.  \nThe Healing Soundscapes project is part of the interdisciplinary ligeti center\, which is funded by the Federal Ministry of Research\, Technology and Space (BMFTR) and the City of Hamburg as part of the Federal-State Initiative Innovative University.  \n 
URL:https://icmc2026.ligeti-zentrum.de/event/installation-healing-soundscapes-invited/2026-05-14/
LOCATION:Hamburg University of Technology\, Building J\, Library (Rotunde)\, Denickestraße 22\, Hamburg\, 21073\, Germany
CATEGORIES:Installation
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260514T090000
DTEND;TZID=Europe/Amsterdam:20260514T103000
DTSTAMP:20260513T234817
CREATED:20260422T134949Z
LAST-MODIFIED:20260511T160941Z
UID:10000220-1778749200-1778754600@icmc2026.ligeti-zentrum.de
SUMMARY:Paper Session 7b: Interactive Media I
DESCRIPTION:Session Chair: Mara Helmuth\n  \nPaper abstracts\nAdriano C. Monteiro and Rafaela B. Pires: “Exploring DIY Cassava-Starch Bioplastic Interfaces with EFT-Based Touch Sensing in an Interactive Sound Installation”\nThis paper reports the design\, implementation\, and preliminary validation of a tangible interface that combines electrical field tomography (EFT)\, support vector machines (SVM)\, and cassava-starch bioplastics for an interactive sound installation. The system addresses two main challenges: 1) creating low-cost\, large-area touch surfaces with flexible geometries\, and 2) integrating bio-degradable materials into electronic interfaces while preserving sufficient electrical and mechanical stability for real-time performance. It consists of bioplastic interfaces\, custom hardware for multiplexed current injection and voltage measurement\, and a software pipeline for signal conditioning and SVM-based touch classification. Results show that the system can reliably distinguish a vocabulary of touch gestures on irregular bioplastic objects\, while also revealing limitations related to long-term stability\, calibration\, and object-specific training. Finally\, the paper discusses its integration into De/Re:Generation\, a sound installation where bioplastic sculptures operate both as scenographic elements and as interactive surfaces within a vibro-acoustic environment inspired by the cicada life cycle.\nGuanjun Qin\, Yunxuan Jia and Neal Farwell: “Reimagining Athletic Gesture: Transforming Basketball Sound into Narrative Electroacoustic Music”\nThis paper presents FMVP\, an electroacoustic fixed-media composition that transforms the sounds of basketball into a narrative of doubt\, struggle\, and redemption. Built entirely from field recordings captured on an indoor court\, the work reimagines sport as a metaphor for resilience and creative endurance. 
Through granular time-stretching\, spectral transformation\, dynamic filtering\, and spatial motion\, physical gestures such as dribbling\, sliding\, and impact are translated into evolving sonic textures and large-scale form. Inspired by the career arc of NBA player Stephen Curry\, the composition explores how kinetic rhythms can be reshaped into emotional trajectories\, aligning with the conference theme of Innovation\, Translation\, Participation. Methodologically\, the project sits within artistic research\, using composition as a mode of inquiry into the relationship between embodied action and sound narrative. The paper discusses the conceptual framework\, sound-design process\, and structural strategies underpinning FMVP\, arguing that everyday athletic environments offer rich material for electroacoustic storytelling and for rethinking how listeners participate in narratives constructed purely through sound.\nSitong Wu and Jinshuo Feng: “Gestalt: A Symbiotic Framework for Real-Time Collaboration between Performers and Mass Audiences”\nThis paper presents Gestalt\, a real-time co-creative audiovisual performance system for professional performers and large-scale audiences. To address participation barriers\, interaction latency\, and unequal creative agency in Networked Music Performance (NMP)\, Gestalt adopts a browser-based heterogeneous architecture: performers retain structural control via MediaPipe-based motion capture\, while 50–200 audience members participate through a mobile web multi-touch interface. Centered on a mechanism termed “Translation\,” the system performs a dual reconstruction. On the audio side\, an activity-weighted aggregation algorithm transforms large volumes of discrete gestures into coherent musical textures. On the visual side\, audience touch inputs are streamed in real time to a physics-driven WebGL particle stage\, translating collective crowd activity into ordered audiovisual forms. 
Technically\, the web frontend connects to a Max/MSP audio engine via OSC (Open Sound Control)\, and to the visual stage via WebSocket. Benchmark tests and pilot workshops examine how the architecture can preserve performer-led form while enabling audience aesthetic agency. Gestalt is released as an open-source platform for future interactive media creation. \n 
URL:https://icmc2026.ligeti-zentrum.de/event/paper-session-7b-interactive-media/
LOCATION:Hamburg University of Technology\, Building H\, Ditze Hörsaal (H 0.16)\, Am Schwarzenberg-Campus 5\, Hamburg\, 21073\, Germany
CATEGORIES:14-05,Paper Session,Session
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260514T090000
DTEND;TZID=Europe/Amsterdam:20260514T103000
DTSTAMP:20260513T234817
CREATED:20260422T135320Z
LAST-MODIFIED:20260512T072400Z
UID:10000224-1778749200-1778754600@icmc2026.ligeti-zentrum.de
SUMMARY:Paper Session 7a: Signal Processing I
DESCRIPTION:Session Chair: Guilherme Coehlo\n  \nPaper abstracts\nNeal Anderson and Sanjay Majumder: “MBHD: A Modular Audio Playback and Manipulation System for Loop-Based Performance”\nThis paper outlines the development of MBHD (Modular Beat Handling Device)\, a real-time audio performance system using Cycling ’74 Max that connects the reliability of DJing with the expressiveness of live electronic composition. While DAWs provide reliable synchronization\, achieving a harmonically coherent alignment of loops from different library collections typically requires significant manual editing of metadata. MBHD addresses this challenge with a new\, lightweight naming convention based on filename-encoded musical attributes (tempo\, root note\, and instrument role)\, providing automatic harmonic coherence. The system is organized around four independent layers of musical content that can be reused and rerouted (drums\, bass\, harmony\, and melody)\, and utilizes real-time digital signal processing (DSP) to dynamically adjust the pitch and timing of each layer to align with a global key and tempo. Beyond the system’s architecture\, we describe its integration with external environments (via Ableton Link) and a user interface designed for minimal latency in performance. Lastly\, we report evaluation results (a technical benchmark and a user study) for MBHD\, demonstrating how transparent\, filename-driven architectures can facilitate the use of loops in improvisation. \nThe Max patches for this project can be accessed at: phewsh.com/mbhd/max/. Additionally\, a browser-based companion application is available at: phewsh.com/mbhd/. 
\n  \nSam Pluta and Ted Moore: “The MMMAudio Computer Music Environment”\nWe introduce MMMAudio\, a new audio creative coding environment designed to close the gap between instrument building and low-level DSP development while reducing the maintenance burden typical of monolithic\, compiled systems. Contemporary computer music languages such as Max\, Pure Data\, and SuperCollider excel at graph-based instrument design but impose steep barriers when custom DSP is required\, pushing users into C/C++ plugin workflows with unfamiliar APIs\, build systems\, and cross-platform complexities. MMMAudio addresses these issues by centering its programming model on Mojo for high-performance DSP and seamless Python–Mojo interoperability for tooling\, AI\, and scientific libraries. In MMMAudio\, unit generators (UGens) are simple Mojo structs\, enabling users to write\, test\, and distribute new UGens without leaving their code editor or contending with external build pipelines. This design simultaneously encourages new DSP creation\, leverages Python’s mature ecosystem for machine learning and data processing\, and exploits Mojo’s performance features (e.g.\, SIMD) for fast\, real-time audio processing. We present the system’s architecture\, programming model\, and extension mechanisms.\nTian Cheng\, Tomoyasu Nakano and Masataka Goto: “Exploring Masked CE Losses to Enhance Word Offset Estimation in CTC-based Lyrics-to-Audio Alignment”\nLyrics-to-audio alignment is an important task for real-world applications such as karaoke systems. Although alignment performance has improved with the release of large datasets and the adoption of advanced deep learning models\, accurate word offset estimation remains challenging.\nTo address this problem\, we extend our previously proposed masked cross-entropy (CE) loss by proposing new masks to enforce model predictions at masked frames with frame-wise phoneme labels derived from word-level annotations. 
We train a Convolutional Recurrent Neural Network (CRNN) by using both the masked CE loss and the Connectionist Temporal Classification (CTC) loss. By comparing the results obtained by using different masks in the masked CE loss\, we find that word offset estimation performance is improved by using masks which cover all silent frames. In addition\, we find that masks on word onset frames are essential for improving word onset estimation performance. We achieve comparable word onset estimation results and provide benchmark word offset estimation results for future research.\n 
URL:https://icmc2026.ligeti-zentrum.de/event/paper-session-7a-signal-processing-i/
LOCATION:Hamburg University of Technology\, Building H\, Audimax 1\, Am Schwarzenberg-Campus 5\, Hamburg\, 21073\, Germany
CATEGORIES:14-05,Paper Session,Session
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
END:VCALENDAR