BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//ICMC HAMBURG 2026 - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:ICMC HAMBURG 2026
X-ORIGINAL-URL:http://icmc2026.ligeti-zentrum.de
X-WR-CALDESC:Events for ICMC HAMBURG 2026
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Amsterdam
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20250330T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20251026T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20260329T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20261025T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20270328T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20271031T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260516T150000
DTEND;TZID=Europe/Amsterdam:20260516T180000
DTSTAMP:20260513T201811Z
CREATED:20260421T193915Z
LAST-MODIFIED:20260511T151015Z
UID:10000198-1778943600-1778954400@icmc2026.ligeti-zentrum.de
SUMMARY:Installation Showcase
DESCRIPTION:Program Overview\n\nEmpty Sets\nMichael Trommer \nPast Tense – Exploring Idleness and Boredom as Compositional Strategies \nJulian Rubisch \nSynthPlay: The Huggable Synthesizer\nXingxing Yang \nFlower.Mirror \nYuxin Chen \nMirRaspiju\nEmanuele Sara \nF L W R: A generative audiovisual work of small file media\nArne Eigenfeldt\, Jim Bizzocchi and Simon Overstall \nThe Emille Bell: Resonance of Absence and the Abyss.\nYongwoo Lee \n  \nAbout the pieces & artists\nMichael Trommer: Empty Sets\nEmpty Sets is an audio-led ambisonic\, 3D-animated virtual reality installation that situates the auditor within the depleted landscape of the technological sublime. Environments are vacated; drones and Cybertrucks patrol within a perpetual penumbra as missives of an unknown\, unseen power; billboards stand as eerily emptied ciphers\, their marketing-speak unmoored\, haunting the liminal spaces of a topography that is characterized by\, as media theorist Sean Cubitt puts it\, a “becoming-environment of information” (Cubitt 2013\, 489). Empty Sets can be configured as either a headset-based or dome-based installation. \nAbout the artist\nMichael Trommer is a Toronto-based sound and video artist; his practice has been focused primarily on psychogeographical and acoustemological explorations of anthropocentric space via the use of spatial and tactile sound\, field recordings\, VR\, immersive installation and expanded cinema. He has released material on an unusually diverse roster of labels\, both under his own name as well as ‘sans soleil’. These include Transmat\, Wave\, Ultra-red\, and/OAR\, Audiobulb\, Audio Gourmet\, Gruenrekorder\, Impulsive Habitat\, Stasisfield\, Serein\, Flaming Pines\, 3leaves\, Unfathomless and con-v. His audio-visual installation work has been exhibited at Australia’s Liquid Architecture festival\, Kunsthalle Schirn in Frankfurt\, Cordoba’s art:tech\, St. Petersburg’s Gamma Festival\, and Köln’s soundLAB\, among others. 
Michael has performed extensively in North America\, Europe and Asia\, including events with members of Berlin’s raster-noton collective\, as well as the 2008 and 2013 editions of Mutek’s acclaimed a/visions series. He also regularly improvises with Toronto-based AI audio-visual collective ‘i/o media’. In addition to teaching graduate sound design and sound art at George Brown College\, Michael also teaches Sound Film at Toronto Metropolitan University\, Think Tank at OCAD University and Media Practice and Sonic Cinema at York University\, where he is a PhD graduate and SSHRC Joseph-Armand Bombardier doctoral scholar in Cinema and Media Art. \n  \nJulian Rubisch: Past Tense – Exploring Idleness and Boredom as Compositional Strategies \nPast Tense is a participatory\, long-term sound installation inspired by “idle” games—systems that evolve on their own and invite only occasional interaction. Small sounds captured from the environment\, electronics\, radio\, and visitors slowly accumulate\, forming a growing sonic mass. Participants may briefly intervene\, releasing short sonic gestures before the system settles back into its autonomous flow. This process\, informed by the idea that such games can reveal a “worldness” (as opposed to “gameness” characterized by a flow state) where players transcend the game mechanics to find new ways of self-expression\, allows a self-referential musical memory to unfold from a small kernel of inputs over time\, and past sonic material to re-enter and reshape future states. \nAbout the artist\nJulian Rubisch\, born in Vienna in 1981\, is a freelance sound artist\, software engineer\, and electroacoustic composer. \nHe has presented performances and installations at prestigious venues such as ZKM Karlsruhe\, Ars Electronica\, Francisco Carolinum Linz\, Alte Schmiede Vienna\, ORF Ö1 Kunstradio\, and others. 
\nHe loves to explore boundaries and interfaces:\n– Between sound art and music\,\n– Between order and chaos\,\n– Of perception and the different realms and phenomenologies of listening \n  \nXingxing Yang: SynthPlay: The Huggable Synthesizer\nUkuPlay reimagines the ukulele’s interaction model by leveraging the unique material affordances of e-textiles to create a soft\, monolithic\, and huggable interface. By utilizing a deformable textile structure\, the system establishes a tactile feedback loop that integrates the instrument’s sensing capabilities directly into its physical form. Through the embedding of capacitive and piezoresistive properties into a plush cushion\, UkuPlay transcends its role as a domestic object to function as a high-fidelity\, expressive instrument.\nVisitors are invited to explore this blur between “comfort-oriented soft goods” and “performance hardware”. Through intuitive gestures—such as strumming the fabric and fretting the neck—the installation demonstrates how e-textile “smart matter” captures nuanced\, multi-dimensional performance data. UkuPlay offers a vision of future musical tools where tangible intimacy and expressive power coexist seamlessly. \nAbout the artist\nXingxing Yang is an interdisciplinary computer musician based in Hong Kong. She is a Ph.D. student at HKUST\, specializing in computer-assisted audio\, music\, and haptics. She received her bachelor’s and master’s degrees in Music Tech from the Shanghai Conservatory of Music and Stanford University. She is interested in making novel sound toys\, doing AI-assisted music composition\, and building VR storytelling experiences and constructive tools for builders. \n  \nYuxin Chen: Flower.Mirror \nFlower.Mirror is a poetry-driven multimedia interactive game that explores the interplay of poetry\, sound\, and visual space. 
The program guides its player through dream-like landscapes\, where the player puzzles through and triggers stages of the game by constructing texts into poetic phrases. Instead of a goal-oriented structure\, the experience unfolds in a meditative and contemplative state\, allowing each gesture\, sound\, and transition to take its time. The player is invited into a slower mode of exploration\, one that emphasizes listening\, patience\, and reflection\, and gradually enters the intercultural narrative embedded within the project. \nAs its title suggests\, Flower.Mirror unfolds across two chapters: “Mirror” and “Flower\,” where each is a multi-staged miniature poetic game. “Mirror” reflects a period in the artist’s life marked by anemoia—the reconstruction of memory around an idealized hometown that never truly existed\, formed during her first experience studying abroad. The player engages with language as a compositional material\, modulating nouns with verbs and adjectives by aligning words horizontally\, as if writing a poem in the space. Decisions regarding text placement directly shape the sound: the size of a selected text determines its associated sound’s volume\, while the drop location of the text affects its stereo panning. \n“Flower” represents a stage of reconciliation. In this chapter\, the artist quietly looks back on her childhood self\, recognizing the beauty of uncertainty and unknowing\, and learning to embrace her past and present selves. These personal narratives are presented not as fixed stories\, but as fragments—moments that surface\, dissolve\, and reappear through interaction. Across multiple stages\, players align words to initiate musical development before transitioning into a navigational mode\, where hidden words are discovered and collected within a darkened visual field. The interaction becomes increasingly embodied\, shifting from writing to movement and presence. 
\nThe ideas of “Flower” and “Mirror” are deeply intertwined in traditional Chinese culture. The project draws inspiration from Dream of the Red Chamber\, where the concept of 镜花水月 (“Mirror\, Flower\, Water\, Moon”) is a recurring theme. Often translated as “flower in the mirror\, moon on the water\,” the phrase describes a beauty that is vivid yet transient. Through this lens\, the project observes the worldly concerns tied to intercultural identity\, how they evolve over time\, rise and fall in cycles of yin and yang\, and eventually soften into reconciliation. Our life\, like the flower in the mirror\, is filled at times with loneliness and sorrow\, at others with joy and tenderness\, yet remains a fleeting experience. \nAbout the artist\nMichelle Chen (Yuxin Chen\, also known as Morning Close) is a composer and interactive-media artist whose work spans installations and soundtracks for VR\, games\, animation\, and film. Among the projects she worked on as lead\, the indie game “Displacemen” was nominated for Best Student Game at GDC IGF 2025. \nMichelle is currently a master’s student at Stanford’s Center for Computer Research in Music and Acoustics (CCRMA)\, where she explores the interplay of text\, visual music\, and a humanistic approach to AI as a creative tool. Her practice is characterized by nonlinear storytelling and a poetic sensibility that runs through both her music and interactive media works. Across these pieces\, she often constructs a serene\, contemplative world that gradually reveals an underlying contrasting or dual nature. \n  \nEmanuele Sara: MirRaspiju\nMirRaspiju is an interactive sound installation that transforms a physical mirror into a performative sonic interface. The work explores the relationship between self-perception\, gesture\, and sound through an object that is simultaneously visual\, acoustic\, and responsive. 
A one-way mirrored glass panel becomes both a reflective surface and a sound-emitting body\, allowing the visitor to see themselves while generating sound through their own presence and gestures.\nThe installation consists of a unidirectional mirror mounted on a wall or pedestal\, behind which a camera is concealed. The mirror itself is equipped with contact transducers\, enabling the glass surface to vibrate and act as the sole sound source. When no one is present\, the system remains silent; sound emerges only when a person approaches and looks into the mirror\, establishing an intimate\, one-to-one interaction.\nGestural and facial data are captured in real time via computer vision techniques implemented in Python using OpenCV and MediaPipe. Hand movements\, facial expressions\, mouth openings\, and eye closures are mapped to parameters controlling the playback and transformation of four preloaded audio buffers. These buffers consist of granular textures created in Max/MSP and processed through an original algorithm based on elastic sound transformation\, where rhythmic and timbral structures are continuously deformed by varying buffer playback speed and modulation parameters.\nThe core audio engine is developed in Max/MSP and exported via RNBO\, then compiled into native C++ code running on a Raspberry Pi 5. This embedded architecture allows the installation to operate autonomously\, without external computers\, ensuring low latency and stable performance in exhibition contexts. The entire system is self-contained and battery-powered\, facilitating flexible installation logistics.\nEach buffer is associated with a specific type of interaction: hand gestures modulate playback speed\, amplitude\, filtering\, and waveform modulation; mouth openings control texture density and temporal flow; eye closures activate reverberation parameters. 
These interactions generate a continuously evolving soundscape that responds directly to the visitor’s bodily presence\, transforming the act of looking at oneself into a compositional gesture.\nMirRaspiju positions the mirror as a liminal medium between vision and sound\, presence and transformation. Rather than functioning as a spectacle-driven interface\, the work emphasizes minimal\, introspective interaction\, inviting visitors to listen to their own reflected image. The installation proposes a form of embodied listening in which perception\, gesture\, and sound are inseparably linked\, transforming a familiar object into a subtle yet expressive musical instrument. \nAbout the artist\nEmanuele Sara is a bachelor’s student in Electroacoustic Composition at the Conservatoire “Luigi Canepa” in Sassari under the guidance of professor Walter Cianciusi. He has also attended specialization courses in music applied to images and pop composition held by the Centro Sperimentale di Cinematografia Roma and the Centro Europeo di Toscolana scuola di Mogol\, as well as seminars by Sergi Jordà\, Silvia Lanzalone\, and Riccardo Mantelli. He leads his sound experiments into the field of electronic music\, creating his own code of signs\, acoustic and visual symbols.\nBorn in Ossi\, Italy (1986)\, he has\, in addition to composing electronic music\, performed as a singer-songwriter since 2008\, with several albums released. Sara’s compositions have been performed at various festivals and events\, including Conservatoire “Canepa” Sassari\, Conservatoire “Respighi” Latina\, Conservatoire “Palestrina” Cagliari\, and Festival Spazio Musica Cagliari.\nAs an interpreter\, Sara performs under the pseudonym “Namowam” and has produced the electronics for several contemporary compositions. \n  \nArne Eigenfeldt\, Jim Bizzocchi and Simon Overstall: F L W R: A generative audiovisual work of small file media\nA generative audio-visual work of small file media. 
Five videos consisting of a slow pan over flowers\, ranging in duration from 41 to 72 seconds\, were compressed using five different algorithms to create 25 files of a size less than 1.44 MB each. This database is accessed by a generative audiovisual system to create scenes of five related videos\, slowly fading between them at a playback rate ranging from 11% to 33%. Each scene was combined with generative audio stretched by an equal amount. The end result of F L W R is a generative sampling of the visual poetics of video compression. \nAbout the artists\nArne Eigenfeldt is a composer of live electroacoustic music\, and a researcher into intelligent generative music systems. His music has been performed around the world\, and his collaborations range from Persian Tar masters to free improvisers to contemporary dance companies to musical robots. He has presented his research at major conferences and festivals\, and published over 60 peer-reviewed papers on his research and collaborations. \nJim Bizzocchi is a filmmaker currently working in video art and installation. His research interests include the aesthetics and design of the moving image\, interactive narrative\, and the development of computational video sequencing systems. He is interested in the effect of new technologies on cinematic visual expressions such as split-screens\, layered imagery\, image transitions\, and stereoscopic cinema. \nSimon Lysander Overstall is a computational media artist\, and musician/composer from Vancouver\, Canada. He develops works with generative\, interactive\, or performative elements. He is particularly interested in computational creativity in music\, physics-based sound synthesis and performance in virtual environments\, and biologically and ecologically inspired art and music systems. He has produced custom performance systems and interactive art installations that have been shown in Canada\, the US\, Europe\, and China. 
\n  \nYongwoo Lee: The Emille Bell: Resonance of Absence and the Abyss.\nSome sounds can no longer be struck. Under the names of preservation\, protection\, and history\, they have been removed from direct experience. The Emille Bell remains as form\, sound\, and legend. Its story—of a monk sacrificing an infant in pursuit of a “good sound”—has shaped a normative understanding of what a bell should sound like\, while obscuring the social contexts and repeated failures embedded in its making.\nThis work explores an alternative way of engaging with sonic heritage under such conditions. It reconstructs the form and sound of the bronze Emille Bell through physical modeling\, while simultaneously presenting an imaginary bell that never existed: the bell that failed to be realized within the legend itself. The Bell of Absence emerges from narrative omissions\, distortions\, and the traces of failure that have been historically excluded.\nIn this project\, physical modeling is not treated as a tool of faithful reproduction. Instead\, it functions as a speculative medium. Physical parameters—such as shape\, material\, and resonance—are not optimized for acoustic accuracy but are used as compositional materials through which absence and deviation can be articulated. The Bell of Absence is intentionally configured with unstable structures and low-frequency resonances\, producing sounds that diverge from traditional criteria of bell timbre.\nHere\, absence does not signify silence or lack. It is understood as a structural condition shaped by historical and narrative choices. Resonance becomes the means through which this absence remains audible—as vibration\, instability\, and persistence rather than resolution. Rather than reproducing an idealized sound\, this work listens to what was never allowed to fully emerge\, allowing failed and omitted sounds to resonate in the present. \nAbout the artist\nYongwoo Lee is a composer deeply interested in the humanities and aesthetics. 
He studied history and fusion culture content development as an undergraduate\, with a minor in composition. His musical perspective has been shaped by diverse life experiences\, including work as a student researcher at the History and Culture Archive Center and CREAMA (Center for Research in Electro-Acoustic Music and Audio)\, as well as his roles as a Conscripted Firefighters Agent (CFA) and cultural interpreter. Through these experiences\, he approaches composition as a process of transforming lived experience into sound\, while also engaging with technical research\, particularly in integrating physical modeling techniques into his compositional practice.
URL:http://icmc2026.ligeti-zentrum.de/event/installation-showcase/
LOCATION:Hamburg University of Technology\, Building H (H 0.03)\, Am Schwarzenberg-Campus 5\, Hamburg\, 21073\, Germany
CATEGORIES:16-05,Installation,Showcase
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260516T150000
DTEND;TZID=Europe/Amsterdam:20260516T180000
DTSTAMP:20260513T201811Z
CREATED:20260423T142012Z
LAST-MODIFIED:20260423T190112Z
UID:10000227-1778943600-1778954400@icmc2026.ligeti-zentrum.de
SUMMARY:Innovation Showcase
DESCRIPTION:Takayuki Rai and Haruka Hirayama: Motion Capture: Mocopi and Sound Interaction in Max\nThis paper presents a Local Space–based motion analysis framework using Sony’s Mocopi motion capture system\, implemented in the Max environment for real-time audio interaction. The framework is designed for live performance and interactive demonstration. In previous work\, joint position data transmitted from Mocopi were converted into World Space coordinates and applied to audio and visual interaction. While this approach enabled various interaction designs\, it revealed a fundamental limitation: identical body movements produced different coordinate values depending on the performer’s facing direction. This weakened the intuitive correspondence between bodily sensation and control data\, particularly in performance and improvisational contexts. To address this issue\, the present study introduces a Local Space representation defined relative to the performer’s body orientation. Joint positions are transformed from World Space into a forward-facing\, body-relative coordinate frame that moves and rotates with the performer. This enables consistent detection of body movements regardless of orientation on stage\, preserving the perceptual relationship between physical action and control data. Based on this framework\, several Max external objects were developed to estimate body orientation\, convert joint positions into Local Space\, and compute motion features such as movement distance\, direction\, and angular change. Application examples demonstrate that movement-based audio control becomes more stable and intuitive in Local Space. The system was evaluated through Poster + Demo presentations with a Mocopi-equipped performer\, highlighting its suitability for interactive performance and artistic applications. 
\n  \nWalker Smith: The Magic Alchemical Drum Set: a transducer-driven light-up drum set using timbres and scales derived from sonified chemical element spectra\nThe Magic Alchemical Drum Set is an interactive audiovisual instrument that integrates three lines of preliminary research: (1) the construction of element-specific timbres using sonified spectral data and perceptually motivated transformations\, (2) the design of unequal-tempered microtonal scales derived from elemental spectra and implemented on a LumaTone keyboard\, and (3) a transducer-driven drum set that physically couples these sounds to acoustic percussion instruments and synchronized lighting. Together\, these components form a system that transforms static spectroscopic data into a playable\, performative instrument emphasizing tactile interaction and audiovisual correspondence. The paper provides a brief overview of related work\, outlines the design considerations underlying the scales and timbres\, and documents the construction and use of the drum set in both compositional and interactive installation contexts\, including feedback from participants. A detailed demo video is provided along with all necessary code. Conclusions and future work in the areas of scale and timbre design\, as well as interactive audiovisual instrument design\, are presented. \n  \nMatthias Jung: Incisions: Tangible Latent Space Exploration with Three Sound Balls\nThis system proposes an interactive\, tactile approach to exploring machine learning models collaboratively in real time. The system design is a work in progress and at this stage connects three handheld\, spherical devices (sound balls) to three machine learning models. The sound balls are equipped with pressure sensors and gyroscopes that send their readings from an ESP32 via OSC over Wi-Fi to a Max/MSP patch hosting the model playback. 
The patch uses different open-source and self-trained models that are then mixed into a master playback audible via headphones by the three sound ball players\, who collaboratively explore the models via a latent-dimension setup. \n  \nKieran McAuliffe\, Ornella Tortorici and Ali Elnwegy: Robotics for Digital Artists: OSC-ROS Integration\nThe Robot Operating System (ROS) has become a de facto standard for robot software development\, offering powerful tools for real-time communication\, control\, and simulation. However\, its complexity presents significant barriers for multimedia artists and creative practitioners. In contrast\, the accessible Open Sound Control (OSC) protocol is widely adopted in the creative coding community and supported by numerous artistic software environments. This demo showcases a prototype OSC–ROS bridge designed to lower the entry barrier for artists working with robotic systems. It receives OSC messages from the user and converts them into joint trajectories\, which it sends over ROS. Participants in the demo can interact with two setups: controlling a custom-built painting robot and sonifying the motion of an industrial robot arm. These applications highlight how robotic systems can function both as expressive actuators and as performative interfaces. \n  \nCharles Hutchins and Shelly Knotts: SCMoo: A Live Codeable VR Environment\nAfter the loss of Mozilla Hubs and the end of most Metaverse hype\, we present a retro\, text- and sound-based VR platform for live coding interactive music in SuperCollider\, which is accessible\, enjoyable and lower carbon than polygon-based systems. In the 1990s\, text-based MUDs (Multi-User Dungeons) and MOOs (MUDs Object Oriented) were inhabited by hundreds of users. The communities in these spaces could design any avatars they wanted\, which could perform any actions they could describe (limited only by imagination and language) as the medium itself was text. 
MOOs provided all users with the possibility to add objects\, rooms\, actions\, behaviours and other features to the environment through object-oriented programming. The collaboratively built VR environment was live coded by the users\, who built features through iterative design within the shared platform. This demo presents SCMoo\, a reimplementation of a LambdaMOO-like system written in the musical programming language SuperCollider. SCMoo is a multi-user platform for sound making and role play. \n  \nJuliana Lüer\, Christoph Salje and Prof. Dr.-Ing. Thorsten A. Kern: Controlling Musical Parameters in Neurorehabilitation with a Haptic Finger Tracker\nPatients in neurorehabilitation often face not only severe motor impairments\, but also associated psychological problems. Music therapy can be a valuable supplement to purely verbal psychotherapy\, but its use is limited because patients often cannot play conventional musical instruments due to motor skill limitations. This can hinder psychological recovery\, in which musical expression is essential. \nTo address this\, the Haptic Finger Tracker was developed\, emerging from a project at Institute XXX\, a collaborative initiative where researchers and artists work on interdisciplinary projects. This paper describes a prototype that transforms minimal finger movements into sound\, accompanied by corresponding haptic sensations. Technically\, the device uses flex sensors and an inertial measurement unit (IMU) to capture a range of small-scale finger movements. Using the Open Sound Control (OSC) protocol\, these captured gestures are then translated to control musical elements such as pitch\, volume and arpeggios. Simultaneously\, a vibrotactile actuator provides haptic feedback aimed at enhancing the user’s sense of engagement and embodiment. The resulting prototype is a portable\, user-friendly device that empowers patients by providing a creative outlet and fostering a sense of self-efficacy. 
This work establishes a technical foundation for future neurorehabilitative tools that utilize multisensory feedback to improve patient outcomes. \n  \nLuca Morino\, Nicola Conci and Fabio Cifariello Ciardi: B3-H4RSH: A Noise-based Multiplayer Game for Mobile Music-Making\nOver the past two decades\, artists and composers have increasingly explored mobile phones — ubiquitous and accessible devices — as instruments for music performance and\, in particular\, as interfaces for audience participation and collaborative music-making. This paper presents B3-H4RSH\, an interactive mobile music system. Implemented as a web application for smartphone browsers on a co-located network\, the system interconnects participants’ devices\, employing competitive multiplayer mechanics to structure interdependencies among players and shape the music-making act within a noise-music paradigm. By influencing and responding to one another’s actions\, participants collectively diffuse sound throughout the space from their smartphones while competing to achieve the “harshest” sonic outcome – and win. \n  \nRiccardo Mazza: Translating Sonic Memories into Latent Performable Spaces for Live Coding\nThis paper presents a live coding performance system that reconfigures autobiographical sound materials through real-time interaction with a machine learning process. Rather than treating sonic memories as fixed archival objects\, the system approaches memory as a dynamic and unstable process\, continuously reshaped during performance. Recorded sound fragments are analyzed using FluCoMa descriptors and organized within a navigable two-dimensional space. A lightweight autoencoder is employed not as a high-fidelity generative model\, but as a constrained transformation device that introduces controlled deviations\, thereby altering the relationship to the source recordings. The resulting sounds are not reproductions of the originals\, but transformed traces that require reinterpretation in real time. 
Within this framework\, performance becomes a negotiation between intention\, algorithmic transformation\, and emergent sonic behavior. The performer does not retrieve memories\, but actively reshapes them\, generating new memory traces through interaction. The system adopts a human-in-the-loop approach\, in which the model acts as a mediating structure rather than an autonomous agent. The contribution of this work lies not in technical novelty\, but in proposing a practice-based perspective on how machine learning can function as a performative medium for memory transformation in live coding contexts. \n  \nMohammad Sadeghi: Architectures of Alteration: Designing and Integrating Hybrid Kinetic Robotic Systems and Light Choreography in Eternal Dawn\nContemporary performance increasingly relies on kinetic\, robotic\, and responsive environments that demand tightly integrated engineering systems capable of acting as expressive agents. Developing such hybrid systems contributes to new modes of staging\, embodiment\, and dramaturgy\, offering artists tools for creating dynamic environments that extend beyond the limitations of human gesture alone. This paper presents the design and integration of two hybrid kinetic systems developed for the performance Eternal Dawn: a ceiling-mounted robotic arm and a motor-matrix architecture controlling suspended rectangular light frames. The robotic arm operates as a supervisory and interactive entity\, shifting from analytical scanning to aggressive pendulum-like motion to intimate duet-like encounters. The motor-matrix system dynamically reconfigures the spatial geometry of the laboratory\, synchronizing kinetic light choreography with sound and movement to construct adaptive architectural states. Synchronization with musical structures is achieved using Open Sound Control (OSC) messages\, ensuring accurate temporal coordination. 
The motors are controlled via a programmable logic controller (PLC) and a dedicated human–machine interface (HMI) managing motion parameters\, sequencing\, and safety functions. The proposed systems proved effective as expressive kinetic agents\, demonstrating a versatile platform for integrating robotic motion and dynamic light architectures into similarly experimental performance settings. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/innovation-showcase/
LOCATION:Hamburg University of Technology\, Building H (H 0.02)\, Am Schwarzenberg-Campus 5\, Hamburg\, 21073\, Germany
CATEGORIES:16-05,Showcase
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
END:VCALENDAR