BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//ICMC HAMBURG 2026 - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:ICMC HAMBURG 2026
X-ORIGINAL-URL:http://icmc2026.ligeti-zentrum.de
X-WR-CALDESC:Events for ICMC HAMBURG 2026
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Amsterdam
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20250330T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20251026T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20260329T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20261025T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20270328T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20271031T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T170000
DTEND;TZID=Europe/Amsterdam:20260511T190000
DTSTAMP:20260429T104007Z
CREATED:20260421T091309Z
LAST-MODIFIED:20260423T185456Z
UID:10000148-1778518800-1778526000@icmc2026.ligeti-zentrum.de
SUMMARY:Workshop | Serge Lemouton\, Jacques Warnier\, Malena Fouillou\, and Laurent Pottier: Practical Documentation and Collaborative Preservation using Antony
DESCRIPTION:The goal of this hands-on workshop is to show\, for the first time in an international context\, the Antony system\, now in its final state and fully functional.\nThe Antony platform provides a structured system for archiving\, documenting\, and accessing the materials of mixed music works to ensure long-term preservation and reuse. The Antony project addresses the difficulty of preserving artistic works that rely on evolving and often incompatible technologies. It highlights how the survival of these works depends on a small group of experts capable of updating and maintaining their digital components.\nAt the end of this workshop\, the participants will be able to use the database to document\, distribute\, and preserve their own creations. \n  \nRequirements\nThis workshop primarily addresses composers\, computer music designers\, and performers\, but it can also be of interest to media artists\, musicologists\, documentalists\, and music publishers.\nThe participants should come with the media related to an existing artistic project of their own that they wish to curate and preserve. \n  \nWorkshop registration\nPlease register via Pretix in order to participate in the workshop. There are no additional costs.  \n  \n\nAbout the workshop facilitators\nSerge Lemouton \nComputer Music Designer – Institut de Recherche et Coordination Acoustique/Musique – Centre Georges Pompidou (IRCAM-CGP) \nSince 1992\, Serge Lemouton has worked as a computer music designer at IRCAM\, collaborating with researchers to develop computer tools and taking part in the production and public performance of numerous composers’ musical projects. He is currently working on score-following systems\, analysis of instrumental gesture\, and constraint programming for computer-assisted composition. His current research focuses on the transmission and preservation of the computer music repertoire. 
\n  \nJacques Warnier \nResearch Engineer\, Ministry of Culture – Computer Music Realizer (RIM)\, Conservatoire National Supérieur de Musique et de Danse de Paris (CNSMDP) \nSince 2007\, Jacques Warnier has supported the composition and new technologies class at CNSMDP\, producing concerts and performing live electronics for mixed repertoire works. After earning the Saint-Etienne Master’s degree in Computer Music Design in 2015\, he joined the Ministry of Culture as a research engineer in 2016.\nHis role combines musicianship and engineering to create the artistic and technical conditions required for performing 20th- and 21st-century music involving audio-digital technologies. His research focuses on making this repertoire accessible to students: curating works by instrument\, acquiring scores and electronic parts\, cataloging them in the Hector Berlioz media library\, and preserving or reconstructing electronic components.\nHe has been a member of the AFIM working group on “Collaborative Archiving and Creative Preservation” (since 2018)\, now “Antony\,” and has participated in the Humanum consortium for digital musicology (Musica2) since 2022. \n  \nMalena Fouillou \nAn acoustic engineer and computer music producer\, Malena has had a wide-ranging career. After completing her higher education studies in acoustics\, she joined Ircam in 2022 and graduated with a master’s degree in ATIAM (Acoustics\, Signal Processing\, Computer Science for Music). It was only natural that she joined the Next ensemble of the Paris Conservatory\, in partnership with the Ensemble Intercontemporain. This training allowed her to study with distinguished RIM professors such as Arshia Cont\, Augustin Müller\, and Andrew Gerzso\, and to perform works by Marco Stroppa\, Pierre Boulez\, Martin Matalon\, and others. Currently pursuing her PhD at Paris 8\, she researches qualitative and quantitative descriptions of the spatiality of sound. 
She is part of a working group composed of Serge Lemouton (Ircam)\, Jacques Warnier (CNSMDP)\, and Laurent Pottier (ECLLA-UJM) on the Antony project\, a collaborative platform for the preservation and sharing of musical heritage using digital technologies. \n  \nLaurent Pottier \nProfessor of Musicology & Computer Music at Jean Monnet University (Saint-Etienne\, France)\, ECLLA laboratory \nLaurent Pottier is a professor of Musicology & Computer Music at UJM (Saint-Etienne University). He heads the RIM (Réalisateur en Informatique Musicale / Computer Music Producer) professional Master’s program and the DIGICREA (Digital Creativity – Arts & Sciences) international EMJM Master’s program. His research at the ECLLA laboratory\, Saint-Etienne University\, involves music using electronic and digital technologies. He taught at Ircam (1992-1996)\, then headed the research department at GMEM in Marseille (1997-2005). As a RIM\, he has worked with many composers\, in particular J.-B. Barrière\, J. Chowning\, T. De Mey\, A. Liberovicci\, C. Maïda\, A. Markeas\, F. Martin\, T. Murail\, J.-C. Risset\, F. Romitelli\, and K.T. Toeplitz. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/workshop-serge-lemouton-et-al-practical-documentation-collaborative-preservation-using-antony/
LOCATION:Hamburg University of Technology\, Building H (H 0.02)\, Am Schwarzenberg-Campus 5\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Workshop
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260512T103000
DTEND;TZID=Europe/Amsterdam:20260512T130000
DTSTAMP:20260429T104007Z
CREATED:20260421T115650Z
LAST-MODIFIED:20260423T174621Z
UID:10000163-1778581800-1778590800@icmc2026.ligeti-zentrum.de
SUMMARY:Workshop | Weiyi Dai: From Objects to Soundscapes: Participatory Spatial Composition through Data-Driven Multimodal Systems
DESCRIPTION:This workshop offers hands-on experience with Full House\, a multimodal soundscape system designed specifically for participatory spatial composition. Through a real-time\, data-driven architecture\, the system transforms the manipulation of physical objects into continuous spatial sound and visual processes.\nRather than teaching a fixed compositional style\, the workshop focuses on methodological thinking: sound is understood as an ongoing systemic behavior shaped by data\, space\, and interaction\, rather than a sequence of triggered musical events. It emphasizes the participatory creative cycle of experience–data–adjustment. Participants will learn how to integrate object-based interaction\, mapping strategies\, and spatial audio technologies to construct intelligible\, inclusive\, and non-repetitive soundscapes. \n  \nRequirements\nBasic computer literacy. To fully experience and understand the structural logic of the Full House system\, participants must bring their own laptop and headphones. Installation of TouchDesigner (TD) in advance is highly recommended; on-site group installation will also be available. \n  \nWorkshop registration\nPlease register via Pretix in order to participate in the workshop. There are no additional costs.  \n  \nAbout the workshop facilitator\nWeiyi Dai is a composer\, sound artist\, and researcher working at the intersection of computer music\, spatial audio\, and multimodal interaction. He is an Associate Professor at the Shanghai Conservatory of Music\, School of Digital Media Art\, where his work focuses on soundscape systems\, participatory sound environments\, and the translation of artistic practice into technological platforms. His recent projects explore object-based interaction\, spatial sound rendering\, and data-driven sound libraries for immersive environments\, with applications in performance\, installation\, and education. 
His research has been presented in academic and artistic contexts related to computer music\, sound art\, and media technology. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/workshop-dai-weiyi-objects-soundscapes-participatory-spatial-composition/
LOCATION:Hamburg University of Technology\, Building H (H 0.02)\, Am Schwarzenberg-Campus 5\, Hamburg\, 21073\, Germany
CATEGORIES:12-05,Workshop
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260512T160000
DTEND;TZID=Europe/Amsterdam:20260512T180000
DTSTAMP:20260429T104007Z
CREATED:20260421T123923Z
LAST-MODIFIED:20260427T155441Z
UID:10000170-1778601600-1778608800@icmc2026.ligeti-zentrum.de
SUMMARY:Workshop | Thomas Meckel\, Dennis Scheiba and Moritz Wesp: SONA – Audio-VR
DESCRIPTION:Virtual Reality technologies remain largely inaccessible to blind and visually impaired people due to their strong reliance on visual interfaces. At the same time\, blind people possess highly developed skills in spatial orientation and navigation through sound. SONA takes this contradiction as its starting point and investigates how VR can be reconceptualized as an inclusive medium by utilizing spatial audio as a primary navigation and interaction tool.\nSince 2023\, SONA has developed Audio-VR works in close collaboration with blind and visually impaired communities in Cologne and North Rhine-Westphalia. This work has produced two formats: SONA – Seeing Sound\, a large-scale performative VR installation in complete darkness\, where visitors navigate a motion-captured space using spatial audio\, sensory shoes\, and voice commands\, guided by a blind performer; and SONA – Diving in the Dark\, a mobile Audio-VR game in which players explore a virtual underwater world — guided entirely by 3D sound — using a smartphone and headphones with head-tracking as a display-free VR headset.\nFrom a technical perspective\, the workshop addresses spatial audio not as an immersive effect but as a functional interface for navigation and interaction. We will discuss the Audio-VR navigation tools developed within the project — including virtual echolocation\, a virtual audio cane\, and virtual wall membranes — and the design decisions behind them. The technical stack (Unity\, Wwise/Audiokinetic\, head-tracking via smartphone sensors\, voice recognition and microphone input) will be presented alongside the iterative research process and our inclusive worldbuilding methods.\nThe workshop combines a presentation and discussion of the artistic and technical background with a hands-on opportunity to test SONA – Diving in the Dark. \nMore about SONA here.\nSONA Film here. 
\n  \nRequirements\nNone \n  \nWorkshop registration\nPlease register via Pretix in order to participate in the workshop. There are no additional costs.  \n  \nAbout the workshop facilitators\nThomas Meckel. Media artist\, game designer\, and musician. His interactive cinema performance Solaris has been presented internationally. Together with Tobias Thomas\, he curates the concert series Round at the Cologne Philharmonic. He studied Media Arts at the Academy of Media Arts Cologne\, Cultural Studies in Lüneburg\, and Applied Theatre Studies in Giessen. Since 2023\, he has been developing concepts for SONA and moderating its artistic realization. \nDennis Scheiba. Music informatician and software developer from Cologne. He holds a Bachelor’s degree in Mathematics and a Master’s degree in Sound and Reality (Robert Schumann University of Music\, Düsseldorf). A core developer of the music software SuperCollider\, he has been a research and artistic assistant at the Robert Schumann University of Music Düsseldorf since 2023\, working in the field of artificial intelligence and musical practice. At SONA\, he contributes expertise in AI\, sound\, and interface development—particularly for the AI seagull character and custom text-to-speech systems. \nMoritz Wesp. Game designer\, trombonist\, and media artist. As a trombonist\, he performs internationally with ensembles such as Matthias Muche’s Bonecrusher and Mariá Portugal’s Erosão. He develops his own electronic instruments (including a virtual trombone) and programs interactive music games. Moritz studied at the Lucerne School of Music and at the Cologne University of Music and Dance. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/workshop-thomas-meckel-et-al-sona-audio-vr/
LOCATION:Hamburg University of Technology\, Building H (H 0.02)\, Am Schwarzenberg-Campus 5\, Hamburg\, 21073\, Germany
CATEGORIES:12-05,Workshop
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260514T103000
DTEND;TZID=Europe/Amsterdam:20260514T123000
DTSTAMP:20260429T104007Z
CREATED:20260421T122130Z
LAST-MODIFIED:20260423T175115Z
UID:10000167-1778754600-1778761800@icmc2026.ligeti-zentrum.de
SUMMARY:Workshop | Rob Canning: Executable Scores: Embedded Cue Semantics and Animated SVG Notation with Oscilla
DESCRIPTION:This workshop introduces Oscilla\, a browser-based framework for animated\, cue-driven graphic scores in which performance semantics are embedded directly into SVG notation. Rather than separating score\, control system\, and playback environment\, Oscilla treats the score as a single executable surface—authored visually in Inkscape and enacted in real time via a lightweight browser runtime.\nParticipants will explore how temporal organization\, gesture fields\, navigation\, OSC-driven control\, and media cues can be authored directly into drawings using a compact microsyntax.\nThe session combines conceptual framing\, live demonstration\, and hands-on authoring of small examples.\nTopics include: temporal articulation (pauses\, speed shaping\, countdowns)\, animated gestures (rotation\, scaling\, path-following)\, navigational structures (pages\, sections\, probabilistic traversal)\, OSC integration for hybrid acoustic-electronic performance\, and tight network synchronisation for coordinating scores across multiple devices.\nBecause Oscilla blurs the line between interactive score and controller\, it can also be used to build custom performance interfaces—opening possibilities for both notation-driven and control-surface-driven workflows.\n\n  \nRequirements\n\nNo specialist technical background required.\nParticipants should bring a laptop with:\nInkscape (free\, available at https://inkscape.org)\nSafari\, Chrome\, or Firefox for viewing scores\nHeadphones (optional but useful for audio cue examples)\n\nSource code and binaries are available here.\n\n\n\n  \nWorkshop registration\nPlease register via Pretix in order to participate in the workshop. There are no additional costs.  \n  \n\nAbout the workshop facilitator\nRob Canning (Dublin 1974) is a composer\, improviser\, and creative technologist whose work explores animated notation\, improvisation\, and the dynamics of networked musical systems. 
He holds a PhD in composition from Goldsmiths\, University of London\, where his research examined distributed authorship in computer-aided music. A long-time advocate of Free and Open Source Software\, he develops Oscilla\, an open-source platform for animated graphic notation and networked performance.\n\n 
URL:http://icmc2026.ligeti-zentrum.de/event/workshop-rob-canning-executable-scores-embedded-cue-semantics-and-animated-svg-notation-with-oscilla/
LOCATION:Hamburg University of Technology\, Building H (H 0.02)\, Am Schwarzenberg-Campus 5\, Hamburg\, 21073\, Germany
CATEGORIES:14-05,Workshop
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260514T160000
DTEND;TZID=Europe/Amsterdam:20260514T173000
DTSTAMP:20260429T104007Z
CREATED:20260421T112429Z
LAST-MODIFIED:20260423T174523Z
UID:10000160-1778774400-1778779800@icmc2026.ligeti-zentrum.de
SUMMARY:Workshop | Robert Cole Rizzi: The Art of Listening – To Hamburg. A Participatory Soundwalk Workshop
DESCRIPTION:How does the world sound when we truly listen? In this participatory workshop\, participants explore their sonic environment through attentive listening\, simple drawing\, poetry\, and playful composition — no musical experience required. \nThe workshop begins with a guided soundwalk\, where participants listen closely to everyday sounds while also making small line drawings based on visual details in the environment\, such as skylines\, patterns\, trees\, or bushes. Back indoors\, the listening experiences are transformed into short poetic texts\, and the drawings are translated into music using mechanical music boxes.\nThe workshop offers an accessible and hands-on introduction to sound-based creativity\, showing how artistic processes can emerge from listening\, observation\, and simple materials. Developed over more than ten years at the Danish National Academy of Music (SDMK)\, the format has been tested with a wide range of participants\, from schoolchildren to professional artists. \n  \nWorkshop registration\nPlease register via Pretix in order to participate in the workshop. There are no additional costs.  \n  \nAbout the workshop facilitator\nRobert Cole Rizzi is an assistant professor at the Danish National Academy of Music (SDMK) in Esbjerg\, where he teaches electronic music and sound art. His work focuses on making creative sound practice accessible to everyone\, developing pedagogical methods that welcome participants without requiring musical or technical prerequisites.\nAs a practicing artist working with sound\, visual art\, and experimental music technology\, Robert collaborates internationally with institutions such as the Prince Claus Conservatoire in the Netherlands and the Technische Hochschule and Musikhochschule in Lübeck. 
His artistic research explores how nature’s movements and traces can become sources for audiovisual composition\, using techniques from field recording to biodata sonification.\nRobert has spent the past decade developing and refining the “Impression – Imprint – Expression” methodology presented in this workshop. It has been successfully implemented with primary school students\, conservatory composition students\, public library visitors\, and museum participants across Denmark. His approach demonstrates that everyone can engage meaningfully with sound art when given accessible tools and encouragement to experiment. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/workshop-robert-cole-rizzi-art-listening-hamburg-soundwalk/
LOCATION:Hamburg University of Technology\, Building H (H 0.02)\, Am Schwarzenberg-Campus 5\, Hamburg\, 21073\, Germany
CATEGORIES:14-05,Workshop
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260516T150000
DTEND;TZID=Europe/Amsterdam:20260516T180000
DTSTAMP:20260429T104007Z
CREATED:20260423T142012Z
LAST-MODIFIED:20260423T190112Z
UID:10000227-1778943600-1778954400@icmc2026.ligeti-zentrum.de
SUMMARY:Innovation Showcase
DESCRIPTION:Takayuki Rai and Haruka Hirayama: Motion Capture: Mocopi and Sound Interaction in Max\nThis paper presents a Local Space–based motion analysis framework using Sony’s Mocopi motion capture system\, implemented in the Max environment for real-time audio interaction. The framework is designed for live performance and interactive demonstration. In previous work\, joint position data transmitted from Mocopi were converted into World Space coordinates and applied to audio and visual interaction. While this approach enabled various interaction designs\, it revealed a fundamental limitation: identical body movements produced different coordinate values depending on the performer’s facing direction. This weakened the intuitive correspondence between bodily sensation and control data\, particularly in performance and improvisational contexts. To address this issue\, the present study introduces a Local Space representation defined relative to the performer’s body orientation. Joint positions are transformed from World Space into a forward-facing\, body-relative coordinate frame that moves and rotates with the performer. This enables consistent detection of body movements regardless of orientation on stage\, preserving the perceptual relationship between physical action and control data. Based on this framework\, several Max external objects were developed to estimate body orientation\, convert joint positions into Local Space\, and compute motion features such as movement distance\, direction\, and angular change. Application examples demonstrate that movement-based audio control becomes more stable and intuitive in Local Space. The system was evaluated through Poster + Demo presentations with a Mocopi-equipped performer\, highlighting its suitability for interactive performance and artistic applications. 
\n  \nWalker Smith: The Magic Alchemical Drum Set: a transducer-driven light-up drum set using timbres and scales derived from sonified chemical element spectra\nThe Magic Alchemical Drum Set is an interactive audiovisual instrument that integrates three lines of preliminary research: (1) the construction of element-specific timbres using sonified spectral data and perceptually motivated transformations\, (2) the design of unequal-tempered microtonal scales derived from elemental spectra and implemented on a LumaTone keyboard\, and (3) a transducer-driven drum set that physically couples these sounds to acoustic percussion instruments and synchronized lighting. Together\, these components form a system that transforms static spectroscopic data into a playable\, performative instrument emphasizing tactile interaction and audiovisual correspondence. The paper provides a brief overview of related work\, outlines the design considerations underlying the scales and timbres\, and documents the construction and use of the drum set in both compositional and interactive installation contexts\, including feedback from participants. A detailed demo video is provided along with all necessary code. Conclusions and future work in the areas of scale and timbre design\, as well as interactive audiovisual instrument design\, are presented. \n  \nMatthias Jung: Incisions: Tangible Latent Space Exploration with Three Sound Balls\nThis system proposes an interactive\, tactile approach to exploring machine learning models collaboratively in real time. The system design is a work in progress and at this stage connects three handheld\, spherical devices (sound balls) to three machine learning models. The sound balls are equipped with pressure sensors and gyroscopes whose readings are sent from an ESP32 via OSC over WiFi to a Max/MSP patch that hosts the model playback. 
The patch uses different open-source and self-trained models that are mixed into a master playback\, audible via headphones to the three sound ball players\, who collaboratively explore the models via a latent-dimension setup. \n  \nKieran McAuliffe\, Ornella Tortorici and Ali Elnwegy: Robotics for Digital Artists: OSC-ROS Integration\nThe Robot Operating System (ROS) has become a de facto standard for robot software development\, offering powerful tools for real-time communication\, control\, and simulation. However\, its complexity presents significant barriers for multimedia artists and creative practitioners. In contrast\, the more accessible Open Sound Control (OSC) protocol is widely adopted in the creative coding community and supported by numerous artistic software environments. This demo showcases a prototype OSC–ROS bridge designed to lower the entry barrier for artists working with robotic systems. It receives OSC messages from the user and converts them into joint trajectories\, which it sends over ROS. Participants in the demo can interact with two setups: controlling a custom-built painting robot and sonifying the motion of an industrial robot arm. These applications highlight how robotic systems can function both as expressive actuators and as performative interfaces. \n  \nCharles Hutchins and Shelly Knotts: SCMoo: A Live Codeable VR Environment\nAfter the loss of Mozilla Hubs and the end of most Metaverse hype\, we present a retro\, text- and sound-based VR platform for live coding interactive music in SuperCollider\, which is accessible\, enjoyable\, and lower-carbon than polygon-based systems. In the 1990s\, text-based MUDs (Multi-User Dungeons) and MOOs (MUDs Object Oriented) were inhabited by hundreds of users. The communities in these spaces could design any avatars they wanted\, which could perform any actions they could describe (limited only by imagination and language) as the medium itself was text. 
MOOs provided all users with the possibility to add objects\, rooms\, actions\, behaviours and other features to the environment through object-oriented programming. The collaboratively built VR environment was live coded by the users\, who built features through iterative design within the shared platform. This demo presents SCMoo\, a reimplementation of a LambdaMOO-like system written in the musical programming language SuperCollider. SCMoo is a multi-user platform for sound making and role play. \n  \nJuliana Lüer\, Christoph Salje and Prof. Dr.-Ing. Thorsten A. Kern: Controlling Musical Parameters in Neurorehabilitation with a Haptic Finger Tracker\nPatients in neurorehabilitation often face not only severe motor impairments\, but also associated psychological problems. Music therapy can be a valuable supplement to purely verbal psychotherapy\, but its use is limited\, as patients often cannot play conventional musical instruments due to motor skill limitations. This can hinder psychological recovery\, for which musical expression is essential. \nTo address this\, the Haptic Finger Tracker was developed\, emerging from a project at Institute XXX\, a collaborative initiative where researchers and artists work on interdisciplinary projects. This paper describes a prototype that transforms minimal finger movements into sound\, accompanied by corresponding haptic sensations. Technically\, the device uses flex sensors and an inertial measurement unit (IMU) to capture a range of small-scale finger movements. Using the Open Sound Control (OSC) protocol\, these captured gestures are then translated to control musical elements such as pitch\, volume\, and arpeggios. Simultaneously\, a vibrotactile actuator provides haptic feedback aimed at enhancing the user’s sense of engagement and embodiment. The resulting prototype is a portable\, user-friendly device that empowers patients by providing a creative outlet and fostering a sense of self-efficacy. 
This work establishes a technical foundation for future neurorehabilitative tools that utilize multisensory feedback to improve patient outcomes. \n  \nLuca Morino\, Nicola Conci and Fabio Cifariello Ciardi: B3-H4RSH: A Noise-based Multiplayer Game for Mobile Music-Making\nOver the past two decades\, artists and composers have increasingly explored mobile phones — ubiquitous and accessible devices — as instruments for music performance and\, in particular\, as interfaces for audience participation and collaborative music-making. This paper presents B3-H4RSH\, an interactive mobile music system. Implemented as a web application for smartphone browsers on a co-located network\, the system interconnects participants’ devices\, employing competitive multiplayer mechanics to structure interdependencies among players and shape the music-making act within a noise-music paradigm. By influencing and responding to one another’s actions\, participants collectively diffuse sound throughout the space from their smartphones while competing to achieve the “harshest” sonic outcome – and win. \n  \nRiccardo Mazza: Translating Sonic Memories into Latent Performable Spaces for Live Coding\nThis paper presents a live coding performance system that reconfigures autobiographical sound materials through real-time interaction with a machine learning process. Rather than treating sonic memories as fixed archival objects\, the system approaches memory as a dynamic and unstable process\, continuously reshaped during performance. Recorded sound fragments are analyzed using FluCoMa descriptors and organized within a navigable two-dimensional space. A lightweight autoencoder is employed not as a high-fidelity generative model\, but as a constrained transformation device that introduces controlled deviations\, thereby altering the relationship to the source recordings. The resulting sounds are not reproductions of the originals\, but transformed traces that require reinterpretation in real time. 
Within this framework\, performance becomes a negotiation between intention\, algorithmic transformation\, and emergent sonic behavior. The performer does not retrieve memories\, but actively reshapes them\, generating new memory traces through interaction. The system adopts a human-in-the-loop approach\, in which the model acts as a mediating structure rather than an autonomous agent. The contribution of this work lies not in technical novelty\, but in proposing a practice-based perspective on how machine learning can function as a performative medium for memory transformation in live coding contexts. \n  \nMohammad Sadeghi: Architectures of Alteration: Designing and Integrating Hybrid Kinetic Robotic Systems and Light Choreography in Eternal Dawn\nContemporary performance increasingly relies on kinetic\, robotic\, and responsive environments that demand tightly integrated engineering systems capable of acting as expressive agents. Developing such hybrid systems contributes to new modes of staging\, embodiment\, and dramaturgy\, offering artists tools for creating dynamic environments that extend beyond the limitations of human gesture alone. This paper presents the design and integration of two hybrid kinetic systems developed for the performance Eternal Dawn: a ceiling-mounted robotic arm and a motor-matrix architecture controlling suspended rectangular light frames. The robotic arm operates as a supervisory and interactive entity\, shifting from analytical scanning to aggressive pendulum-like motion to intimate duet-like encounters. The motor-matrix system dynamically reconfigures the spatial geometry of the laboratory\, synchronizing kinetic light choreography with sound and movement to construct adaptive architectural states. Synchronization with musical structures is achieved using Open Sound Control (OSC) messages\, ensuring accurate temporal coordination. 
The motors are controlled via a programmable logic controller (PLC) and a dedicated human–machine interface (HMI) managing motion parameters\, sequencing\, and safety functions. The proposed systems proved effective as expressive kinetic agents\, demonstrating a versatile platform for integrating robotic motion and dynamic light architectures into similar experimental performance settings. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/innovation-showcase/
LOCATION:Hamburg University of Technology\, Building H (H 0.02)\, Am Schwarzenberg-Campus 5\, Hamburg\, 21073\, Germany
CATEGORIES:16-05,Showcase
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
END:VCALENDAR