Innovation Showcase
Takayuki Rai and Haruka Hirayama: Motion Capture: Mocopi and Sound Interaction in Max
This paper presents a Local Space–based motion analysis framework using Sony’s Mocopi motion capture system, implemented in the Max environment for real-time audio interaction. The framework is designed for live performance and interactive demonstration. In previous work, joint position data transmitted from Mocopi were converted into World Space coordinates and applied to audio and visual interaction. While this approach enabled various interaction designs, it revealed a fundamental limitation: identical body movements produced different coordinate values depending on the performer’s facing direction. This weakened the intuitive correspondence between bodily sensation and control data, particularly in performance and improvisational contexts. To address this issue, the present study introduces a Local Space representation defined relative to the performer’s body orientation. Joint positions are transformed from World Space into a forward-facing, body-relative coordinate frame that moves and rotates with the performer. This enables consistent detection of body movements regardless of orientation on stage, preserving the perceptual relationship between physical action and control data. Based on this framework, several Max external objects were developed to estimate body orientation, convert joint positions into Local Space, and compute motion features such as movement distance, direction, and angular change. Application examples demonstrate that movement-based audio control becomes more stable and intuitive in Local Space. The system was evaluated through poster and demo presentations with a Mocopi-equipped performer, highlighting its suitability for interactive performance and artistic applications.
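To make the transformation concrete, the following is a minimal Python sketch of one way a world-to-local conversion of this kind can work, assuming 3D joint vectors and a yaw estimated from the hip axis; it is illustrative only, not the authors' Max externals.

```python
# Illustrative world-to-local transform (names and conventions are assumptions).
import numpy as np

def body_yaw(left_hip, right_hip):
    """Estimate the performer's heading from the pelvis axis."""
    axis = right_hip - left_hip                      # vector across the pelvis
    fwd = np.cross(axis, np.array([0.0, 1.0, 0.0]))  # forward = across x up
    return np.arctan2(fwd[0], fwd[2])                # yaw in the XZ ground plane

def to_local_space(joint, root, yaw):
    """Translate relative to the root joint, then rotate by -yaw so the
    performer always 'faces forward' in the local frame."""
    p = joint - root
    c, s = np.cos(-yaw), np.sin(-yaw)
    rot = np.array([[c, 0.0, s],
                    [0.0, 1.0, 0.0],
                    [-s, 0.0, c]])                   # rotation about vertical Y
    return rot @ p

# The same 'hand forward' gesture yields the same local coordinates
# no matter which way the performer faces on stage.
root = np.array([0.0, 0.9, 0.0])
yaw = body_yaw(np.array([-0.1, 0.9, 0.0]), np.array([0.1, 0.9, 0.0]))
print(to_local_space(np.array([0.0, 1.2, 0.5]), root, yaw))
```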
Walker Smith: The Magic Alchemical Drum Set: a transducer-driven light-up drum set using timbres and scales derived from sonified chemical element spectra
The Magic Alchemical Drum Set is an interactive audiovisual instrument that integrates three lines of preliminary research: (1) the construction of element-specific timbres using sonified spectral data and perceptually motivated transformations, (2) the design of unequal-tempered microtonal scales derived from elemental spectra and implemented on a Lumatone keyboard, and (3) a transducer-driven drum set that physically couples these sounds to acoustic percussion instruments and synchronized lighting. Together, these components form a system that transforms static spectroscopic data into a playable, performative instrument emphasizing tactile interaction and audiovisual correspondence. The paper provides a brief overview of related work, outlines the design considerations underlying the scales and timbres, and documents the construction and use of the drum set in both compositional and interactive installation contexts, including feedback from participants. A detailed demo video is provided along with all necessary code. Conclusions and future work in the areas of scale and timbre design, as well as interactive audiovisual instrument design, are presented.
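As background for the scale and timbre designs, here is a minimal sketch of one common spectrum-sonification step: transposing emission-line frequencies into the audible range by a shared number of octaves so their ratios survive. The wavelengths are hydrogen's visible Balmer lines; the paper's exact mapping may differ.

```python
# Octave-transposing spectral lines into audible partials (illustrative).
import math

C = 2.998e8                                   # speed of light, m/s

def sonify(wavelengths_nm, ceiling=2000.0):
    """Map spectral lines to audible frequencies, preserving their ratios."""
    freqs = [C / (wl * 1e-9) for wl in wavelengths_nm]
    octaves = math.ceil(math.log2(max(freqs) / ceiling))  # one shift for all
    return [f / 2**octaves for f in freqs]

balmer = (656.3, 486.1, 434.0, 410.2)         # hydrogen's visible lines, nm
for wl, f in zip(balmer, sonify(balmer)):
    print(f"{wl} nm -> {f:.1f} Hz")
```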
Matthias Jung: Incisions: Tangible Latent Space Exploration with Three Sound Balls
This system offers an interactive, tactile approach to exploring machine learning models collaboratively in real time. The system design is a work in progress and at this stage connects three handheld, spherical devices (sound balls) to three machine learning models. The sound balls are equipped with pressure sensors and gyroscopes whose readings are sent from an ESP32 via OSC over WiFi to a Max/MSP patch that hosts the model playback. The patch runs a mix of open-source and self-trained models, whose outputs are combined into a master playback audible over headphones to the three sound ball players, who collaboratively explore the models through a latent-dimension setup.
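For illustration, a Python stand-in for the receiving side might look like the sketch below; the real host is a Max/MSP patch, and the OSC addresses and port are assumptions.

```python
# Routing sound-ball sensor readings to latent coordinates (illustrative).
# Requires the python-osc package.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

latent = {}  # per-ball latent coordinates driven by the sensors

def on_pressure(address, value):
    ball = address.split("/")[2]
    latent.setdefault(ball, [0.0, 0.0])[0] = value   # pressure -> dim 1

def on_gyro(address, x, y, z):
    ball = address.split("/")[2]
    latent.setdefault(ball, [0.0, 0.0])[1] = x       # e.g. roll -> dim 2

dispatcher = Dispatcher()
for i in (1, 2, 3):                                  # three sound balls
    dispatcher.map(f"/ball/{i}/pressure", on_pressure)
    dispatcher.map(f"/ball/{i}/gyro", on_gyro)

# Each ESP32 sends to this host/port over WiFi (port is an assumption).
BlockingOSCUDPServer(("0.0.0.0", 8000), dispatcher).serve_forever()
```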
Kieran McAuliffe, Ornella Tortorici and Ali Elnwegy: Robotics for Digital Artists: OSC-ROS Integration
The Robot Operating System (ROS) has become a de facto standard for robot software development, offering powerful tools for real-time communication, control, and simulation. However, its complexity presents significant barriers for multimedia artists and creative practitioners. In contrast, the more accessible Open Sound Control (OSC) protocol is widely adopted in the creative coding community and supported by numerous artistic software environments. This demo showcases a prototype OSC–ROS bridge designed to lower the entry barrier for artists working with robotic systems. It receives OSC messages from the user and converts them into joint trajectories, which it sends over ROS. Participants in the demo can interact with two setups: controlling a custom-built painting robot and sonifying the motion of an industrial robot arm. These applications highlight how robotic systems can function both as expressive actuators and as performative interfaces.
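A condensed sketch of such a bridge, here using rospy (ROS 1), python-osc, and standard trajectory messages, could look as follows; the topic name, joint names, and OSC address are assumptions rather than the authors' exact implementation.

```python
# OSC-to-ROS bridge sketch: OSC floats in, JointTrajectory messages out.
import rospy
from trajectory_msgs.msg import JointTrajectory, JointTrajectoryPoint
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

rospy.init_node("osc_ros_bridge")
pub = rospy.Publisher("/arm_controller/command", JointTrajectory, queue_size=1)

def on_joints(address, *angles):
    """Turn an OSC message like /robot/joints f f f f f f into a trajectory."""
    traj = JointTrajectory()
    traj.joint_names = [f"joint_{i+1}" for i in range(len(angles))]
    point = JointTrajectoryPoint()
    point.positions = list(angles)
    point.time_from_start = rospy.Duration(0.5)   # reach the pose in 0.5 s
    traj.points = [point]
    pub.publish(traj)

dispatcher = Dispatcher()
dispatcher.map("/robot/joints", on_joints)
BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher).serve_forever()
```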
Charles Hutchins and Shelly Knotts: SCMoo: A Live Codeable VR Environment
After the loss of Mozilla Hubs and the end of most Metaverse hype, we present a retro, text- and sound-based VR platform for live coding interactive music in SuperCollider, which is accessible, enjoyable, and lower carbon than polygon-based systems. In the 1990s, text-based MUDs (Multi-User Dungeons) and MOOs (MUDs, Object Oriented) were inhabited by hundreds of users. The communities in these spaces could design any avatars they wanted, which could perform any actions they could describe (limited only by imagination and language), as the medium itself was text. MOOs provided all users with the possibility to add objects, rooms, actions, behaviours, and other features to the environment through object-oriented programming. The collaboratively built VR environment was live coded by the users, who built features through iterative design within the shared platform. This demo presents SCMoo, a reimplementation of a LambdaMOO-like system written in the musical programming language SuperCollider. SCMoo is a multi-user platform for sound making and role play.
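As a toy illustration of the MOO idea (SCMoo itself is written in SuperCollider), the sketch below parses a text verb and triggers a sound by sending scsynth's standard /s_new command over OSC; the loaded "default" SynthDef and the verb vocabulary are assumptions.

```python
# Text verbs to sound: a MOO-flavoured toy, not SCMoo's actual code.
from pythonosc.udp_client import SimpleUDPClient

sc = SimpleUDPClient("127.0.0.1", 57110)        # scsynth's default OSC port

def act(command):
    verb, *args = command.split()
    if verb == "sing":                           # e.g. "sing 440"
        freq = float(args[0]) if args else 440.0
        # /s_new: name, nodeID (-1 = auto), addAction, target, controls...
        sc.send_message("/s_new", ["default", -1, 0, 0, "freq", freq])
    else:
        print(f"You {verb}, and the room takes note.")

act("sing 220")
```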
Juliana Lüer, Christoph Salje and Prof. Dr.-Ing. Thorsten A. Kern: Controlling Musical Parameters in Neurorehabilitation with a Haptic Finger Tracker
Patients in neurorehabilitation often face not only severe motor impairments but also associated psychological problems. Music therapy can be a valuable supplement to purely verbal psychotherapy, but its use is limited because patients often cannot play conventional musical instruments due to motor skill limitations. This can hinder psychological recovery, in which musical expression is essential.
To address this, the Haptic Finger Tracker was developed, emerging from a project at Institute XXX, a collaborative initiative where researchers and artists work on interdisciplinary projects. This paper describes a prototype that transforms minimal finger movements into sound, accompanied by corresponding haptic sensations. Technically, the device uses flex sensors and an inertial measurement unit (IMU) to capture a range of small-scale finger movements. Using the Open Sound Control (OSC) protocol, these captured gestures are then translated to control musical elements such as pitch, volume, and arpeggios. Simultaneously, a vibrotactile actuator provides haptic feedback aimed at enhancing the user’s sense of engagement and embodiment. The resulting prototype is a portable, user-friendly device that empowers patients by providing a creative outlet and fostering a sense of self-efficacy. This work establishes a technical foundation for future neurorehabilitative tools that utilize multisensory feedback to improve patient outcomes.
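A minimal sketch of this kind of gesture-to-sound mapping is shown below; the OSC port, addresses, scale, and sensor ranges are illustrative assumptions, not the prototype's actual values.

```python
# Flex/IMU readings mapped to pitch and volume over OSC (illustrative).
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7400)     # synth engine's OSC port
SCALE = [60, 62, 64, 67, 69]                    # C major pentatonic (MIDI)

def on_sensor_frame(flex, imu_tilt):
    """flex: normalized 0..1 finger bend; imu_tilt: -1..1 from the IMU."""
    note = SCALE[min(int(flex * len(SCALE)), len(SCALE) - 1)]
    volume = (imu_tilt + 1.0) / 2.0             # tilt controls loudness
    client.send_message("/finger/pitch", note)
    client.send_message("/finger/volume", volume)

on_sensor_frame(flex=0.42, imu_tilt=0.1)        # -> pitch 64, volume 0.55
```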
Luca Morino, Nicola Conci and Fabio Cifariello Ciardi: B3-H4RSH: A Noise-based Multiplayer Game for Mobile Music-Making
Over the past two decades, artists and composers have increasingly explored mobile phones, ubiquitous and accessible devices, as instruments for music performance and, in particular, as interfaces for audience participation and collaborative music-making. This paper presents B3-H4RSH, an interactive mobile music system. Implemented as a web application for smartphone browsers on a co-located network, the system interconnects participants’ devices, employing competitive multiplayer mechanics to structure interdependencies among players and shape the music-making act within a noise-music paradigm. By influencing and responding to one another’s actions, participants collectively diffuse sound throughout the space from their smartphones while competing to achieve the “harshest” sonic outcome and win.
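A skeletal server-side sketch of such a co-located multiplayer loop is given below, using Python and WebSockets for illustration; the message shape and "harshness" scoring are assumptions, and the actual system is a browser-based web application.

```python
# Broadcasting competitive state between co-located players (illustrative).
# Requires the websockets package.
import asyncio, json
import websockets

players = {}   # websocket -> accumulated "harshness" score

async def handle(ws):
    players[ws] = 0.0
    try:
        async for raw in ws:
            msg = json.loads(raw)                     # e.g. {"noise": 0..1}
            players[ws] += msg.get("noise", 0.0)      # accumulate harshness
            state = {"leader": max(players.values())}
            await asyncio.gather(*(p.send(json.dumps(state)) for p in players))
    finally:
        del players[ws]

async def main():
    async with websockets.serve(handle, "0.0.0.0", 8765):
        await asyncio.Future()                        # run forever

asyncio.run(main())
```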
Riccardo Mazza: Translating Sonic Memories into Latent Performable Spaces for Live Coding
This paper presents a live coding performance system that reconfigures autobiographical sound materials through real-time interaction with a machine learning process. Rather than treating sonic memories as fixed archival objects, the system approaches memory as a dynamic and unstable process, continuously reshaped during performance. Recorded sound fragments are analyzed using FluCoMa descriptors and organized within a navigable two-dimensional space. A lightweight autoencoder is employed not as a high-fidelity generative model, but as a constrained transformation device that introduces controlled deviations, thereby altering the relationship to the source recordings. The resulting sounds are not reproductions of the originals, but transformed traces that require reinterpretation in real time. Within this framework, performance becomes a negotiation between intention, algorithmic transformation, and emergent sonic behavior. The performer does not retrieve memories, but actively reshapes them, generating new memory traces through interaction. The system adopts a human-in-the-loop approach, in which the model acts as a mediating structure rather than an autonomous agent. The contribution of this work lies not in technical novelty, but in proposing a practice-based perspective on how machine learning can function as a performative medium for memory transformation in live coding contexts.
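For concreteness, a lightweight autoencoder used in this constrained, deliberately lossy role might be sketched as follows; the 13-dimensional descriptor input and layer sizes are assumptions, not the paper's architecture.

```python
# A tiny autoencoder as a constrained transformation device (illustrative).
import torch
import torch.nn as nn

class TinyAE(nn.Module):
    def __init__(self, dim=13, latent=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 8), nn.Tanh(),
                                     nn.Linear(8, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 8), nn.Tanh(),
                                     nn.Linear(8, dim))

    def forward(self, x):
        z = self.encoder(x)            # position in the 2-D navigable space
        return self.decoder(z), z      # reconstruction = transformed trace

model = TinyAE()
frame = torch.randn(1, 13)             # one descriptor frame (e.g. FluCoMa)
trace, z = model(frame)                # the 2-D bottleneck forces deviation
```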
Mohammad Sadeghi: Architectures of Alteration: Designing and Integrating Hybrid Kinetic Robotic Systems and Light Choreography in Eternal Dawn
Contemporary performance increasingly relies on kinetic, robotic, and responsive environments that demand tightly integrated engineering systems capable of acting as expressive agents. Developing such hybrid systems contributes to new modes of staging, embodiment, and dramaturgy, offering artists tools for creating dynamic environments that extend beyond the limitations of human gesture alone. This paper presents the design and integration of two hybrid kinetic systems developed for the performance Eternal Dawn: a ceiling-mounted robotic arm and a motor-matrix architecture controlling suspended rectangular light frames. The robotic arm operates as a supervisory and interactive entity, shifting from analytical scanning to aggressive pendulum-like motion to intimate duet-like encounters. The motor-matrix system dynamically reconfigures the spatial geometry of the laboratory, synchronizing kinetic light choreography with sound and movement to construct adaptive architectural states. Synchronization with musical structures is achieved using Open Sound Control (OSC) messages, ensuring accurate temporal coordination. The motors are controlled via a programmable logic controller (PLC) and a dedicated human–machine interface (HMI) managing motion parameters, sequencing, and safety functions. The proposed systems proved effective as expressive kinetic agents, demonstrating a versatile platform for integrating robotic motion and dynamic light architectures into similarly experimental performance settings.
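A schematic sketch of OSC-based cue scheduling of this kind appears below; the endpoint, addresses, and cue times are invented for illustration, and the real system coordinates a PLC/HMI stack.

```python
# Scheduling kinetic/light cues against musical time via OSC (illustrative).
import time
from pythonosc.udp_client import SimpleUDPClient

motors = SimpleUDPClient("192.168.0.20", 9001)   # kinetic-system endpoint

# (time in seconds from piece start, OSC address, arguments)
CUES = [
    (0.0,  "/arm/mode",    ["scan"]),
    (42.0, "/arm/mode",    ["pendulum"]),
    (60.0, "/frames/tilt", [0.35]),              # reconfigure light frames
]

start = time.monotonic()
for at, address, args in CUES:
    time.sleep(max(0.0, at - (time.monotonic() - start)))
    motors.send_message(address, args)
```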
