BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//ICMC HAMBURG 2026 - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:http://icmc2026.ligeti-zentrum.de
X-WR-CALDESC:Events for ICMC HAMBURG 2026
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Amsterdam
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20250330T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20251026T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20260329T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20261025T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20270328T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20271031T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260515T110000
DTSTAMP:20260424T214404Z
CREATED:20260421T195205Z
LAST-MODIFIED:20260421T195205Z
UID:10000100-1778842800-1778842800@icmc2026.ligeti-zentrum.de
SUMMARY:Excursion: Departure to Lübeck
DESCRIPTION:
URL:http://icmc2026.ligeti-zentrum.de/event/excursion-departure-to-lubeck/
CATEGORIES:15-05,Excursion to Lübeck
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260515T130000
DTEND;TZID=Europe/Amsterdam:20260515T150000
DTSTAMP:20260424T214404Z
CREATED:20260421T172624Z
LAST-MODIFIED:20260423T090937Z
UID:10000178-1778850000-1778857200@icmc2026.ligeti-zentrum.de
SUMMARY:Piece & Paper Session
DESCRIPTION:Four pieces & papers will be presented: \n  \nLinear A\nChristopher Trapani \nRituals of Forgetting and Remembering\nJocelyn Ho et al. \nHe（龢）\nXiangbin Lin \n雨/Rain \nYuan Zhang and Xinran Zhang \n 
URL:http://icmc2026.ligeti-zentrum.de/event/piece-paper-session-lubeck/
LOCATION:Lübeck University of Music: Kammermusiksaal\, Große Petersgrube 21\, Lübeck\, 23552\, Germany
CATEGORIES:15-05,Excursion to Lübeck,Piece & Paper,Session
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260515T153000
DTEND;TZID=Europe/Amsterdam:20260515T163000
DTSTAMP:20260424T214404Z
CREATED:20260421T144302Z
LAST-MODIFIED:20260421T144302Z
UID:10000173-1778859000-1778862600@icmc2026.ligeti-zentrum.de
SUMMARY:Keynote | James Andy Moorer: History of Computer Music from Mathews to "Man in the Mangroves"
DESCRIPTION:The origins of computer music promised unlimited freedom for composers to make music using sounds that no acoustic instrument could make. This freedom comes with a price. Composing a computer-synthesized piece involves an extra step. You do not just choose the instruments in your ensemble\, but you must also build the orchestra. Over the last 70 years\, we have evolved a wide range of techniques for music synthesis. We have reduced the burden of building the orchestra but have not eliminated it.  \nThe creation of “The Man in the Mangroves Counts to Sleep” illustrates this process. About half of the work went to building the computer-based tools for the specialized form of voice synthesis needed for orchestration of the poem. After all these years\, it is clear that there is more to be done to reduce the effort required of the composer in bringing the sounds from our imagination into reality. This talk will illustrate some of the problems that had to be solved in the realization of the piece.  \n  \nJames Andy Moorer\nJames A. Moorer is an internationally known figure in digital audio and computer music\, with over 40 technical publications and many patents to his credit. In 1991\, he won the Audio Engineering Society Silver award for lifetime achievement. \nIn 1996\, he won an Emmy Award for Technical Achievement with his partners\, Robert J. Doris and Mary C. Sauer\, for Sonic Solutions/NoNOISE for Noise Reduction on Television Broadcast Sound Tracks. \nIn 1999\, he won an Academy of Motion Picture Arts and Sciences Scientific and Engineering Award (Oscar) for his pioneering work in the design of digital signal processing and its application to audio editing for film. \nHe is currently retired. \nFrom 1987 to 2001\, Dr. Moorer served as Senior Vice President for Advanced Development at Sonic Solutions\, where he was responsible for the NoNOISE package for restoration of vintage recordings. \nFrom 1986 to 1987\, Dr. 
Moorer consulted for NeXT\, Inc.\, on DSP software architecture for audio processing. \nFrom 1985 to 1986\, he was the chief technical officer at the Lucasfilm Droid Works. \nFrom 1980 to 1985\, he was the digital audio project leader at Lucasfilm\, Ltd. From 1977 to 1980\, he was the Responsable Scientifique (technical advisor) at IRCAM in Paris. \nFrom 1975 to 1977\, he was a founder and co-director of the Stanford Center for Computer Research in Music and Acoustics. \nFrom 1968 to 1972\, he was a professional programmer at the Stanford Artificial Intelligence Laboratory. \nDr. Moorer holds a PhD in Computer Science from Stanford University\, granted in 1975. Prior to that\, Dr. Moorer earned an S.B. in Applied Mathematics from MIT in 1968\, and an S.B. in Electrical Engineering from MIT in 1967. \n  \n  
URL:http://icmc2026.ligeti-zentrum.de/event/keynote-james-andy-moorer-history-of-computer-music-from-mathews-to-man-in-the-mangroves/
LOCATION:Lübeck University of Music: Kammermusiksaal\, Große Petersgrube 21\, Lübeck\, 23552\, Germany
CATEGORIES:15-05,Excursion to Lübeck,Keynote
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260515T170000
DTEND;TZID=Europe/Amsterdam:20260515T190000
DTSTAMP:20260424T214404Z
CREATED:20260421T113808Z
LAST-MODIFIED:20260423T185415Z
UID:10000161-1778864400-1778871600@icmc2026.ligeti-zentrum.de
SUMMARY:Workshop | Shelly Knotts\, Daniel Ratliff and Lucy Whalley: Sonification and the Space In-Between: Bridging Scientific Inquiry and Musical Practice
DESCRIPTION:Sonification has a long history within computer and electronic music\, with composers—such as Clarence Barlow\, Alvin Lucier and Laurie Spiegel—using data as a compositional tool. At the same time\, sonification is an established tool in scientific inquiry\, where it has been used for purposes ranging from the sonification of astronomical phenomena for public outreach\, to the analysis and communication of long-term environmental data. This workshop explores the rich interdisciplinary space that lies between computer music composition and scientific inquiry\, focusing on sonification as a shared methodology that bridges disciplines\, rather than a discipline-specific technique. Participants will be introduced to interdisciplinary working practices that support the identification of shared concerns and the development of common understanding when designing sonifications\, as well as software tools and computational workflows that enable collaborative work. The workshop will be led by researchers working together as part of the interdisciplinary project Sonic Intangibles\, and within their own domains of Sound Art\, Live Coding\, Mathematics and Computational Physics. By emphasising interoperability and participation\, this workshop aims to explore how sonification can generate increased discourse between\, and mutual benefit for\, musical and scientific communities. \n  \nRequirements\nThe workshop is accessible to a broad audience in computer music\, with the only prerequisite being some familiarity with any programming language. Attendees will be asked to install a local version of SuperCollider ahead of the workshop\, and will need to bring a laptop and headphones for the final part of the workshop. \n  \nWorkshop registration\nPlease register via Pretix in order to participate in the workshop. There are no additional costs.  
\n  \nAbout the workshop facilitators\nShelly Knotts produces live-coded and network music performances and projects which explore aspects of code\, data and collaboration in improvisation\, and has performed and presented her work at numerous events worldwide. Based in Newcastle upon Tyne\, UK\, she performs internationally\, collaborating with computers and other humans. She is currently a Post-doctoral Research Fellow on the Sonic Intangibles project at Northumbria University.\nIn 2021-23 she was an Artist-in-Residence on the Heritage Lottery funded Seascapes project\, working with communities in Sunderland. In 2016-2021 she worked on research projects around the use of AI\, data and networks in improvisation and composition and related social themes at Durham University (UK)\, Monash University (AUS)\, Newcastle University (UK) and McMaster University (CA). She completed a PhD in Live Computer Music at Durham University in 2018.\nIn 2017 she was a winner of the BBC Radiophonic Workshop and PRSF ‘The Oram Awards’ for innovation in sound and music.\nShe has taught numerous creative coding workshops at conferences\, festivals\, universities and cultural institutions worldwide\, and runs Creative Code Club — an informal and inclusive gathering of people interested in the practices and cultures of creative coding — at The NewBridge Project\, an artist-led space in Newcastle upon Tyne. \n  \nDaniel Ratliff is an associate lecturer at Northumbria University (2020-present)\, specialising in interdisciplinary approaches to waves across physics and beyond. 
He works internationally to connect concepts across mathematics\, oceanography\, statistical physics and space science to advance our understanding of these topics and bridge the disciplinary gaps that often separate these fields.\nHe has delivered several public engagement events (including Newcastle’s Pint of Science in 2023 and 2025 and public lectures at Newcastle’s Lit and Phil)\, has organised and delivered two 14-week research projects for KS3 students at a local school via the ORBYTS initiative\, and has designed and delivered a number of targeted researcher skills workshops for the PGR student cohorts at Northumbria University. \n  \nLucy Whalley is an Associate Professor in Physics at Northumbria University whose work spans quantum chemistry\, materials modelling and interdisciplinary scientific and creative practices. Her research is centred around the use of computational techniques and high performance computing to investigate the atomic-scale behaviour of materials\, particularly in contexts relevant to renewable energy.\nLucy is co-lead of the Sonic Intangibles project\, which explores how interdisciplinary practice across computer music\, ethnography and the physical sciences can enable sonification as a tool for research and communication. She is also a member of SDF\, an experimental electronic music collective whose work has been released through Lost Map Records and performed at venues including Iklectik (London) and Summerhall (Edinburgh).\nLucy is a Software Sustainability Institute Fellow and Associate Editor at the Journal of Open Source Software\, reflecting her interest in and advocacy for open and sustainable software development. She currently teaches programming\, quantum mechanics and computational physics at university level. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/workshop-shelly-knotts-et-al-sonification-and-the-space-in-between-bridging-scientific-inquiry-and-musical-practice/
LOCATION:Lübeck University of Music: Holstentorhalle\, Chorsaal\, Wallstraße 2\, Lübeck\, 23554\, Germany
CATEGORIES:15-05,Excursion to Lübeck,Workshop
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260515T170000
DTEND;TZID=Europe/Amsterdam:20260515T190000
DTSTAMP:20260424T214404Z
CREATED:20260421T121755Z
LAST-MODIFIED:20260423T185343Z
UID:10000166-1778864400-1778871600@icmc2026.ligeti-zentrum.de
SUMMARY:Workshop | Pierre Alexandre Tremblay\, Nicola Leonard Hein and Gilbert Nouno: Dialogues with Improvising Machines: an Embodied Cross-Testing Workshop on Musical Agents
DESCRIPTION:This 2.5-hour workshop examines four musical agent systems through a structured process of presentation\, performance\, collective listening\, and critical discussion. It addresses a central contemporary question in computer music: how the design of musical agents encodes particular modes of listening\, interaction\, and agency\, and how these design choices shape musical practice.\nThe workshop adopts a commented comparative-phenomenological methodology. It showcases four musical agent systems selected from an open call\, which are presented in turn\, each offering a distinct approach to interactive and improvisational musical behavior. These systems have been developed using different programming languages\, creative coding environments\, and technical frameworks\, reflecting the diversity of current practices in the field. Their comparison will therefore highlight not only aesthetic and compositional differences\, but also the ways in which specific tools and technical architectures condition musical affordances.\nFor each system\, the creator will quickly introduce its technical design\, musical aims\, and underlying assumptions. This will be followed by a short performance by the system’s author\, two exploratory performances by other workshop participants\, and a discussion with participants and attendees. The emphasis throughout will be on the system’s interactional qualities\, its encoded listening strategies\, and the forms of musical agency that emerge in performance.\nThe final part of the workshop will be devoted to a comparative discussion of the four systems\, with the aim of articulating shared vocabulary\, critical perspectives\, and possible evaluation criteria for improvising and interactive musical agents. In doing so\, the workshop seeks to contribute to ongoing discourse on the role of bias\, intention\, and technological mediation in musical system design. 
This closing conversation will reflect on the different forms of musical interaction that emerged\, and consider how we might develop shared vocabulary\, aims\, and evaluation criteria for improvising and interactive musical agents.\nThe workshop is intended for composers\, improvisers\, performers\, creative coders\, and researchers interested in interactive systems\, machine listening\, and AI in music. It particularly welcomes participants who wish to think critically about what it means to compose with\, perform with\, or delegate agency to computer-based musical systems.\nBy creating a space for presentation\, experimentation\, and peer critique\, the workshop aims to deepen discussion around the role of encoded listening in musical agent systems and the musical practices that emerge from it.\nThe workshop is led by Pierre Alexandre Tremblay (Conservatorio della Svizzera italiana)\, Nicola Leonard Hein (University of Music Lübeck)\, and Gilbert Nouno (Haute école de musique de Genève)\, whose artistic and research practices span composition\, improvisation\, creative coding\, and interactive systems. \n  \nRequirements\nListen and Discuss. \n  \nWorkshop registration\nPlease register via Pretix in order to participate in the workshop. There are no additional costs.  \n  \nAbout the workshop facilitators\nPierre Alexandre Tremblay is a composer and performer on bass guitar and electronic devices\, in solo and group settings\, between electroacoustic music\, contemporary jazz\, mixed music and improvised music. He has also worked in popular music and practices critical creative coding. 
His music is available on empreintes DIGITALes.\nHe studied composition with Michel Tétreault\, Marcelle Deschênes\, and Jonty Harrison; bass guitar with Jean-Guy Larin\, Sylvain Bolduc\, and Michel Donato; analysis with Michel Longtin and Stéphane Roy; and studio technique with Francis Dhomont\, Robert Normandeau\, and Jean Piché.\nPierre Alexandre was Professor of Composition and Improvisation at the University of Huddersfield (England\, UK) from 2005 to 2024\, where he anchored the ERC-supported Fluid Corpus Manipulation project. In September 2024\, he joined the team of the Conservatorio della Svizzera italiana as a research professor in composition.\nHe likes spending time with his family\, reading prose\, and going on long walks. \nNicola Leonard Hein is a sound artist\, guitarist\, creative technologist\, composer and researcher in music and philosophy. He is a professor of Sound Arts & Creative Music Technology at the University of Music Lübeck.\nHis work is driven by the interaction of sound and space\, light\, movement\, thought and the becoming of embodied and intermedial intelligence in aesthetic systems\, community and technology. In his artistic work\, he uses physical and electronic extensions of the electric guitar\, sound installations\, cybernetic human-machine interaction with A.I. interactive music systems\, Augmented Reality\, telematic real-time art\, ambisonic sound projection\, instrument building\, conceptual compositions\, intermedia works (with video art\, light\, dance\, literature) and much more.\nHis research revolves around questions of philosophy of music\, epistemology\, aesthetics\, media theory\, critical improvisation studies and cybernetics. It follows questions of the creation of identity and sense in interactions between humans and technology\, and investigates the philosophical implications of musical and intermedial practices. 
\nGilbert Nouno composes\, codes\, improvises\, and teaches at the Haute école de musique de Genève\, where he leads the CIMME (Interdisciplinary Center for Experimental Music and Media). He likes to blend sound with image\, and technology with the gestures of performance. Moving between tangible matter and dematerialized material\, his hybrid works invite audiences to cross the ever-shifting boundary between human and machine. With artificial intelligence\, he explores new playgrounds to expand improvisation\, rethink performance\, and imagine augmented artistic practices.\nThis approach is embodied in works such as SINE (2024)\, a performative multimedia piece in which gesture drives sound and video within an immersive\, AI-based audiovisual environment. A laureate of the Villa Kujoyama in Kyoto and the Académie de France à Rome Villa Medici\, he has shared stages\, studios\, and creative adventures with composers Pierre Boulez\, Jonathan Harvey\, Olga Neuwirth\, saxophonist Steve Coleman\, flutist Magic Malik\, choreographer Léo Lérus\, scenographer Jean Kalman\, and stage director Pierre Audi. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/workshop-pierre-alexandre-tremblay-et-al-dialogues-with-improvising-machines-an-embodied-cross-testing-workshop-on-musical-agents/
LOCATION:Lübeck University of Music: Ehemalige Bundesbank\, Schalterhalle\, Holstentorplatz 2\, Lübeck\, 23552\, Germany
CATEGORIES:15-05,Excursion to Lübeck,Workshop
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260515T200000
DTEND;TZID=Europe/Amsterdam:20260515T220000
DTSTAMP:20260424T214404Z
CREATED:20260421T171512Z
LAST-MODIFIED:20260422T132957Z
UID:10000177-1778875200-1778882400@icmc2026.ligeti-zentrum.de
SUMMARY:Evening Concert 5B (Lübeck)
DESCRIPTION:Program Overview\nImprovising Machine #7325: Inside My Trumpet\, Again\nJeff Kaiser \nThe Letter\nMinho Kang \nMoloch whose mind is pure machinery!\nEric Lyon \nTidal Unit for Sonic Activities\nIlia Viazov and Nicola Leonard Hein \nRhythmic Traces | Twisted Electronics\nNicola Leonard Hein \nFound Violin x Aromantic Hobby \nDong Zhou \nTokens & Strings: an improvisation between an electric guitarist and a local LLM\nOlivier Jambois \n  \nAbout the pieces & artists\nJeff Kaiser: Improvising Machine #7325: Inside My Trumpet\, Again\n“Improvising Machine #7325: Inside My Trumpet\, Again” places the audience inside a trumpet\, exploring the instrument’s interior sonic world through an immersive human–machine improvisation system. The work is built from an extensive\, purpose-built sample library captured by placing microphones deep within the instrument. These samples document the mechanical sounds and embodied actions of trumpet performance without the instrument being played traditionally—collections of the sound of valves descending\, springs releasing\, air being compressed and released by slides\, valve caps loosening\, spit-valve gurgles\, and a range of non-tonal lip\, air\, and tongue sounds produced through the mouthpiece and leadpipe. \nTwenty-eight autonomous virtual agents (“robots”)\, authored by the composer in Max/MSP and hosted in Ableton Live\, inhabit a 360-degree ambisonic field surrounding the audience. Each agent draws from its own subset of the sample library and listens to the live trumpet performance in real time. Their behaviors fluctuate between responsive and indifferent\, generating shifting environments that range from highly chaotic to unexpectedly calm. As a result\, the improvising performer becomes entangled with a machine ensemble that both reflects and subverts the human gestures\, creating a continuously changing dialogue between human and technological agents. 
\nAbout the artist\nJeff Kaiser is a trumpet player\, media technologist\, and scholar. Classically trained as a trumpet player and composer\, Kaiser now takes an integrative\, systemic view that involves his traditional instrument\, emergent technology (in the form of custom interactive/generative software and hardware interfaces)\, space\, and audience: all being critical and integral participants in his performances. He gains inspiration and ideas from the rich history of experimental improvisation and composition\, as well as cognitive science\, and the vast timbral and formal affordances provided by combining traditional instruments with new and repurposed technologies. The roots of his music are firmly in the experimental traditions within jazz\, improvisation\, and Western art music practices. Kaiser is currently Associate Professor of Music Technology and Composition at the University of Central Missouri. \nMore information at https://jeffkaiser.com/ \n  \nMinho Kang: The Letter\nThe Letter is a work of consolation created using an FFT Channel Vocoder with an Additive Synthesizer. \nHistorically\, the vocoder was developed during wartime to enable communication among allies. It reduces wideband speech to a narrower band for transmission and then reconstructs it at the receiver. In short\, a vocoder sends important words over distance and makes their faint traces audible again.\nAs a composer\, creating music is much the same. I keep listening to people and the world\, their voices. Then\, I compress\, interpret\, and reassemble those words in my own terms and offer them back as a piece.\nUnlike the vocoder’s original purpose\, in a time when war is no longer shocking news\, I wanted to use this technology to carry comfort. The lyrics come from a poem I wrote during my military service to endure a hard period (not in combat). This piece does not present a political agenda; it is a letter to anyone facing painful circumstances\, on any side\, in any degree. 
\nTechnically\, I aimed to design a vocoder with greater precision than a conventional channel vocoder. Instead of using bandpass filters\, I applied Fast Fourier Transform (FFT) analysis to collect more detailed and accurate amplitude information\, which allowed clearer rendering of vowel formants. This approach led to the creation of a Max for Live (M4L) FFT Channel Vocoder patch.\nI also developed an Additive Synthesizer M4L patch capable of producing a wide spectrum of sounds\, from pure sine waves to noise. When combined with the vocoder\, this synthesizer allows the clarity and harmonicity of speech to change according to the lyrics. Since the text relates to the transformation of light\, I used this Additive Synthesizer to achieve a tone painting that reflects those luminous changes. \nAbout the artist\nMinho Kang is a Korea-born composer and computer musician. His artistic interests\, which began in popular music and moved into contemporary music\, have expanded into electronic music at the intersection of technology and art. Drawing on introspective reflection and close observation of the world\, he brings diverse imaginings into his works.\nHis music has been presented at conferences and festivals including SEAMUS\, ICMC\, and the TurnUp Multimedia Festival. He completed his bachelor’s degree at Indiana University\, where he studied composition with Jeremy Podgursky\, Aaron Travers\, P. Q. Phan\, David Dzubay\, and Don Freund\, and electronic music with John Gibson and Chi Wang at the Center for Electronic and Computer Music. \n  \nEric Lyon: Moloch whose mind is pure machinery!\nAllen Ginsberg’s poem Howl was published in 1956\, the same year as the Dartmouth Summer Research Project on Artificial Intelligence. The two events portend seemingly incompatible futures that nonetheless are both with us now. A bursting forth of cultural chaos in an “armed madhouse” and the technocratic reduction of intelligence to code. 
The ritualistic and repetitive rant about Moloch in Ginsberg’s poem inspired this performance\, a tone poem that derives its sounds from two main sources: AI-generated music and the OB-Xd virtual analog synthesizer VST plugin manipulated using the Slewable Utility for Random Parameters (SLURP) designed by the composer. The performance interface consists of a Korg nanoKONTROL2 unit and the Google MediaPipe face landmarker. \nAbout the artist\nEric Lyon is a composer and audio researcher focused on high-density loudspeaker arrays\, dynamic timbres\, virtual drum machines\, and performer-computer interactions. His audio signal processing software includes “FFTease” and “LyonPotpourri.” He has authored two computer music books\, “Designing Audio Objects for Max/MSP and Pd\,” a guidebook for writing audio DSP code for live performance\, and “Automated Sound Design\,” a book that presents technical processes for implementing oracular synthesis and processing of sound across a wide domain of audio applications. He has written extensively about the possibilities of multichannel spatial audio. In 2016-17\, Lyon was guest editor for the Computer Music Journal on Volume 40(4) and 41(1) covering various aspects of High-Density Loudspeaker Arrays (HDLAs). \nIn 2015-16\, Lyon architected both the Spatial Music Workshop and Cube Fest at Virginia Tech to support the work of other artists working with HDLAs. In 2025 he co-created the Spatial Audio Tidepool to provide technical instruction for creative uses of high-density loudspeaker arrays. Lyon’s compositional work has been recognized with a ZKM Giga-Hertz prize\, MUSLAB award\, the League ISCM World Music Days competition\, and a Guggenheim Fellowship. Lyon teaches in the School of Performing Arts at Virginia Tech\, and is a Faculty Fellow at the Institute for Creativity\, Arts\, and Technology. 
\n  \nIlia Viazov and Nicola Leonard Hein: Tidal Unit for Sonic Activities\nPerformance-presentation of tusa (Tidal Unit for Sonic Activities). Tusa is a framework for the Tidal Cycles live-coding environment that binds together different parts of the application in one Bash executable. It is an attempt to extend Tidal Cycles into a software DMI. It seeks to fulfill essential needs during performance with the environment\, keeping the setup very minimal yet sturdy\, while remaining modular and extendable. The framework allows the user access to the interpreter\, text editor\, reference window and server during live-coding practices.\nThe performance is aimed at live-coding improvisation with machine learning tools using spatialisation synthesis techniques. \nAbout the artists\nIlia Viazov (born in 1999 in Voronezh\, Russia) is a composer and sound artist working at the intersection of electronic music\, performance\, self-built instruments\, machine learning\, and software development. His personal and collaborative works have been presented at and supported by Ars Electronica Festival\, platformB Stuttgart\, and Darmstädter Ferienkurse. He is developing the framework tusa for the Tidal Cycles live-coding environment\, a terminal implementation that allows the user to run it locally\, fully interact with all parts of the environment and extend it. \nDr. Nicola Leonard Hein \n\nNicola Leonard Hein: Rhythmic Traces | Twisted Electronics\nThe piece Rhythmic Traces | Twisted Electronics deals with the question of how the integration of the body and skin resistance into the circuit of an analog synthesizer (Buchla Music Easel) and the connection with a machine learning-based musical agent system (SuperCollider) can change the tonal and rhythmic fluidity of the instrument and develop it beyond its limits. For this piece\, Nicola Leonard Hein uses a unique circuit-bending controller that completely alters the musical reading of the 1970s Buchla Music Easel. 
Furthermore\, he uses a multi-effect unit programmed in SuperCollider and realized with a Bela microcomputer. Hein’s musical agent learns to interact musically\, creating the music in real time together with Hein on the synthesizer and developing the interaction between a human and a machine musical voice. The systemic economy of movement and the interaction with the AI musical agent create polyphonic rhythmic\, tonal\, and spatial structures. The piece focuses on the emergent Dances of Agency (Pickering). \nAbout the artist\nDr. Nicola L. Hein is a sound artist\, guitarist\, composer\, researcher\, programmer\, and professor of Sound Arts and Creative Music Technology at the University of Music Lübeck.\nHe works with A.I.-assisted human-machine interaction\, postdigital lutherie\, intermedia\, sound installations\, augmented reality\, network music\, and spatial audio. His works have been realised in more than 30 countries\, at festivals such as MaerzMusik Festival\, Sonica Festival\, Experimental Intermedia\, etc. \n  \nDong Zhou: Found Violin x Aromantic Hobby \nFound Violin is an improvisation system that treats the violin as just one of many sound objects. Since late 2024\, Dong Zhou has been developing Aromantic Hobby\, a series of strap-on MIDI controllers. After a few prototypes\, the current controller features a bunny-shaped appearance and wirelessly transmits kinetic data from the wearer to control a chaotic synthesizer. With Found Violin played with the upper body and Aromantic Hobby on the lower body\, the musician plays a duo with themselves. \nAbout the artist\nDong Zhou is a composer-performer based in Hamburg. Zhou gained a B.A. in music engineering at the Shanghai Conservatory and an M.A. in multimedia composition at the Hamburg University of Music and Drama. 
Zhou has won several prizes\, including first prize in the 2018 ICMC Hacker-N-Makerthon\, a finalist place in the 2019 Deutscher Musikwettbewerb\, the Nota-n-ear Award 2022\, and a shortlisting for the 2025 Giga-Hertz Pop Experimental Production Award. Zhou had works included in the ‘Sound of World’ Microsoft ringtones collection and was commissioned by festivals and institutions such as the Shanghai International Art Festival\, ZKM Karlsruhe\, Stimme X Festival\, etc. Zhou is currently a doctoral candidate at ICAM of Leuphana University. \n  \nOlivier Jambois: Tokens & Strings: an improvisation between an electric guitarist and a local LLM\nThis performance explores real-time co-creation between a human performer and a machine\, specifically investigating the improvisational capabilities of Large Language Models (LLMs) within a musical context. The project originates from an inquiry into the potential of using established LLM architectures—notably the one behind ChatGPT—as responsive improvisational partners. \nA primary challenge in this research is the nature of the LLM: as these models are designed for symbolic processing rather than direct audio generation\, the system must bridge the gap between acoustic signals and semantic analysis. An architecture was developed where the electric guitar’s audio is captured and processed to extract high-level audio descriptors. These descriptors are then sent to the LLM\, which analyzes the performer’s intent and generates a symbolic rhythmic response. This response is mapped to a drum sequencer controlling kick\, snare\, and hi-hat patterns.\nTo address the inherent risks of cloud-based APIs in a live performance environment—such as latency and connectivity instability—this work utilizes a local deployment. While local models often feature a smaller parameter count\, the system has been optimized through careful prompt design and constraint-based logic. 
This ensures a meaningful rhythmic dialogue while minimizing inference time\, achieving a critical trade-off between algorithmic complexity and real-time musical reactivity. \nIn this performance\, the generative drumming output is routed through a RAVE (Real-time Audio Variational auto-Encoder) module\, developed by IRCAM. By applying neural re-synthesis via a model pre-trained on percussion sounds\, the system transforms these source samples into complex\, evolving textures\, moving beyond static playback toward a more sophisticated timbral exploration. Throughout the improvisation\, the guitar signal is processed through custom-designed Pure Data patches\, creating a personal sonic language that oscillates between raw strings and highly transformed textures\, seeking a constant state of flux between contrast and blending with the machine-generated environment. \nAbout the artist\nOlivier Jambois is a guitarist\, composer\, and researcher working at the intersection of acoustic tradition\, analog electronics\, and digital innovation. He holds a PhD in condensed matter physics and a Master’s degree in jazz and modern music\, a dual background that defines his analytical yet avant-garde approach to music.\nAfter he won the Jazz à Vienne national competition in 2012\, his debut Les Composantes Invisibles earned “Revelation” honors from Jazz Magazine. Since then\, he has contributed to over 16 albums on labels like Naïve and Underpool\, performing at major festivals including Nancy Jazz Pulsations and the Barcelona Jazz Festival.\nHis recent work bridges historical and cutting-edge technologies. A 2023 grant from the Generalitat de Catalunya supported his research into DIY magnetic tape echoes\, resulting in the solo album Self Tape Echoes (2024). In contrast\, his project Cosmogonies uses Pure Data on a Raspberry Pi for purely digital expression. He integrates these two worlds in live improvisations at festivals like LEM and MIRA. 
His 2025 release\, Eclosió\, featuring drummer Jim Black\, further establishes his influence in contemporary improvisation.\nAs a Professor and Researcher at ENTI-UB\, Jambois focuses on AI and generative systems. By synthesizing scientific methodology with contemporary creation\, he continues to push the boundaries of the electric guitar through custom-built hardware and computational tools. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/evening-concert-5b-lubeck/
LOCATION:Lübeck University of Music: Großer Saal\, Große Petersgrube 21\, Lübeck\, 23552\, Germany
CATEGORIES:15-05,Concert,Excursion to Lübeck,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260515T223000
DTEND;TZID=Europe/Amsterdam:20260515T223000
DTSTAMP:20260424T214404
CREATED:20260421T195201Z
LAST-MODIFIED:20260423T123822Z
UID:10000101-1778884200-1778884200@icmc2026.ligeti-zentrum.de
SUMMARY:Excursion: Departure (Return) to Hamburg
DESCRIPTION:
URL:http://icmc2026.ligeti-zentrum.de/event/excursion-departure-return-to-hamburg/
CATEGORIES:15-05,Excursion to Lübeck
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
END:VCALENDAR