BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//ICMC HAMBURG 2026 - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:ICMC HAMBURG 2026
X-ORIGINAL-URL:http://icmc2026.ligeti-zentrum.de
X-WR-CALDESC:Events for ICMC HAMBURG 2026
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Amsterdam
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20250330T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20251026T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20260329T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20261025T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20270328T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20271031T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260510T193000
DTEND;TZID=Europe/Amsterdam:20260510T220000
DTSTAMP:20260423T153918Z
CREATED:20260421T081038Z
LAST-MODIFIED:20260423T112509Z
UID:10000070-1778441400-1778450400@icmc2026.ligeti-zentrum.de
SUMMARY:Opening Concert
DESCRIPTION:Program Overview\nIntroduction \nAlexander Schubert – SCANNERS (2013)\nfor string quintet\, choreography\, and electronics (12 min) \nNicole Brady – Ricochet (World Premiere 2026)\nfor chamber orchestra and live electronics (10 min) \nAnthony Paul De Ritis – Filters (2015 / 2026)\nfor alto saxophone\, string orchestra\, and live electronics (10 min) \nIntermission (25 min) \nAigerim Seilova / Steffen Lohrey – Breath Mechanics (World Premiere 2026)\nfor two soprano saxophones\, string ensemble\, and live electronics (10 min) \nClarence Barlow – Im Januar am Nil (1984)\nfor ensemble (approx. 25 min) \nShort break (10 min) \nClosing & Conference Information (15 min) \n  \nPerformers\nEnsemble Resonanz – strings\nAsya Fateyeva – saxophone\nVlatko Kučan – saxophone\nJohn Eckhardt – double bass\nDulguun Chinchuluun – piano\nLin Chen – percussion \nConductor\nFriederike Scheunchen \nFind out more about the musicians playing at ICMC HAMburg 2026 here.  \n  \nAbout the pieces\nAlexander Schubert: SCANNERS (2013)\nfor string quintet\, choreography\, and electronics \nThe piece SCANNERS engages with the physical qualities of instrumentalists in electro-acoustic music. It is a choreographed composition that treats movement as equally important as sound. The string ensemble turns into a performing machine. The main focus is on the movement of scanning – both in the interaction of bow and instrument when producing sound and in purely artificial gestures. There is no difference between musically necessary and choreographically determined movement. The piece can be seen as a comment on the relationship of man to digital content: the direct consequences of an action can no longer be explained by simple cause-and-effect principles\, and the musicians become puppets\, or at least part of a complex machine. 
At the same time the piece offers a special focus on the highly specialized genre of the string orchestra: the mechanization emphasizes the accuracy of the interpreter and the elegance of the traditional movement\, here staged independently from the production of sound.\nScanners belongs to a series of compositions that deal with physicality\, such as Point Ones\, with an interactive conductor\, or LaPlace Tiger\, with a sensor-wired drummer. \nAbout the composer\nAlexander Schubert (b. 1979) studied bioinformatics and multimedia composition. He’s a professor at the Musikhochschule Hamburg. Schubert’s work explores the border between the acoustic and electronic world. In music composition\, immersive installation and staged pieces he examines the interplay between the digital and the analogue. He creates pieces that realize test settings or interaction spaces that question modes of perception and representation. Recurring topics in this field are authenticity and virtuality. The influence and framing of digital media on aesthetic views and communication is examined from a post-digital perspective. Recent research topics in his works have been virtual reality\, artificial intelligence and online-mediated artworks. Schubert is a founding member of ensembles such as “Decoder”. His works have been performed more than 700 times in the last few years by numerous ensembles in over 30 countries. \n  \nNicole Brady: Ricochet (World Premiere 2026)\nfor chamber orchestra and live electronics \nRicochet explores the idea of deviation from an expected path after an initial impact\, leading to new directions. Inspired by the ricochet bowing technique\, this concept unfolds both physically and metaphorically within the ensemble.\nA responsive electronic system listens to the orchestra and generates a parallel sonic layer. Energetic passages produce scattered\, percussive textures\, while quieter material leads to dense\, sustained sound fields. 
The system alternates between listening and generative modes\, interacting closely with the performers.\nSubtle references to composers such as Couperin\, Ravel\, and Mozart connect historical material with contemporary sound\, while the electronics act as an additional\, autonomous voice within the ensemble. \nAbout the composer\nNicole Brady is an award-winning composer and creative director whose work spans concert music\, immersive installation\, and video game franchises including Final Fantasy\, Tekken\, and Valkyria Chronicles. Her work has been honoured by the Peabody Awards and IndieCade\, and her immersive sound album Lost Palace was released with the Royal Scottish National Orchestra. Recent commissions and performances include the Omega Ensemble\, Melbourne Symphony Orchestra\, Flinders Quartet\, and Lyris Quartet. As creative director of WLDR studio\, her immersive multisensory works have reached over 20\,000 participants across Illuminate Adelaide and Spier Light Art Festival. Nicole is a researcher at the Melbourne Conservatorium of Music and recipient of the Director’s Award for Exceptional Doctoral Research. \n  \nAnthony Paul De Ritis: Filters (2015 / 2026)\nfor alto saxophone\, string orchestra\, and live electronics \nOriginally composed for alto saxophone and electronic playback\, Filters explores the layering and spatial diffusion of sound. Recorded saxophone material creates a “second” voice\, blending with the live soloist into a unified\, resonant field.\nIn this version for saxophone\, string orchestra\, and multi-channel electronics\, the ensemble extends these layers\, producing a rich interplay between live instruments and their electronically mediated “shadows.”\nThe solo saxophone remains at the expressive center\, while the surrounding textures generate depth\, movement\, and an immersive spatial experience. 
\nAbout the composer\nDescribed as a “genuinely American composer” (Gramophone)\, “a bit of a visionary” (Audiophile Audition)\, and “bracingly imaginative” (The Boston Globe)\, Anthony Paul De Ritis has received performances around the world\, including at Lincoln Center\, Beijing’s Yugong Yishan\, Seoul’s KT Art Hall\, the Italian Pavilion at the 2015 World Expo in Milan\, and UNESCO headquarters in Paris. \nDe Ritis’s 2012 release “Devolution” by the GRAMMY® Award-winning Boston Modern Orchestra Project\, featuring Paul D. Miller aka DJ Spooky as soloist\, was described as a “tour de force” (Gramophone); his “Pop Concerto” (2017) featuring Eliot Fisk was lauded as “a major issue of American music” (Classical CD Review); and his “Electroacoustic Music – In Memoriam: David Wessel” (2018) was cited as among the “Best of 2018” in the electronic music category (Sequenza 21). \nHe holds a Ph.D. from the University of California\, Berkeley\, and is a Professor at Northeastern University\, where he co-founded the music technology program. \n  \nAigerim Seilova and Steffen Lohrey: Breath Mechanics (World Premiere 2026)\nfor two soprano saxophones\, string ensemble\, and live electronics \nThis work is a composition for two soprano saxophones\, string ensemble (4.4.4.2)\, and 8.1 live electronics\, submitted for the ICMC Special Call 1: Ensemble Resonanz. The piece serves as a spectral dialogue with Clarence Barlow’s Im Januar am Nil\, adopting his strategies of timbral fusion and hocketing but transposing them into the age of Machine Learning. The central material is derived from “ChordsNest\,” a multiphonics palette extension for MaxScore\, which is repurposed here as a training set for a neural network. The compositional core is an “AI Translation Error” in which the model was tasked with reconstructing the cylindrical bore spectra of the digital archive using the conical bore of the live saxophones and the acoustic textures of the string ensemble. 
\nThe resulting score is a transcription of the AI’s “hallucinations\,” where the ensemble physically replicates the digital artifacts of the style transfer process. The 8.1 electronics mediate this through a dual-role feedback loop. They function first as a synthesized “externalized memory” of the source spectra and secondly as a live inferencing engine that generates “retrospective hypotheses” by attempting to recover source-states from the acoustic performance. This architecture stages a recursive friction between the explicitly presented digital archive and the machine’s error-prone attempt to reconstruct it through physical sound. \nAbout the composers\nHamburg-based composer Aigerim Seilova integrates acoustics\, electronics\, and interactive media. A doctoral researcher at HfMT Hamburg\, her works are performed by Ensemble Modern and the Norwegian Radio Orchestra at festivals like Tanglewood and Chelsea Music Festival. Awards include the Hindemith Prize\, Leonard Bernstein Fellowship\, and Radio France Prize. She serves as Deputy Chair of the DKV Hamburg\, promoting contemporary music and interdisciplinary exchange. \nBorn in Gießen in 1987\, Steffen Lohrey studied Digital Media with a focus on sound in Darmstadt and Multimedia Composition at the Hamburg University of Music and Drama (HfMT Hamburg). His work exists at the intersection of composition\, installation\, and code. He has been involved in a wide range of projects\, including Picadero with the Haa Collective (presented at venues such as Deltebre Dansa and the Fusion Festival)\, Crawlers with Alexander Schubert (ZKM Karlsruhe)\, and Shibboleth by Aigerim Seilova at HfMT Hamburg. His work and collaborations have been featured at Blurred Edges\, the Teatre Principal Terrassa\, and the GREC Festival\, among others. In addition\, Steffen Lohrey works as an audio engineer and sound designer in Hamburg. 
\n  \nClarence Barlow: Im Januar am Nil (1984)\nfor 2 soprano saxophones (1st+clarinet\, bass clarinet)\, 4 violins\, 2 celli\, double bass\, piano\, percussion  \nIm Januar am Nil was written in 1981 for Ensemble Köln – the instrumentation: two soprano saxophones\, percussion (five Japanese temple bells\, a Korean gong\, a crotale\, a cymbal\, a side drum and a bass drum)\, a piano\, four violins\, two cellos and a double-bass. In 1984 the completely revised piece was premiered in Paris by Ensemble Itineraire.\nThrough the piece runs a constantly repeated melody\, increasing both in length and density – new tones appear in the expanding gaps\, first in a purely auxiliary function\, but gradually harmonically rivalling the older tones. A single note at the start develops into a flowing melody moving from transparent tonality through multitonality to a dense self-destructive atonality.\nAt first the melody is played almost inaudibly by the bass clarinet\, amplified by overtones heard as natural harmonics in the strings: the resultant timbre is phonetic\, based on a Fourier analysis of German sentences (as for instance the title itself) containing only harmonic spectra\, namely liquids\, nasals and semi-vowels. Ideally these “scored Fourier-synthesized” words should be comprehensible\, but an ensemble of seven strings can only be approximative. After a few minutes of bass clarinet and strings\, the piano enters in an explicit rendition of the melody\, developing it as described above and timbrally coloured by “hocketing” soprano saxophones. The double bass now also explicitly plays the melody without further developing it – in a “frozen” state it is contrasted with the piano part and slows down during further repetitions due to its increasing length. \nAbout the composer\nClarence Barlow (1945–2023) was a composer and pioneer of computer music\, born into the English-speaking minority of Calcutta (now Kolkata)\, India. 
He received his early education there\, studying piano\, music theory\, and natural sciences\, and began composing at the age of twelve. After graduating in science from the University of Calcutta in 1965\, he worked as a conductor and teacher of music theory at the Calcutta School of Music.\nIn 1968\, Barlow moved to Cologne\, where he studied composition and electronic music at the Hochschule für Musik\, alongside studies at the Institute of Sonology in Utrecht. During this period\, he began using computers as a compositional tool\, becoming one of the early figures to explore algorithmic and computer-assisted composition.\nFrom the 1980s onward\, Barlow played a central role in shaping the field of computer music. He was closely associated with the Darmstadt Summer Courses\, where he directed computer music activities for over a decade\, and was a co-founder of GIMIK (Initiative Musik und Informatik Köln). He also held numerous academic positions across Europe\, including at the Royal Conservatory in The Hague\, where he served as Professor of Composition and Sonology and later as Artistic Director of the Institute of Sonology.\nFrom 2006 until his retirement\, Barlow was Corwin Professor of Composition at the University of California\, Santa Barbara. His work is characterized by a unique synthesis of mathematical rigor\, cultural hybridity\, and innovative approaches to musical structure\, making him one of the most distinctive voices in contemporary music. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/opening-concert/
LOCATION:Elbphilharmonie Hamburg\, Recital Hall\, Platz der Deutschen Einheit\, Hamburg\, 20457\, Germany
CATEGORIES:10-05,Concert,Music,Special Event
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T110000
DTEND;TZID=Europe/Amsterdam:20260511T173000
DTSTAMP:20260423T153918Z
CREATED:20260421T181209Z
LAST-MODIFIED:20260421T181209Z
UID:10000184-1778497200-1778520600@icmc2026.ligeti-zentrum.de
SUMMARY:Listening Room 2
DESCRIPTION:430-+\nAyako Sato \nFMVP!\nGuanjun Qin \nLunar Current\nChufan Zhang\, Jun Wang and Qi Liu \nSawa\nAkiko Hatakeyama \nSuwol for Tape\nSeongah Shin \nTake Me Back to Indonesia\nBoyi Bai \nVentward\nEd Osborn \nWoody\nAdrian Kleinlosen \nZen to Hearth\nYu Linke \n 
URL:http://icmc2026.ligeti-zentrum.de/event/listening-room-2-1/
LOCATION:Hamburg University of Technology\, Building A (A 0.14)\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Listening Room,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T110000
DTEND;TZID=Europe/Amsterdam:20260511T173000
DTSTAMP:20260423T153918Z
CREATED:20260421T183941Z
LAST-MODIFIED:20260421T183941Z
UID:10000183-1778497200-1778520600@icmc2026.ligeti-zentrum.de
SUMMARY:Listening Room 1
DESCRIPTION:Plight of the Monarch\nSalvatore Siriano \nAxis of Frost\nLiuyang Tan \nscanning for video\nKeisuke Yagisawa \nVeil – Audiovisual performance with real-time motion detection by MediaPipe\nYiting Shao \nEbow Supernova\nCristiano Riccardi \nInterwoven Realms: The Threefold Domain of Consciousness\nQing Ye and Yuxue Zhou \nOkinawa Blue Note\, Recalled\nYerim Han \nQuantum Sphere & Sound Sympathy — Composed for Guzheng and Quantum Computing\nWeijia Yang \nThe Orphic Shimmer onto the 192 Steps\nWanjun Yang \nTranscendence: Performance without Presence\nJinwoong Kim \nTriangulation\nTalia Amar \nWhispers That Are Heard\nJingfan Guo \nLabyrinthe Souriant (Smiling Labyrinth)\nShih-Lin Hung and Ju An Hsieh \nEchoes of the dial\nYunpeng Li
URL:http://icmc2026.ligeti-zentrum.de/event/listening-room-1-1/
LOCATION:Hamburg University of Technology\, Building A (A 0.18)\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Listening Room,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T133000
DTEND;TZID=Europe/Amsterdam:20260511T150000
DTSTAMP:20260423T153918Z
CREATED:20260421T084731Z
LAST-MODIFIED:20260423T113343Z
UID:10000077-1778506200-1778511600@icmc2026.ligeti-zentrum.de
SUMMARY:Lunch Concert 1A
DESCRIPTION:After the Opening Concert of ICMC HAMBURG 2026\, the regular music program begins today. This first Lunch Concert offers an insight into the current international computer music scene. What makes this event special is the personal presence of the artists: the composers are either on stage themselves or have brought the musicians they wrote for with them to Hamburg.\nIt is a program of short distances between idea and sound. The works demonstrate how diverse collaboration between humans and technology can be today—from the classical solo clarinet to interactive formats. \n  \nProgram Overview\nTyche\nSever Tipei \nAIKYAM\nClaudia Robles Angel \nHOTPO\nMichael Edwards \nTessellae\nRodrigo Cadiz \nThe Center of the Universe\nSunhuimei Xia \n  \nAbout the pieces & artists\nSever Tipei: Tyche \nTyche for Bb clarinet and fixed media is a composition generated with original software for computer-assisted (algorithmic) composition and sound design developed by the composer and his collaborators.\nDivided into four main sections of 2-3-1-2 minutes\, the work utilizes stochastic distributions\, Markov chains\, sieves and Just Intonation as well as detailed control of spectra\, FM transients\, spatialization and reverberation. A basic framework of precise proportions and deterministic procedures is complemented by random details governed by Tyche\, the goddess of fortune\, chance\, providence and fate. \nAbout the artist\nA composer and a pianist\, Sever Tipei was born in Bucharest\, Romania\, and immigrated to the United States in 1972. He holds degrees in composition from the University of Michigan (DMA) and piano performance from the Bucharest Conservatory (Diploma). Tipei taught at the Chicago Musical College of Roosevelt University and\, between 1978 and 2021\, at the University of Illinois at Urbana-Champaign School of Music. After retirement\, Tipei continues to teach in the School of Information Sciences\, where he also directs the “James W. 
Beauchamp Computer Music Project”. He is also a National Center for Supercomputing Applications Faculty Affiliate. Between 1993 and 2003\, Tipei was a Visiting Scientist at Argonne National Laboratory\, where he worked on the sonification of complex scientific data.\nMost of his compositions were produced with software he designed: MP1 – a computer-assisted composition program first used in 1973\, DIASS – for sound synthesis\, and M4CAVE – software for the visualization of music in an immersive virtual environment. More recently\, Tipei and his collaborators have developed DISSCO\, software that unifies computer-assisted (algorithmic) composition and (additive) sound synthesis into a seamless process. His compositions have been performed in the US\, Australia\, Brazil\, France\, Germany\, Italy\, Portugal\, Romania\, Spain\, the United Kingdom and Taiwan. \n  \nClaudia Robles Angel: AIKYAM \nAIKYAM is a real-time surround sound work for 1 performer and 5 to 6 participants (audience)\, inspired by Kuramoto’s mathematical model of spontaneous order or synchronisation in nature\, e.g. fireflies\, heart rates or humans clapping their hands together. The term AIKYAM is based on the Sanskrit word ऐक्यम\, meaning unity or harmony. \nAbout the artist\nBorn in Bogotá (Colombia) and living in Cologne (Germany)\, Claudia Robles Angel is a composer and sound and new media artist whose work covers different aspects of visual and sound art\, extending from acousmatic and audio-visual compositions to interactive performances/installations using biomedical signals and AI (Artificial Intelligence).\nShe has been artist-in-residence at several outstanding institutions around the globe. In 2022 she was awarded an honorary mention by the GIGA-Hertz Award at ZKM.\nHer work has been performed and exhibited worldwide\, e.g. 
at ZKM\, ISEA; KIBLA Centre Maribor\, CAMP Festival – 55 Venice Biennale Salon Suisse\, ICMC; New York City Electroacoustic Music Festival; NIME; STEIM; Harvestworks Digital Arts Center NYC\, Heroines of Sound Berlin; Audio Art Festival Cracow; MADATAC Madrid; Athens Digital Art Festival ADAF\, CMMAS Morelia; Beast FEaST Birmingham; ICST ZHdK Zurich; RE:SOUND Aalborg; Electric Spring Festival Huddersfield; AI Biennal Essen; at the Centre for International Light Art Unna and more recently at Acht Brücken Festival Cologne and at the Philharmonie Essen. \nwww.claudearobles.de \n  \nMichael Edwards: HOTPO \nHinting at something a little more coarse\, the title HOTPO is in fact a completely innocent reference to the Collatz Conjecture. This mathematical proposition\, also known by other names\, refers to a succession of numbers called the hailstone sequence (or wondrous numbers)\, because their values usually ascend and descend like hailstones in a cloud.\nThough a mathematical proof of the conjecture remains elusive\, the proposition itself is very simple: Take any positive whole number; if it is even\, divide it by two; if it is odd\, multiply it by three and add one (hence the acronym Half Or Three Plus One: HOTPO); repeat the process with the result and you will find that no matter which number begins the process\, you will always\, given enough iterations\, reach one.\nThe algorithm is easy to programme and experiment with\, and it produces rather nice images when given different starting numbers and plotted over various iterations. I used the algorithm in this piece to generate section lengths and repeated structures from nine basic rhythm sequences\, hence my sequence was 9 28 14 7 22 11 34 17 52 26 13 40 20 10 5 16 8 4 2 1. The piece alternates sections opposing mixed materials (odd section numbers) with obsessively repeated material (even). The numbers are also used for the generation of the sound files triggered during the performance. 
Despite the rather abstract nature of the generative procedure\, the results of the algorithms were developed intuitively\, and the piece as a whole arises out of and proceeds through a maelstrom of events fitting the imagery of a hailstorm.\nHOTPO was commissioned by Henrique Portovedo for the World Saxophone Congress 2018 in Zagreb. That version included an ensemble. In 2020 I reworked the sound files to include MIDI data from the ensemble and made a solo + computer version. This was revised in 2024. \nAbout the artist\nI’m a composer\, improvisor\, software developer\, and since 2017 Professor of Electronic Composition at ICEM\, Folkwang University of the Arts\, Essen\, Germany.\nI’m the programmer of the slippery chicken algorithmic composition package. My compositional interests lie mainly in the development of structures for hybrid electro-instrumental pieces through the integration of algorithmically produced scored materials with similarly generated computer-processed sound. I also improvise on laptop\, saxophones\, and MIDI wind controller\, performing for instance at the 2008 Montreux Jazz Festival.\nI studied composition at Bristol University with Adrian Beaumont (BA\, MMus) and privately with Gwyn Pritchard. In 1991 I moved to the US for further studies in computer music with John Chowning at CCRMA\, Stanford University (MA\, Doctor of Musical Arts). Whilst studying there I also worked at IRCAM\, Paris\, with a residence grant at the Cité des Arts.\nDuring 1996-7 I was a consultant software engineer in Silicon Valley\, where I developed a document recognition system used in several US hospitals. In 1997 I was appointed Lecturer in Music Theory at Stanford but later that year moved to Salzburg\, Austria. I was Guest Professor at the Universität Mozarteum until I left to teach at the University of Edinburgh in 2002. 
\n  \nRodrigo Cadiz: Tessellae \nTessellae for percussion and live electronics unfolds as a mosaic of small rhythmic tiles laid in time by a single performer. The percussion writing is built on Euclidean rhythmic principles\, patterns that distribute events as evenly as possible\, expanded through asymmetric tuplets (notably groups of three and five)\, repetitions\, and carefully placed silences that create a strong sense of anticipation from phrase to phrase. Only one or two instrumental lines sound at a time\, allowing the listener to perceive each gesture as a discrete tessera within a larger rhythmic surface. The live electronics\, built on RAVE\, a real-time variational autoencoder developed at IRCAM and trained on a corpus of percussion sounds\, listen to the performer and respond by reshaping timbre and resonance in the moment\, extending and refracting the acoustic material without fixing it in advance. The result is a dialogue between strict rhythmic architecture and fluid sonic transformation\, where expectation\, delay\, and renewal are central expressive forces. Tessellae was composed for Thierry Miroglio. \nAbout the artist\nRodrigo F. Cádiz is a composer\, researcher and engineer. He studied composition and electrical engineering at the Pontificia Universidad Católica de Chile (UC) in Santiago and he obtained his Ph.D. in Music Technology from Northwestern University. His compositions\, consisting of approximately 70 works\, have been presented at several venues and festivals around the world. His catalogue considers works for solo instruments\, chamber music\, symphonic and robot orchestras\, visual music\, computers\, and new interfaces for musical expression. He has received several composition prizes and artistic grants both in Chile and the US. He has authored around 70 scientific publications in peer reviewed journals and international conferences. 
His areas of expertise include sonification\, sound synthesis\, digital audio processing\, computer music\, composition\, new interfaces for musical expression and the musical applications of complex systems. In 2018\, Rodrigo was a composer in residence with the Stanford Laptop Orchestra (SLOrk) at the Center for Computer Research in Music and Acoustics (CCRMA)\, and a Tinker Visiting Professor at Stanford University. In 2019\, he received the prize of Excellence in Artistic Creation from UC\, given for outstanding achievements in the arts. In 2024\, he was a visiting researcher at the Orpheus Instituut in Belgium. He is currently full professor at the Music Institute and Electrical Engineering Department of UC. \n  \nSunhuimei Xia: The Center of the Universe\nThe Center of the Universe\, an algorithmic music work integrated with interactive technology\, draws inspiration from the artist’s immersive impressions of New York City gleaned through multiple on-site visits. Standing atop the Empire State Building\, the artist perceived the metropolis as a dynamic global nexus where people of diverse cultural and ethnic backgrounds converge\, weaving a vibrant\, multifaceted urban tapestry that resonates with the energy of an interconnected world. Taking the phrase “The Center of the Universe” as its foundational sonic material\, the work delivers innovation through experimental multilingual vocal manipulation—deploying the core line in English\, Spanish\, French\, German\, Italian\, Russian\, Chinese\, Japanese\, Korean\, and Thai—with all vocal textures sourced from sampled macOS AI voices\, blending computational sound synthesis with linguistic diversity to push the conventional boundaries of vocal-based algorithmic composition. 
It achieves nuanced translation by converting the artist’s subjective perceptual experience of the city into an audible\, interactive sonic landscape\, while translating the abstract idea of cross-cultural convergence into tangible musical logic via the layered interplay of multilingual vocal samples. Further embodying participation\, the piece adopts wireless Nintendo Wiimote controllers as its interactive performance interface\, enabling the performer to stand at the “center” of the stage and manipulate the musical structure in real time; this design redefines the dynamic between creator\, performer\, and audience\, turning the performance into a collaborative process where physical movements directly shape sonic evolution. \nAbout the artist\nSunhuimei Xia is Associate Professor of Art and Technology in the Composition Department of the Wuhan Conservatory of Music. Dr. Xia holds a Master’s from Johns Hopkins University and a Doctorate from the University of Oregon (U.S.)\, and was mentored by renowned composers Jian Feng\, Jian Liu\, Geoffrey Wright\, and Jeffrey Stolet.\nThe holder of central and western China’s first DMA in data-driven musical instrument composition and performance\, she focuses on computer music creation and music-technology integration\, with core interests in interactive data-driven instruments\, algorithmic composition\, and data sonification.\nHonored as a Music Entrepreneurship and Innovation Talent by the Ministry of Culture and an Outstanding Young and Middle-Aged Literary and Art Talent by the Hubei Federation of Literary and Art Circles\, she won the Hubei Golden Bianzhong Music Award\, and over 10 of her pieces have been showcased at top global events including ICMC\, ISMIR\, NIME\, SMC\, SEAMUS\, NYCEMF\, EMM\, IRCAM\, WOCMAT and Musicacoustica-Beijing.\nShe released China’s first DVD album of data-driven instrument works\, published by Shanghai Music Publishing House and Shanghai Literature & Art Audio-Video Electronic Publishing House. 
She has guided students to more than 20 domestic and international awards\, leads provincial projects\, and participates in the Ministry of Education’s Humanities and Social Sciences Youth Fund Project\, driving music-technology innovation.
URL:http://icmc2026.ligeti-zentrum.de/event/concert-1a/
LOCATION:Hamburg University of Technology\, Building I\, Audimax 2\, Denickestraße 22\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Concert,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T180000
DTEND;TZID=Europe/Amsterdam:20260511T190000
DTSTAMP:20260423T153918Z
CREATED:20260415T101813Z
LAST-MODIFIED:20260417T114349Z
UID:10000114-1778522400-1778526000@icmc2026.ligeti-zentrum.de
SUMMARY:[Off-ICMC] Concert: Nenad Nikolić – Accordeon meets Techno
DESCRIPTION:Photo: Boris Las Opolski\n  \nNenad Nikolić was born in Serbia and has always been fascinated by his father and grandfather’s accordion playing. But mechanical sounds are from the past. Nenad plays without backing tracks\, performing every single tone live—from “tango to techno.” Don’t miss this chance to see him push the boundaries of his instrument with his electronic accordion.  \nNo registration required  \n  \nThe Off-ICMC\nMusic is what brings us together\, even when everything else pulls us apart.\nMusic everywhere—it is part of our everyday lives. And yet\, we’re hearing it performed live on analog instruments less and less. Instead\, it often reaches us through speakers or headphones\, as files\, from the cloud. What does music mean to you? What does it sound like today? Where does it begin—and where does it end?\nThe ligeti center invites you to listen more closely and discover new sounds—to explore\, experiment\, and play. This year\, ICMC HAMBURG 2026 revives an old tradition: the Off-ICMC\, a free and accompanying festival curated for the general public and anyone curious about computer music. \nAll Off-ICMC events are free of charge.  \n\n  \n \n 
URL:http://icmc2026.ligeti-zentrum.de/event/off-icmc-concert-nenad-nikolic-accordeon-meets-techno/
LOCATION:Harburg Info\, Hölertwiete 6\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Music,Off-ICMC
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T190000
DTEND;TZID=Europe/Amsterdam:20260511T210000
DTSTAMP:20260423T153918
CREATED:20260421T085527Z
LAST-MODIFIED:20260422T112803Z
UID:10000079-1778526000-1778533200@icmc2026.ligeti-zentrum.de
SUMMARY:Evening Concert 1B
DESCRIPTION:This evening concert marks a special collaboration between the international ICMC community and Hamburg’s music scene. At its center is Ensemble 404 from the Hamburg University of Music and Drama (HfMT). For this occasion\, a video wall will be specially installed in the Friedrich-Ebert-Halle to highlight the synergy between sound and image.\nThe program ranges from intimate solo pieces with computer support to complex ensemble compositions and large-scale video works. \n  \nProgram Overview\nFantasy for Viola and Computer\nRichard Dudas \nNeuro Translation Engine\nVincenzo Russo \nClimate II for piano and computer \nRikako Kabashima \nWind Blown Rain\nMara Helmuth et al. \nDelicate Anticipation\nKotoka Suzuki \nAir-Carving Bamboo\nYu Chung Tseng \n  \nAbout the pieces & artists\nRichard Dudas: Fantasy for Viola and Computer\nThis work for solo viola and real-time audio processing in Max is a composed extension of some prior improvisational works using Max. It was written in part as an exploration of Bohlen-Pierce tuning (in the electronics)\, which divides the perfect twelfth into thirteen unequal justly-tuned steps. The viola part is pitted against this\, performing in standard twelve-equal-steps-to-the-octave tuning\, juxtaposing and combining several different musical fragments\, each with its own character and mood. All sounds in the electronics are live: they are derived from the sounds of the on-stage violist. Max audio processing includes formant filtering to provide a vocal quality to the transposed and resonated viola sounds. \nAbout the artist\nRichard Dudas holds degrees in Music Composition from The Peabody Conservatory of Music of the Johns Hopkins University\, and from The University of California\, Berkeley. He additionally studied at the Franz Liszt Academy of Music in Budapest\, Hungary\, and the National Regional Conservatory of Nice\, France. 
In addition to composing music for acoustic instruments\, he has been actively involved with music technology since the late 1980s. As a computer musician\, he has taught courses at IRCAM\, and developed musical tools for Cycling ’74. Since 2007 he has been teaching music composition and computer music at Hanyang University in Seoul\, Korea. \n  \nVincenzo Russo: Neuro Translation Engine\nIn the future\, global societies remain marked by a multitude of languages\, dialects\, idiolects\, and diverse phonetic and cultural systems. Despite advances in AI-driven translation\, fundamental limits persist in the loss of emotional nuance\, imprecise interpretations\, and gaps between what is said and what is perceived. A team of computational linguists and neuroscientists develops an advanced artificial entity: the Neuro Translation Engine (NTE)\, capable of surpassing traditional textual or acoustic translation. The NTE does not translate words\, but the neural intentions behind language. It stimulates a specific area of the human brain\, the resonance cortex\, designed to receive universal neurosensory patterns. The result is a world where everyone can speak their native language while perfectly understanding others. Linguistic diversity is not diminished but enriched through mutual comprehension. The composition for ensemble and electronics illustrates how the NTE processes\, transforms\, and reconstructs communicative material. Through sound transformation techniques\, the acoustic material is dematerialized\, representing the machine’s “internal work”: the conversion of complex signals into a unified code. The final sound is entirely electronic\, devoid of recognizable references to the original ensemble. It forms a new language\, perceived as a pattern directly interpreted by the brain. 
\nAbout the artist\nVincenzo Russo (1995) holds a bachelor’s degree in Business Administration from the University of Naples “Parthenope.” He began his musical studies in Composition for Visual Media at the San Pietro a Majella Conservatory in Naples under the guidance of the late Maestro Lucio Lo Gatto. In July 2025\, he completed the second-level degree (Master’s degree) in Composition. Alongside his academic work\, he is active as a composer\, arranger\, and music producer\, working from his own recording studio. \n  \nRikako Kabashima: Climate II for piano and computer \nThis work was composed based on a variety of ideas inspired by climate change. In recent years\, translating insights from the natural world into my own compositions has become an important experiment in my creative practice.\nIn particular\, this piece draws inspiration from the rapid climate fluctuations caused by global warming\, a pressing issue worldwide. Each measure in the work is specified in seconds rather than traditional beats\, and there is no fixed meter. Within each measure\, rhythms are performed improvisationally according to the given duration.\nThis approach allows for different rhythms and nuances to emerge in every performance\, reflecting the ever-changing nature of the climate itself. \nAbout the artist\nRikako Kabashima was born in Kagoshima\, Japan\, in 1996. She began studying piano at the age of three and later pursued composition at Senzoku Gakuen College of Music in Tokyo. After completing her undergraduate studies in 2021\, she entered the master’s program in composition at Toho College of Music\, where she studied with Kazuro Mise and Hitomi Kaneko\, and explored computer music under the guidance of Takayuki Rai. 
She earned her master’s degree in March 2025.\nHer works have been selected for international festivals including the New York City Electroacoustic Music Festival (NYCEMF) in 2023\, and the International Computer Music Conference (ICMC) in 2023\, 2024\, and 2025. \n  \nMara Helmuth et al.: Wind Blown Rain\nWind Blown Rain was inspired by natural processes and forces involving water. Water metamorphoses between many opposing states: from a gentle drizzle to a stormy downpour\, from a tiny droplet to a crashing ocean. Life on earth is dependent on water\, and also at its mercy. This piece focuses mainly on the transformed sounds of rain. Samples were recorded in Venice and Ascea\, Italy. The music was composed in Italy in the summer of 2025 at the Wassard Elea artists’ residency in Ascea by a computer music composer and a performer/real-time composer. While most of our collaborations have relied solely on the sound of the performer’s instrument for the computer part\, in this piece the instrumentalist interacts primarily with music created from natural recordings and their processed transformations. A third artist created the video part in response to the music from his own water-related video recordings. \nAbout the artists\nMara Helmuth (b. 1957)\, an internationally known computer music composer and researcher\, received a Guggenheim Fellowship in 2025. Her research explores sonification\, granular synthesis\, wireless sensor networks\, Internet2\, and RTcmix. She is Professor at the College-Conservatory of Music\, University of Cincinnati\, where she received the George Rieveschl Award for Scholarly/Creative Works in 2023. She has served on the International Computer Music Association board of directors and as its President. She holds a D.M.A. from Columbia University and earlier degrees from the University of Illinois at Urbana-Champaign. 
\nEsther Lamneck\, Clarinet and Tarogato\nThe New York Times calls Esther Lamneck “an astonishing virtuoso.” She has appeared as a soloist with major orchestras\, with renowned chamber music artists\, and with an international roster of musicians from the new music improvisation scene. http://www.estherlamneck.com/ \nAlfonso Belfiore is a composer and visual artist whose work explores the relationships between sound\, image\, movement\, and perception. A former professor of electronic music at the Conservatories of Florence and Padua\, he has collaborated with international institutions\, creating performances\, sound installations\, and multidisciplinary projects that merge musical innovation with digital art. His recent work investigates memory\, dreamlike space\, and the fragile line between reality and imagination. \n  \nKotoka Suzuki: Delicate Anticipation\nThis work is written as part of the series “In Praise of Shadows\,” inspired by Junichiro Tanizaki’s essay of the same title\, written at the birth of the modern era in imperial Japan. The essay describes how shadows and negative space are integral to traditional Japanese aesthetics in music\, architecture\, and food\, extending even to the design of everyday objects. As Tanizaki explains\, “We find beauty not in the thing itself but in the patterns of shadows\, the light and the darkness\, that one thing against another creates… Were it not for shadows\, there would be no beauty.” \nThe first work in the sequence\, “In Praise of Shadows” for three paper players and electronics\, focuses on the collective loss of the tangible in modern life\, analogous to how the excessive illumination of Edison’s modern light affected Japanese aesthetics and culture. Following this work\, “Orison” is composed for three music box players and electronics. 
The work is further inspired by the voices of children of war\, both past and present\, speaking and singing about hope and peace as well as sorrows arising from their personal experiences. These melodies\, presented as empty spaces on the music score\, are revealed as they are fed through the music boxes. \nIn the third part of the sequence\, “Delicate Anticipation\,” written for a solo percussionist\, electronics\, and lights\, shadow is the central focus\, honouring the “patterns of shadows\, the light and the darkness\, that one thing against another creates”. Positioned behind the scrim\, the percussionist is only visible as a shadow while performing with lights and instruments primarily of metal and skin\, manipulating patterns of carefully choreographed shadows. The title derives from the English translation of the essay\, which describes the sensation of gazing at the silent liquid in the dark depths of a Japanese lacquerware bowl. As Tanizaki writes\, “What lies within the darkness one cannot distinguish…. …the fragrance carried upon the vapor brings a delicate anticipation.” \nAbout the artists\nKotoka Suzuki’s work engages deeply with the visual\, conceiving of sound as a physical form to be manipulated through the sculptural practice of composition. Artists such as the Arditti Quartet\, Eighth Blackbird\, Nouvel Ensemble Moderne\, and Mendelssohn Chamber Orchestra (Leipzig) have featured her work internationally through numerous venues and broadcasts\, including BBC Radio 3\, Schweizer Radio\, Lucerne Festival\, Heroines of Sound Festival\, Ultraschall\, and ZKM Media Museum. Suzuki is currently an Associate Professor at the University of Toronto. \nMichael Murphy is a Chinese-Canadian percussionist praised by The New York Times\, Opera Canada\, and The Herald. 
He has toured across North America\, Europe\, Scandinavia\, and Asia\, performing with ensembles including the Toronto Symphony Orchestra\, the National Ballet of Canada Orchestra\, and the Philharmonisches Orchester Freiburg. A leading advocate for new music\, he has premiered concertos by Alice Ping Yee Ho\, Liam Ritz\, and Bob Becker\, and champions contemporary repertoire internationally. \n  \nYu Chung Tseng: Air-Carving Bamboo \n“Air-Carving Bamboo Music” premiered at the 2025 C-LAB Sound Arts Festival DIVERSONICS. The work is an acousmatic/electroacoustic piece. The material comes from the composer’s field recordings of bamboo colliding on the shores of Emei Lake in his hometown of Hsinchu County in Taiwan. Through editing and transformation in DAW software\, and by incorporating feedback material from the AI Somax 2 on some of the bamboo collision rhythms\, the material was finally organized into an electroacoustic piece.\nIn terms of performance style\, the composer wanted to set the work apart from traditional\, purely playback electroacoustic music\, creating a synesthetic aesthetic experience for both ears and eyes and making the electroacoustic music visible.\nThe composer invited percussionist Hsieh Yi-chieh to wave glow sticks in the dark\, as if drawing out or sculpting the electroacoustic music in air\, a technique akin to “grabbing music from a distance.” This presentation method\, besides giving electroacoustic music a performative quality\, greatly enhances the visual and auditory appeal and the sonic dramatic tension of the performance. Postscript: Having composed electroacoustic music for more than two decades\, the composer occasionally wants to venture into this area\, slightly transcending the “sound-only\, purely auditory” aesthetic and philosophical view of acousmatic/electroacoustic listening. 
\nAbout the artist\nYu-Chung Tseng\, who received his DMA from the University of North Texas\, is a professor of electronic music composition and director of the multichannel sound lab at the Institute of Music\, National Yang Ming Chiao Tung University (NYCU)\, Taiwan. \nHis music\, written for both acoustic and electronic media\, has been recognized with selections and awards from the Pierre Schaeffer International Computer Music Competition (1st Prize/2003)\, the Città di Udine International Contemporary Music Competition\, Musica Nova (First Prize/2010)\, Metamorphoses\, the International Computer Music Conference (ICMC\, Best Music Award/2011/2015/2022)\, the Taukay Edizioni Musicali call for acousmatic music (Winner/2019)\, the RMN Classical electroacoustic call for works (Winner/2023)\, the Polish International Electroacoustic Music Competition (Finalist/2023)\, and the KLANG International Acousmatic Composition Competition (Second Prize/2023). \n 
URL:http://icmc2026.ligeti-zentrum.de/event/concert-1b/
LOCATION:Friedrich-Ebert-Halle\, Alter Postweg 34\, Hamburg\, 21075\, Germany
CATEGORIES:11-05,Concert,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T213000
DTEND;TZID=Europe/Amsterdam:20260511T233000
DTSTAMP:20260423T153918
CREATED:20260421T145800Z
LAST-MODIFIED:20260423T123235Z
UID:10000067-1778535000-1778542200@icmc2026.ligeti-zentrum.de
SUMMARY:Club Concert 1C
DESCRIPTION:Immerse yourself in a 20.8-channel sound world: in the Production Lab of the Ligeti Center\, neural synthesis\, artificial intelligence\, and interactive visuals merge into an immersive live experience. International artists present innovative prototypes—from AI-augmented string instruments to dynamic graphic scores. \n  \nProgram Overview\nZwischenheit \nRiccardo Ancona \nKnitting\nBrian Lindgren \nSonic Memories: A Live Coding Performance with Machine-Learned Sound Fragments\nRiccardo Mazza \nGradient Noise: Animated Scores with Corresponding Data Streams\nJohn C.S. Keston \nFluid Ontologies\nNicola Leonard Hein and Viola Yip \nOn The Edge\nKasey Pocius \nScarittera – Subterranean Eruptions of Sonic Memory\nDanilo Randazzo \n\n\n  \nAbout the pieces & artists\nRiccardo Ancona: Zwischenheit \nCosmologies 3 situates the listener inside a virtual grand piano to experience its secret inner life. The piano interior\, recorded with a spherical microphone array\, is complemented by three-dimensional (3-D) field recordings of Paris’s Place Igor Stravinsky. These recordings are highlighted and underlined with computer synthesis using artificial intelligence (AI) to reproduce the spatial presence of acoustic instruments\, while the microcosm of the piano’s inner space expands larger-than-life. \n\nCosmologies 3 is part of a modular series of works that use AI to inform sound spatialization. The situated spatial presence of musical instruments has been well studied in the fields of acoustics and music perception research\, but so far has not been the focus of research on AI and music. Cosmologies seeks to “re-embody” recorded sound using data derived from natural acoustic phenomena in an immersive sonic environment where real and virtual sources blend seamlessly. 
Cosmologies 3 for Ambisonic fixed media may be performed on its own or directly following Cosmologies for piano and 3-D electronics\, with the fixed media work beginning as the live performer leaves the stage. Although the human–AI interaction in the fixed work is no longer live\, it remains as a trace of the work’s creation process\, refracting the human performer’s presence behind the spatial audio recordings (see Fig. 1). \nCosmologies is among the first works to connect audio descriptor analysis and corpus-based synthesis to 3-D spatialization using Higher-Order Ambisonics (HOA) and machine learning (ML). At the same time\, it is the first project connecting the computer programs Max\, Python\, and OM# (Bresson et al. 2017) with the associated packages Spat (Carpentier 2018) and Mubu (Schnell et al. 2009). These software tools are used to draw upon natural acoustic phenomena as source material for spatial sound derived from two sources: one is a 3-D microphone array\, the EM32 Eigenmike by mh acoustics (https://mhacoustics.com/products)\, a 32-channel array used to capture 3-D piano samples as well as ambient field recordings. The other source is generative spatial sound synthesis produced through ML of an existing large database of radiation measurements for acoustic instruments (Shabtai et al. 2017; Weinzierl et al. 2017). This database serves as a training set for ML models to control spatially rich 3-D patterns for electronic synthesis. These two sources of spatial sound are intentionally overlapped and fused so the listener cannot easily distinguish or segregate the sources. The aesthetic goal is to create a setting for curious and detailed listening\, where one may not discern the “sleight of hand” between the superposed 3-D spaces of the sample recordings and computer synthesis. \nAbout the artist\nAaron Einbond’s work explores the intersection of instrumental music\, field recording\, sound installation\, and interactive technology. 
He released portrait albums Cosmologies with the Riot Ensemble\, Without Words with Ensemble Dal Niente\, and Cities with Yarn/Wire and Matilde Meireles. His awards include a Giga-Hertz Förderpreis\, a Guggenheim Fellowship\, and artistic-research residencies at IRCAM and ZKM. He teaches music composition and technology at City St George’s\, University of London. \n  \nBrian Lindgren: Knitting \nKnitting is a new work for the EV\, an augmented bowed string instrument that integrates IRCAM’s RAVE (Realtime Audio Variational autoEncoder) neural synthesis model. The composition explores how machine learning can extend the timbral vocabulary of a traditional gestural practice—not by imposing external sonic material\, but by folding the instrument’s own acoustic identity back through a neural lens. \nThe EV combines a 3D-printed body with four infrared optical pickups whose signals are processed by a Bela board and transmitted to a laptop running Pure Data. Each string controls an independent synthesis engine comprising convolution\, physical modeling\, granular processing\, reverb\, and ambisonic spatialization. The recent addition of RAVE introduces a self-referential pathway: the model was trained on four hours of the EV’s own recordings\, creating a system that listens to itself through learned representations of its sonic history. \nCentral to this integration is a control strategy that maps performance descriptors—fundamental frequency\, amplitude\, and spectral centroid—to specific dimensions of the model’s eight-dimensional latent space. By constraining each modulation source to a single latent dimension\, the relationship between gesture and neural response becomes legible: a shift in bow pressure or position translates into a navigable timbral trajectory rather than an opaque transformation. 
This approach distinguishes the EV from other RAVE-integrated instruments\, which often emphasize loop-based or tabletop interfaces rather than continuous bowed-string control. \nKnitting treats this latent space as a landscape of sonic possibility\, each dimension a potential resonance between physical gesture and synthesized response. The compositional process is less one of arranging fixed materials than of cultivating emergent textures—drawing out sonic filaments\, crossing and interlacing them\, balancing tensions across the tapestry. The neural model functions as a meta-resonator: a parallel pathway that refracts the instrument’s timbral identity through an alternate causal route\, revealing aspects of its sound that remain latent in conventional electroacoustic processing. \nThe work demonstrates how neural synthesis can be embedded within a hybrid instrument ecology\, extending expression beyond pitch and amplitude to make performance descriptors direct agents of timbral transformation. By grounding latent navigation in the acoustic features of bowed-string technique\, Knitting positions machine learning not as a replacement for embodied practice but as an expansion of its expressive range. \nAbout the artist\nBrian Lindgren (b. 1983) is a composer\, researcher\, violist\, and instrument builder whose work explores the convergence of acoustic performance and digital synthesis through the EV\, a hybrid string instrument integrating lutherie and embedded computing. \nHis compositions and research have been featured at the International Computer Music Conference (ICMC)\, the New Interfaces for Musical Expression (NIME) conference\, the Conference on Neural Information Processing Systems (NeurIPS)\, the Society for Electro-Acoustic Music in the United States (SEAMUS)\, IRCAM Forum\, and the International Conference on Auditory Display (ICAD)\, and his work has been published in Organised Sound. 
His work has been performed by ensembles including HYPERCUBE\, LINÜ\, Popebama\, and Tokyo Gen’on Project. \nThe EV was a finalist in the 2026 Guthman Musical Instrument Competition and was used to compose ‘two tales from the shadows of the grid\,’ which won first place at the IEEE Big Data 2025 3rd Workshop on AI Music Generation Competition. \nLindgren holds an MFA in Sonic Arts from Brooklyn College (Subotnick\, Geers\, Gimbrone)\, a BA from the Eastman School of Music (Graham)\, and is pursuing a PhD at the University of Virginia (Burtner). \n  \nRiccardo Mazza: Sonic Memories: A Live Coding Performance with Machine-Learned Sound Fragments \nDrawing from Henri Bergson’s concept of durée and Deleuze’s rhizomatic models\, “Sonic Memories” reimagines memory not as a linear chronological archive\, but as a stratified field of coexisting planes. In this live coding performance\, autobiographical sound fragments—from mechanical gears to lagoon soundscapes and fragile voices—are liberated from their timeline and reorganized by an autoencoder into a non-hierarchical\, navigable map. \nThe performance begins with the simple act of loading a personal audio file—a field recording from a journey\, a voice memo\, a musical fragment—into a computational system that immediately begins to analyze and reorganize these sonic memories according to its own logic. \nOn stage\, the audience sees everything: the code acting in real time\, a visual map where memories become points in space\, oscilloscopes showing the transformation of sound waves. This transparency is essential—there is no mystification of the technological process\, but rather an invitation to witness the negotiation between human remembering and algorithmic interpretation. 
\nThe performer navigates this latent space using SuperCollider and FluCoMa\, triggering both the original “concrete” traces and their AI-generated “distorted echoes.” The algorithm serves not as an autonomous agent\, but as a refracting lens\, forcing the performer to negotiate between faithful recall and neural hallucination. The result is a fragile dialogue between the fixity of the past and the malleability of the present\, exploring how computational tools can actualize memory as a living\, reconstructive act. \nThe work asks: How do we perform memory in an age of machine learning? Not by having machines remember for us\, but by creating dialogues with computational systems that reorganize our experiences according to their own logic\, forcing us to rediscover our own histories through unfamiliar maps. \nAbout the artist\nRiccardo Mazza (b. Turin\, 1963) is a composer\, multimedia artist\, and faculty member at the Scuola di Alto Perfezionamento Musicale di Saluzzo. He collaborates with SMET (Electronic Music School) at the Conservatorio di Torino and the Conservatorio Ghedini in Cuneo\, and is internationally recognized for his research in psychoacoustics and spatial audio.\nIn 1997 he began a collaboration with Franco Battiato\, focusing on new technologies for sound. Between 1999 and 2000 he created the Renaissance SFX library\, the first Dolby Surround-encoded spatial effects and field recording collection for cinema and television. 
He later developed SoundBuilder\, software for object-based surround design presented at AES 2003 in San Francisco\, which anticipated Dolby Atmos.\nHe founded Interactive Sound in 2001\, a research studio dedicated to multimedia exhibitions and immersive installations\, and in 2003 patented a psychoacoustic model of “sleep waves.” With Laura Pol\, he co-founded Project-TO (2015)\, an electronic and visual project that has released four albums and appeared at major festivals including TFF\, TJF\, Robot\, and the Share Festival.\nSince 2018 he has directed Experimental Studios in Turin\, one of Europe’s leading Dolby Atmos recording facilities. His current project\, Sonic Earth\, explores environmental sonification and algorithmic composition and has been presented internationally at ICMC 2025 in Boston\, FARM/SPLASH 2026 in Singapore\, SBCM 2025 (Brazil)\, and IEEE 2025 (L’Aquila). \n  \nJohn C.S. Keston: Gradient Noise: Animated Scores with Corresponding Data Streams\nSince 2019 I have been composing animated graphic scores for ensembles and soloists. These generative works are projected for both the performers and the audience to experience. Custom software runs during the performance to create the computer graphics and geometric forms. Rules are established for how the forms are read\, but improvisation and the emotional response of the performer still play an integral part in each piece. Fixed media of this work does not suffice because it lacks the real-time\, generative\, and participatory aspects that create surprise and challenges for the performers. \nMore recently I began composing scores that not only generate animated visuals but also stream corresponding MIDI data that impacts the timbre and signal processing of the electronic instruments used by the performers. The instruments are either hardware-based synthesizers or virtual instruments within a DAW such as Ableton Live. 
One of my recent compositions applies these streams of data to four layers of FM synthesis engines running within the Dirtywave M8\, a technically advanced\, modern hardware tracker. \nMy newest work in progress\, Gradient Noise\, translates values generated by the Perlin noise algorithm into independent layers of seamless loops repeating at variable intervals. These loops are visualised as geometric forms\, abstract visualisations\, and evolving structures. The data generated is innovative because\, although aleatoric\, the values can be tuned to range between slowly moving gradients and rapid\, angular forms. When the sound and visuals are synchronized\, the performer responds not only to the animation but also to the changes in the timbre of their instruments. \nThe debut of Gradient Noise will address the themes of Innovation\, Translation\, and Participation by rethinking the relationships between musicians and machines. By translating the properties of n-dimensional Perlin noise into a musical language\, the piece presents a unified ecosystem with coordinated timbres and geometric forms. The innovation lies in generating a living environment that requires active participation and improvisation\, in contrast to static notation. Ultimately\, the work presents a contemporary model for computer music in which the performer does not simply follow a score but negotiates a path through a responsive\, multi-sensory experience. \nAbout the artist\nJohn C.S. Keston is an award-winning transdisciplinary artist reimagining how music\, video art\, and computer science intersect. His work both questions and embraces his backgrounds in music technology\, software development\, and improvisation\, leading him toward unconventional compositions that convey a spirit of discovery and exploration through the use of graphic scores\, chance and generative techniques\, analog and digital synthesis\, experimental sound design\, signal processing\, and acoustic piano. 
Performers are empowered to use their phonomnesis\, or sonic imaginations\, while contributing to his collaborative work. Keston founded the sound design resource AudioCookbook.org\, where you will find articles and documentation about his projects and research. \nJohn has spoken\, performed\, or exhibited original work at SEAMUS (2025)\, Radical Futures (2024)\, New Interfaces for Musical Expression (NIME 2022)\, the International Computer Music Conference (ICMC 2022)\, the International Digital Media Arts Conference (iDMAa 2022)\, International Sound in Science Technology and the Arts (ISSTA 2017-2019)\, Northern Spark (2011-2017)\, the Weisman Art Museum\, the Montreal Jazz Festival\, the Walker Art Center\, the Minnesota Institute of Art\, the Eyeo Festival\, INST-INT\, Echofluxx (Prague)\, and Moogfest. In 2017 he was commissioned by the Walker Art Center to compose music for former Merce Cunningham. He has appeared on more than a dozen albums\, including solo and collaborative works. \nNicola Leonard Hein and Viola Yip: Fluid Ontologies\nIn “Fluid Ontologies”\, Transsonic (Nicola Leonard Hein and Viola Yip) continue to expand their intermedial artistic practice in performance. For this project\, they developed their laser feedback instruments\, using lasers as sound sources and solar panels as microphones. With the incorporation of multichannel spatialization\, Transsonic extends the spatial dimensions\, sonically and visually\, creating a unique audiovisual experience. The project explores and defines new concepts of the instrumentality of light in audio circuits\, bringing together space\, bodies\, and instruments in a dynamic feedback system. \nAbout the artists\nDr. Nicola L. 
Hein is a sound artist\, guitarist\, composer\, researcher\, programmer\, and professor of Sound Arts and Creative Music Technology at the University of Music Lübeck.\nHe works with AI-assisted human-machine interaction\, postdigital lutherie\, intermedia\, sound installations\, augmented reality\, network music\, and spatial audio. His works have been realised in more than 30 countries\, at festivals such as the MaerzMusik Festival\, the Sonica Festival\, and Experimental Intermedia. \nDr. Viola Yip is an experimental performer\, sound artist\, and instrument builder.\nHer work has been presented and supported by institutions such as Stanford University\, UC Berkeley\, Harvard University\, Cycling ’74 Expo\, Hong Kong Arts Centre\, Academy of Media Arts Cologne\, Academy of the Arts Berlin\, KTH Royal Institute of Technology Sweden\, Elektronmusikstudion EMS Stockholm\, NOTAM Oslo\, Arter Museum Istanbul\, Serralves Museum of Contemporary Art Porto\, and the Pinakothek der Moderne in Munich. \nviolayip.com \n  \nKasey Pocius: On The Edge \nOn the Edge is an audiovisual work for video\, T-Stick\, and surround sound. It explores sounds and images of objects often at the edges of our perception\, as well as processing and results from edge cases in musical algorithms and technology. \nThe piece consists of four interlayered vignettes\, exploring the behaviour and textural qualities of various edge and peak detection algorithms to create the fixed media. These files are then used as the corpus for the granular synthesis controlled by the T-Stick. The gestural data from the T-Stick is sent from Max to Ossia\, where it is used to manipulate the treatment of the video clips in real time. \nThe technical aspects of the work consist of a fixed-media ambisonic file\, with real-time manipulation of video clips (in Ossia Score) and multichannel granular synthesis (in Max) controlled by the T-Stick. 
\nAbout the artist\nKasey Pocius is a gender-fluid intermedia artist and researcher based in Montreal\, teaching at Concordia and active with CIRMMT\, IDMIL\, LePARC\, and GRMS. They create electroacoustic and audiovisual works that explore interactive electronics\, spatial sound and collaborative improvisation\, with pieces programmed globally from DIY spaces to Harvard. \n  \n\n\n\n 
URL:http://icmc2026.ligeti-zentrum.de/event/club-concert-1c/
LOCATION:ligeti center\, Production Lab (10th floor)\, Veritaskai 1\, Hamburg\, 21079\, Germany
CATEGORIES:11-05,Club Concert,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260512T110000
DTEND;TZID=Europe/Amsterdam:20260512T173000
DTSTAMP:20260423T153918
CREATED:20260421T181755Z
LAST-MODIFIED:20260421T181755Z
UID:10000185-1778583600-1778607000@icmc2026.ligeti-zentrum.de
SUMMARY:Listening Room 2
DESCRIPTION:Inner Line\nHyewon Kim \nA Portrait of Kwesi Brookins\nRodney Waschka \nBiomimicry\nChun-Han Huang \nDear Beginner\nVadim D. Genin \nEncircled\nAdam Stanovic \nmight have seen\nTakumi Harada \nSilence\nZiyu Pang \nTemporal Shards\nRay Tsai \nWhen An Android Becomes Obsolete\nGiancarlo Alfonso
URL:http://icmc2026.ligeti-zentrum.de/event/listening-room-2-2/
LOCATION:Hamburg University of Technology\, Building A (A 0.14)\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:12-05,Listening Room,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260512T110000
DTEND;TZID=Europe/Amsterdam:20260512T173000
DTSTAMP:20260423T153918
CREATED:20260421T184536Z
LAST-MODIFIED:20260421T185534Z
UID:10000180-1778583600-1778607000@icmc2026.ligeti-zentrum.de
SUMMARY:Listening Room 1
DESCRIPTION:Seething Field: Imprint\nSam Wells \nContours of Anxiety\nZihan Wang and Wenxin Zhou \nDreams of the Jailed Refugee\nRobert Sazdov \nflusso_sonoro_1\nSebastiano Naturali \nGalactic Railroad\nYunze Mu \nIdeale Landschaft Nr. 6\nClemens von Reusner \nIncarnations\nYoungjae Cho \nInformation Body Horror\nPrimrose Ohling \nJardín de Luz\nIván Ferrer-Orozco \nNon è un atlante di traiettorie algo-siderali\nAndrea Laudante\, Paolo Montella and Giuseppe Pisano \nNor’wester\nTeerath Majumder \nOcean Reflection\nYu Qin \nrain contained\, rain contains…\nWei Yang \nSAW\nGabriel Araújo \nWild Fruits: Epilogue\nJames Harley \nInside the metal plate\nRaul Masu and Francesco Ardan Dal Ri
URL:http://icmc2026.ligeti-zentrum.de/event/listening-room-1-2/
LOCATION:Hamburg University of Technology\, Building A (A 0.18)\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:12-05,Listening Room,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260512T133000
DTEND;TZID=Europe/Amsterdam:20260512T150000
DTSTAMP:20260423T153918
CREATED:20260421T165721Z
LAST-MODIFIED:20260422T112139Z
UID:10000176-1778592600-1778598000@icmc2026.ligeti-zentrum.de
SUMMARY:Lunch Concert 2A
DESCRIPTION:The second lunch concert of ICMC HAMBURG 2026 takes listeners on a journey through different cultures and technological approaches. The focus is on transformation: how are traditional instruments\, natural sounds\, or even everyday noises reinterpreted through the lens of computer technology and artificial intelligence?\nThe international composers are once again partly supported by Hamburg’s Ensemble 404\, which bridges the gap between academic composition and vibrant performance. \n  \nProgram Overview\nSprinkle\nHuixin Xue \nLate Shift \nBenjamin Broening \nFall and Rise\nWan Heo \nSqueakeasy \nJonathan Wilson \nI dreamed of Naïma \nChristopher Dobrian \nFree-Wheelerish (a movement from the suite Things Ain’t What They Used To Be)\nMark Whitlam \n  \nAbout the pieces & composers\nHuixin Xue: Sprinkle\nThis piece seeks to explore new timbres and performance techniques for the pipa\, aiming to integrate the language of electronic music with the instrument’s sound in order to present a novel acoustic effect.\nThe pipa uses an unusual tuning: A\, #D\, E\, #G. \nAbout the artists\nComposer: Huixin Xue\nPipa Performer: Yinghan Liu\nComputer Music Designer: Shihong Ren \nHuixin Xue is a Chinese composer\, music producer and Music AI researcher. She is a Ph.D. candidate in Music AI at the Shanghai Conservatory of Music and an exchange student at the Hamburg University of Music and Theatre. She earned both her bachelor’s and master’s degrees from the Music Engineering Department of the Shanghai Conservatory of Music.\nHer pieces have won numerous awards\, including an Honorable Mention at the 2024 Sound Chain International Electronic Music Composition Competition (the only Chinese winner among the 6 winners worldwide). Her work was presented at the 2025 ICMC. Her pieces have been performed at major festivals. 
She has also participated in over twenty commercial music creation projects.\nDuring her doctoral studies\, she participated in the development of the AI Music Therapy Pod at the Shanghai Conservatory of Music\, co-developed SongEval\, the first aesthetic evaluation dataset for AI-generated songs\, and contributed to organizing the Automatic Song Aesthetic Evaluation Challenge at ICASSP 2026. \n  \nBenjamin Broening: Late Shift\nLate Shift explores the liminal light of dusk as shadows lengthen\, the bright colors of day darken\, and the familiar world is gradually transformed. A comparable transformation takes place in Late Shift: the flute and electronics slowly descend to lower registers over the course of the piece as flute sounds are gradually replaced by whispering percussion sounds in the electronics. \nAbout the artist\nBenjamin Broening’s music has been called “adventurous\, thoughtful\, eloquent\, and disarmingly direct.” His orchestral\, choral\, chamber and electroacoustic music has been performed in over twenty-five countries and across the United States by many soloists and ensembles. \nBroening is a recipient of Guggenheim\, Howard and Fulbright Fellowships\, and has also received recognition and awards from the American Composers Forum\, Virginia Commission for the Arts\, ACS/Andrew Mellon Foundation\, the Jerome Foundation and the Presser Music Foundation\, among others. \nTrembling Air\, a Bridge Records release of his chamber music recorded by Eighth Blackbird\, has been praised as “haunting” and “enchanting” (Cleveland Plain Dealer)\, “magical” (Fanfare)\, “other-worldly” (Gramophone)\, and “coruscatingly gorgeous” (CD Hotlist). Critics have called Recombinant Nocturnes\, a disc of music for piano recorded by Duo Runedako\, “breathtaking” (World Music Report) and “deep\, troubling” (François Couture). 
Nineteen other pieces have been released by Ensemble U: in Estonia and on the Centaur\, Everglade\, Equilibrium\, MIT Press\, Oberlin Music\, Open G\, Métier\, New Focus\, Ravello and SEAMUS record labels. \nBroening is founder and artistic director of Third Practice\, an annual festival of electroacoustic music at the University of Richmond\, where he is Professor of Music. He holds degrees from the University of Michigan\, Cambridge University\, Yale University\, and Wesleyan University. \n  \nWan Heo: Fall and Rise\nFall and Rise is the second episode of my previous solo cello piece\, When It Falls. Drawing from the same inspiration\, the fallen leaves in a variety of colors and shapes on the ground at Jeolmul Forest on Jeju Island\, Korea\, this version for amplified violin and electronics focuses more on the timbre of the instrument\, particularly transitions between normal playing and harmonics\, different fingerings\, and how they create different textures and sonorities. \nRecordings of When It Falls and field recordings from Jeolmul Forest were processed using modular synthesis\, lending a certain atmosphere to the piece. Pitch and rhythmic materials for the violin were extracted from spectral analysis of the recordings\, which gives sonic coherence to the three different sound sources. \nAbout the artist\nWan Heo is a Korean-born composer based in Chicago. Her works have been performed internationally in South Korea\, Germany\, Italy\, Singapore\, Spain\, and throughout the United States. Her percussion solo Unveiled Future is published by Alfonce Production. \nWan’s music has been commissioned and featured by Darmstädter Ferienkurse\, SEAMUS\, Yarn/Wire\, and VIPA\, among others. She received an Honorable Mention for the Christine Clark/Theodore Front Prize in the IAWM New Music Search. 
\nHer doctoral dissertation explores the vulnerability of South Korea’s sonic environments through field recordings made at Buddhist mountain monasteries. Works from this project have been presented at NYCEMF\, the Composition in Asia Conference\, and NSEME. \nWan is a Visiting Assistant Professor at Wake Forest University. She holds a B.M. in Composition from Ewha Womans University and an M.M. in Composition from Florida State University. She is currently ABD in the Ph.D. program in Composition and Music Technology at Northwestern University\, where she works under the guidance of Alex Mincek\, Stephan Moore\, and Jay Alan Yim. \n  \nJonathan Wilson: Squeakeasy \nSqueakeasy was written for Maja Cerar during the COVID-19 pandemic\, from late 2020 to the early summer of 2021. The composition was conceived from the composer’s accidental discovery of a metallic chair that was loosely bolted to a metal patio set and could pivot in such a way as to create an ear-piercing\, yet irresistible screech. The timbral qualities of that chair intrigued the composer\, who set out to determine the various sonic transformations that could be realized after recording that initial sound; this quickly led to pairing the electronics with the violin because of the timbral similarities observed between them. Additional recordings of squeaky wooden surfaces\, such as a wooden chair and floorboards\, were included to enhance the timbral relationships between violin and electronics. The decision to explore these timbral relationships was partly inspired by Denis Smalley’s “Base Metals”\, relating metal-based and wood-based sound families in the electronics to different violin timbres and extended techniques such as col legno\, glissando\, tremolo\, pizzicato\, ricochet\, and natural and artificial harmonics. 
The structure of this composition alternates between sections for performer and electronics and cadenzas for amplified violin\, so that the work could loosely be described as a concertino for amplified violin built on the virtuosic elements of the violinist’s performance. The sound of the violin is amplified throughout the work by the electronic performer’s patch\, programmed in Max/MSP. The performer of the electronics triggers each instance of fixed media from the laptop\, while the violinist follows both the score and a counter/timer displayed on a separate computer monitor. \nAbout the artist\nDr. Jonathan Wilson’s works have been performed at the Ann Arbor Film Festival\, European Media Art Festival\, ICMC\, SICMF\, SEAMUS\, NYCEMF\, MUSELAB\, NSEME\, Napoleon Electronic Music Festival\, Iowa Music Teachers Association State Conference\, and Midwest Composers Symposium. He is the winner of the 2014 Iowa Music Teachers Association Composition Competition. Jonathan has studied composition with Lawrence Fritts\, Josh Levine\, David Gompper\, James Romig\, James Caldwell\, Paul Paccione\, and John Cooper. In addition\, he has studied conducting with Richard Hughey and Mike Fansler. Jonathan is a member of the Society of Composers\, Inc.\, SEAMUS\, ICMA\, and the Iowa Composers Forum. \n  \nChristopher Dobrian: I dreamed of Naïma\nI Dreamed of Naïma for vibraphone and interactive computer system references a composition by John Coltrane in fragmented and distorted fashion\, as if recollected in a dream. The computer program\, written in Max for Live\, senses the sound of the vibraphone and algorithmically adds its own sounds to extend and elaborate the instrumental sound. The 7-minute piece mixes composition and improvisation\, with the computer performing interactively and responsively (with no attending technician needed)\, such that each performance is unique. 
\nAbout the artist\nChristopher Dobrian is Professor Emeritus of Integrated Composition\, Improvisation\, and Technology in the Department of Music\, with a joint appointment in the Department of Informatics\, at the University of California\, Irvine. He is a composer of instrumental and electronic music and has taught courses in composition\, theory\, and computer music. He conducts research on the development of artificially intelligent interactive computer systems for the cognition\, composition\, and improvisation of music. He has published technical and theoretical articles on interactive computer music\, and is the author of the original reference documentation and tutorials for the Max\, MSP\, and Jitter programming environments by Cycling ’74. He holds a Ph.D. in Composition from the University of California\, San Diego\, where he studied composition with Joji Yuasa\, Robert Erickson\, Morton Feldman\, and Bernard Rands\, computer music with F. Richard Moore and George Lewis\, and classical guitar with the Spanish masters Celin and Pepe Romero. Dobrian has been an invited Fulbright specialist at the Korea National University of Arts\, the University of Paris-Sorbonne\, McGill University in Montreal\, and the Accademia Chigiana in Siena\, and has been a guest professor at Yonsei University\, National Taiwan Normal University\, University of Paris 8\, and the National University of Quilmes in Argentina. \n  \nMark Whitlam: Free-Wheelerish (a movement from the suite Things Ain’t What They Used To Be)\nThis movement from a longer suite\, titled in reference to Duke Ellington’s big-band jazz classic released over sixty years ago\, offers a gentle provocation\, contrasting traditional approaches to jazz improvisation with emerging paradigms in human–AI interaction. Combining real-time machine learning and deep learning tools\, the piece stages a live collaboration between improvising human musicians and generative AI agents. 
Central to the work is a subversion of the established technique of the contrafact\, whereby new melodies are composed over pre-existing chord progressions. Here\, the process is inverted: AI agents are tasked with reharmonising composed melodic lines\, thereby disrupting the expected harmonic framework. This indeterminacy both encourages and challenges the performers to find new musical responses. \nLeveraging technologies including Somax2\, RAVE\, Mosaïque\, and Google MediaPipe within MaxMSP\, the system enables algorithmic agents to act as both collaborative and disruptive partners in the performance loop. These agents generate unexpected musical gestures and offer novel\, interactive visual and audible modalities that stimulate and provoke the performers. The result is an evolving musical language that emerges from the entangled dynamics of this extended network of human and machine improvisers. \nAbout the artist\nMark Whitlam has been a professional musician for 25 years\, having toured internationally with UK jazz luminaries including Andy Sheppard\, Iain Ballamy and Jason Rebello (Sting) and Mercury Prize Nominee Eliza Carthy. Recent collaborations have included work with Adrian Utley (Portishead) and Will Gregory (Goldfrapp). His compositions and performances have received airplay on BBC Radio 2\, 3\, 6 and Jazz FM\, with TV credits including HBO’s miniseries Industry. Mark teaches in the UK at Bath Spa University and BIMM University\, where he is a senior lecturer. He is mid-stage in his PhD in Composition at the University of Bristol\, UK\, exploring the affordances offered by generative AI agents in the liminal space between composition and improvisation. He also has a keen interest in the links between actor network theory and 4E cognition in the space of human–AI-mediated music-making. \n\n 
URL:http://icmc2026.ligeti-zentrum.de/event/lunch-concert-2a/
LOCATION:Hamburg University of Technology\, Building I\, Audimax 2\, Denickestraße 22\, Hamburg\, 21073\, Germany
CATEGORIES:12-05,Concert,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260512T190000
DTEND;TZID=Europe/Amsterdam:20260512T210000
DTSTAMP:20260423T153918
CREATED:20260421T170217Z
LAST-MODIFIED:20260422T114058Z
UID:10000219-1778612400-1778619600@icmc2026.ligeti-zentrum.de
SUMMARY:Evening Concert 2B
DESCRIPTION:This Evening Concert promises a special experience for both eyes and ears. At the center of this session is the saxophone\, performed by one of the most distinguished artists of our time: Hamburg-based saxophonist Asya Fateyeva. Together with her talented students\, she presents five works specially conceived for her and her instruments.\nThis instrumental focus is complemented by two striking video works\, presented on the specially installed video wall in the FEH\, which dissolve the boundaries between sonic and visual space. \n  \nProgram Overview\nAdaptive_Study#06 – Symbolic Structures Enhanced\nRiccardo Dapelo \nExpandiere \nChing Lam Chung \nSilent “human bird language” \nYongbing Dai and Yiping Bai \nPoetic Encounter with the digital shadow\nNicolas Kummert \nResonant Thresholds\nCecilia Suhr \nJamshid Jam \nJean-Francois Charles and Ramin Roshandel \n  \nAbout the pieces & the artists\nRiccardo Dapelo: Adaptive_Study#06 – Symbolic Structures Enhanced\nAdaptive_Study#06 – Symbolic Structures Enhanced (composed in 2025) is the sixth work in a series of compositional studies initiated in 2015. Across the series\, the research has addressed all parameters of the musical work\, from its initial conceptual design to score realization\, from performance instructions to the algorithmic and compositional conception of the live electronics. The present study continues this trajectory\, focusing on the idea of a musical work whose temporal form is not fixed or closed in advance.\nThe primary artistic objective is to explore an adaptive musical form capable of responding to performer behavior while maintaining stylistic coherence. Rather than relying on predefined formal trajectories or stochastic indeterminacy\, the piece investigates adaptive processes grounded in symbolic structures\, memory\, and performer–system interaction.\nTo this end\, the work is conceived around several interrelated principles. 
The live electronics system continuously observes and analyses the performer’s actions\, extracting symbolic information related to pitch\, duration\, density\, and temporal grouping. These data are transcribed into fragments of symbolic notation\, which are stored\, transformed\, and recontextualised during performance. In this way\, the system simulates a form of temporal awareness\, operating on a short-term memory of what has been played and produced up to the present moment.\nA central concern of the piece is the control of musical density. At both the microstructural and macrostructural levels\, the system adapts its behaviour in response to changing performance conditions\, modulating the accumulation\, suspension\, or release of events in order to avoid entropic saturation. The live electronics do not function as an autonomous generator\, but rather as a responsive musical partner whose actions become perceptible over extended time spans.\nThe work is interactive in a dialogical sense: neither the performer’s actions nor the system’s responses are fully predetermined. Instead\, musical form emerges from the ongoing negotiation between human and algorithmic agency. Symbolic structures serve as a shared medium through which this interaction unfolds\, allowing the electronics to operate not only on sound\, but on compositional representations. Given the complexity of these ambitions\, the piece is explicitly conceived as a study. This format allows experimentation through hypotheses\, testing\, and retrodiction\, acknowledging that artistic practice does not follow a strictly scientific method. 
As Paul Veyne observes\, the artwork—however rigorously conceived—ultimately resists definitive classification\, reflecting the variability and unpredictability inherent in human creative processes.\nThe piece is presented together with an accompanying paper that documents its conceptual and compositional development.\nThe live electronics setup is described in the score.\nLive recording (earlier version): https://soundcloud.com/riccardodapelo/adaptive_study02 \nAbout the artist\nRiccardo Dapelo (b. 1962) studied composition with G. Manzoni and A. Vidolin. His work focuses on acoustic and electronic composition\, live electronics\, and interactive systems\, and has been performed internationally. He has published articles and lectured on voice analysis\, spatialisation\, philosophy of art\, and musical time. He collaborates with visual artists on interactive works and sound installations for museum and exhibition spaces. He teaches Composition at the Conservatory of Piacenza. \n  \nChing Lam Chung: Expandiere\nThis piece explores the different sound qualities of the baritone saxophone—from pitched materials to mechanical sounds—and its interaction with electronics\, thereby investigating the sonic hybridity between the instrument and electronic media. Both tape and live electronics are used: the fixed electronics allow sound objects to be precisely organized within the spatial environment\, while the live electronics serve as a bridge between the instrument and the fixed electronics\, enhancing their connections. \nThrough this approach\, the piece creates a unique sonic environment in which different sound objects interact and evolve with one another\, offering the audience a varied auditory experience in which the instrument and electronics fully merge. \nAbout the artist\nCHUNG Ching Lam\, Mavis (b. 05.06.2003)\, was born and raised in Hong Kong. 
Mavis is currently pursuing a master’s degree in music composition at the Frankfurt University of Music and Performing Arts\, under the guidance of Orm Finnendahl and Ulrich Alexander Kreppein. \nMavis’s music thoughtfully explores timbre\, transforming ordinary sounds into unexpected auditory experiences. Her compositions discover the beauty of melancholy as she creates a unique sonic landscape that reflects her philosophy and experiences. \nShe received third prize in the 2nd NC Wong Young Composers Award and was chosen for the electroacoustic composition fellowship at the Delian Academy 2024. She also participated in the URTIcanti contemporary music festival and the Internationales Digitalkunst Festival. Furthermore\, she attended the South China Contemporary Creative Music Institute and has been selected for the Mixed Media category at the iISUONO Contemporary Music Week 2025. Her compositions have been performed in Greece\, Germany\, and Italy. \nShe completed her bachelor’s degree in music composition at Hong Kong Baptist University\, under the guidance of Eugene Birman\, Camilo Mendez\, Stylianos Dimou and Ka Shu Tam. \n  \nYongbing Dai and Yiping Bai: Silent “human bird language” \nThis work\, composed for saxophone and electronic music\, uses the saxophone’s unique multiphonic harmonics\, distinctive timbre\, and various techniques such as tonguing to evoke an effect of ancient human “bird language\,” akin to “abstract writing” incomprehensible to modern humans. It uses this to question the constant self-destruction that occurs on our shared planet. We can consider this: we have entered the age of artificial intelligence\, with highly advanced science and technology. Yet\, even in this civilized context\, for their own benefit\, humans can disregard and kill their fellow human beings. This is utterly absurd and tragic. How is this different from the barbaric slaughter of ancient times? What is the significance of the development of human technology and civilization? 
\nAbout the artists\nDai Yongbing holds a doctorate in Electronic Music Composition from the Shanghai Conservatory of Music. He currently teaches electronic music in the Art and Technology Department of the Composition Department of the Wuhan Conservatory of Music. He was sponsored to study composition and electronic music composition at the Royal Danish Academy of Music\, where he received a master’s degree in composition. In 2023\, he studied sound art at the University of Music and Drama in Munich\, Germany. In 2024\, he was sponsored by the European Union’s Erasmus program to study electronic music composition with Professor Karlheinz Essl at the University of Music and Drama in Vienna\, Austria. The electronic music work “Two Trembling Hearts” won the first prize at the Hangzhou International Electronic Music Festival. In June 2022\, he was selected for the academic class in computer music design and performance at the IRCAM ManiFeste festival at the Centre Pompidou in Paris\, France. His work “Two Worlds of Monks” won the first prize in the UPISketch professional group in the 2022 competition of the Centre Iannis Xenakis (CIX) in France. His wind band work “Non-Taoism” was premiered by the Shenzhen Symphony Orchestra. His works have been performed all over the world\, including Munich and Düsseldorf in Germany\, Amsterdam in the Netherlands\, Vienna in Austria\, Lisbon in Portugal\, Copenhagen in Denmark\, New York in the United States\, Tokyo in Japan\, and Seoul in South Korea. \n  \nNicolas Kummert: Poetic Encounter with the digital shadow\nThis proposal invites saxophonist Asya Fateyeva into an improvisatory performance that explores the encounter between acoustic virtuosity and real-time electronic transformation. The project centres on a live-electronics setup I have developed within artistic research contexts over several years—a system deliberately designed to be simple\, flexible\, affordable\, and fast to deploy. 
It requires only a close microphone (ideally the Vigamusictools Intramic)\, a small audio interface\, a laptop\, and three compact controllers. Its purpose is not to impose effects but to extend the sonic and expressive possibilities of the acoustic instrument while remaining transparent and highly responsive.\nThe concept is straightforward: the saxophone produces the primary musical material\, and I modulate that sound live through controlled timbral\, spectral\, and temporal transformations. The electronics behave as a reactive partner—what I call the performer’s digital shadow: a sonic counterpart that follows\, shapes\, questions\, or briefly detaches from the acoustic gesture. The identity of the acoustic sound remains fully audible\, while the electronic layer opens new directions within the improvisation.\nThe artistic foundations of this work draw on several research frameworks:\n• Improvisation as assemblage (after Deleuze): the performance is approached as a self-emergent system in which performers\, instruments\, digital processes\, acoustics\, and feedback relations act together to shape the form in real time.\n• Paulo de Assis’s Logic of Experimentation: the focus lies on what the instrument–electronics constellation can do when activated through exploratory performance\, rather than on pre-defined material.\n• Georgina Born’s theory of musical mediation: the setup foregrounds the interplay between acoustic sound\, digital transformation\, performer interaction\, and audience perception.\n• Laurent Cugny’s audiotactile perspective: the electronic layer functions as an extension of touch\, gesture\, and micro-timing rather than an external effect. 
The project treats improvisation as a co-embodied process that produces a hybrid sonic entity.\nMusically\, the performance is structured as a series of improvisatory episodes that examine different modes of relationship between acoustic and transformed sound:\n– subtle extensions of timbre and resonance;\n– interactive textures and rhythmical counterpoints between acoustic phrasing and electronic responses;\n– sections where Asya’s sound is heavily transformed in real time\, while the unprocessed acoustic sound is replayed in the pauses of her playing\, blurring the audience’s visual-aural connection and questioning the musician’s immediate relationship to her own instrument.\nBecause the system is lightweight and adaptable\, the collaboration requires limited rehearsal and can be shaped around Asya’s musical language and preferred improvisational strategies. The format proposes an accessible but conceptually rigorous exploration of improvisation\, mediation\, and electronic augmentation. It offers the conference audience a concrete example of how simple\, flexible computer-music tools can generate rich musical dialogues and expand the expressive ecology of the acoustic instrument\, shedding new light on various aspects of improvisation.\nI propose to conclude the performance with a short discussion in which Asya can reflect on how the electronic shadow influenced musical decision-making\, interaction\, and perception—offering insight into the core research questions driving this work. \nAbout the artist\nNicolas Kummert (1979) is a Belgian saxophonist\, electronic artist\, composer and researcher known for his melodic sense\, openness and exploratory approach. He has recorded over 70 albums and performed worldwide with artists such as Lionel Loueke\, Jeff Ballard\, DRIFTER and many others. 
Active in hybrid acoustic–electronic projects\, film and dance music\, and interdisciplinary research\, he develops innovative modulation processes and collaborates across jazz\, poetry\, contemporary dance and African music. \n  \nJoe Wright: Cor Ddiglwed (Unhearing Chorus) \nCor Ddiglwed (unhearing chorus) takes inspiration from Daphne Oram’s ‘Bird of Parallax’\, and was developed with the one-of-a-kind Mini Oramics\, built by Tom Richards based on Oram’s designs for a revised version of her pioneering graphical synthesis machine. \nIn the piece\, the author combines phrases/samples recorded with Oramics with field recordings taken locally to his home in South Wales and live-processed saxophone\, which uses the instrument as input to a phase vocoder designed to mimic the writing / replaying / overwriting process that Mini Oramics facilitates. \nThe piece was written in the context of a highly divisive by-election in which local communities in South Wales saw a sharp rise in populist sentiment and a rise in polarised rhetoric on and offline. While the technical inception of the piece draws heavily on Oram and the legacy of her synthesiser design\, the field recording process at this time highlighted the importance of shapes and forms in captured human and animal voices – seen through an Oramics lens. The piece explores the idea of diverse clashing narrative threads in a fight for attention – as a metaphorical mirror to the author’s recordings of local dawn choruses. Both in the piece and in the context of its composition\, these voices are\, despite their differences\, interconnected by common challenges and under-explored common ground\, yet are broadly unheard by others. \nThe piece forms part of a broader body of recent work that explores Oramics in the context of Oram’s and Iannis Xenakis’ work\, and the ways that their thinking and legacy can apply to contemporary musical composition\, instrument design\, and accessible musical tools and resources. 
\nAbout the artist\nJoe Wright is a musician and maker based in Cardiff\, with an interest in collaborative music making\, field recording\, accessible music technology/practice\, and creative code. As a saxophonist\, Joe is currently playing across the UK and Europe with jazz/contemporary music groups led by Rob Luft\, Corrie Dick\, and in FORJ. He also has a long-standing collaboration – Onin – with experimental musician James L Malone that explores unstable systems and atypical interactions. Recently\, Joe has been exploring field recording with a focus on his local natural spaces in South Wales. \n  \nCecilia Suhr: Resonant Thresholds\nResonant Thresholds explores the liminal space between human expression and technologically mediated sound. Structured around a fixed audio score\, the work unfolds as a slowly transforming audiovisual environment in which live violin performance interacts with real-time electronic processing. Noise\, resonance\, and breath-like textures blur distinctions between acoustic intimacy and digital vastness\, allowing the materiality of sound to become porous and unstable. Through structured live comprovisation (composed improvisation)\, the performer actively shapes the unfolding sonic landscape\, while the processed audio simultaneously generates an evolving visual score that functions as a symbolic translation of sound. The work invites listeners to inhabit a threshold between perception and imagination\, where meaning emerges through the continuous negotiation between composed structure\, live performance\, and technological extension. \nAbout the artist\nCecilia Suhr is an award-winning intermedia artist\, multimedia composer\, researcher\, author\, and multi-instrumentalist (violin\, cello\, voice\, piano\, bamboo flute). Her honors include the Pauline Oliveros Award (IAWM)\, a MacArthur Foundation DML Grant\, the American Prize (Honorable Mention)\, Global Music Awards\, and Best of Competition from BEA\, among other distinctions. 
Her work has been presented at ICMC\, SEAMUS\, NYCEMF\, EMM\, SCI\, ACMC\, Mise-En\, MoXsonic\, and many more. She is a Full Professor at Miami University Regionals. \n  \nJean-François Charles and Ramin Roshandel: Jamshid Jam \nThe sonic dust of a country that has been burned to the ground several times over the centuries and yet has formed some of the most elaborate and highly sophisticated musical structures to have ever existed. According to Persian myths\, Jamshid\, who ruled for several centuries\, was responsible for inventions ranging from the manufacturing of weapons to the mining of jewels to the making of wine. He is also credited with the discovery of music. This is what brought the Jamshid Jam duet together: the search for music at the crossroads of the Radif tradition (Persian classical music) and the development of musical instruments such as the turntable and live electronics. \nAbout the artists\nRamin Roshandel grew up in a family surrounded by artists: his luthier dad\, his painter uncle\, and his setar instructor Farshid Jam had strong influences on him as a teenager. Ramin worked with the renowned Mohammad Reza Lotfi at Maktab-Khāne-ye Mirzā Abdollāh and won second place in the 7th National Youth Music Festival in Tehran\, Iran. As a composer\, Ramin Roshandel works with improvisatory structures to contrast or converge with non-tonal forms. \nJean-François Charles is Associate Professor of Composition and Digital Media at the University of Iowa. He creates at the crossroads of music and technology. As a clarinetist\, he has performed improvised music with artists ranging from Douglas Ewart to Gozo Yoshimasu. He worked with Karlheinz Stockhausen for the world premiere of Rechter Augenbrauentanz.\nRamin Roshandel & Jean-François Charles have worked on several projects together. Roshandel was the setār soloist for the premiere performances of Charles’ opera Grant Wood in Paris in 2019. 
They performed together as part of the live soundtrack composed by Charles and Nicolas Sidoroff to the 1923 Hunchback of Notre-Dame movie\, a commission by FilmScene with premiere performances in November 2023 in Iowa. In 2025\, they composed and performed a series of 13 concerts with the Red Cedar Chamber Music ensemble. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/concert-2b/
LOCATION:Friedrich-Ebert-Halle\, Alter Postweg 34\, Hamburg\, 21075\, Germany
CATEGORIES:12-05,Concert,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260512T213000
DTEND;TZID=Europe/Amsterdam:20260512T233000
DTSTAMP:20260423T153918
CREATED:20260421T150351Z
LAST-MODIFIED:20260423T123138Z
UID:10000068-1778621400-1778628600@icmc2026.ligeti-zentrum.de
SUMMARY:Club Concert 2C
DESCRIPTION:Club Concert 2C invites you to an extraordinary sonic experience in the state-of-the-art Production Lab of the ligeti center. On a specialized 20.8-channel system\, international artists unfold immersive sound worlds ranging from physical gesture to complex AI analysis.\nExperience the synergy of historical depth and futuristic technology—an evening in which the audience quite literally immerses itself in sound. \n  \nProgram Overview\n\n\nDinosaur\, Glitched! \nFernando Lopez-Lezcano \nFause\, Fause\nJules Rawlinson \nLive ‘Shō’ Coding – Algorithmic Improvisation of Aitake Harmonies\nAtsushi Tadokoro \nQuiet Catastrophe Unleashed\nNicola Casetta \nAgain\nJulian Green \nPercepts (excerpt)\nDoron Klant Sadja \nCosmologies 3\nAaron Einbond \n\n\n  \nAbout the pieces & the artists\nFernando Lopez-Lezcano: Dinosaur\, Glitched!  \nThis is another ditty to add to the Dinosaur Songbook\, a music composition and performance project that started when the COVID pandemic kick-started a round of modular synthesizer building. This was a return to my roots\, as I started my discovery of electronic sound by designing and building modular synths from scratch in the late 70’s and early 80’s. \n“Carlitos” is the small Eurorack synth filled with modular goodies that will be used in this performance. It will be helped\, as has become the norm\, by the miniature Kastle\, probably the best birthday present ever\, and the smallest dinosaur I have in my herd. Carlitos houses an eclectic mix of analog\, digital and hybrid modules that has been evolving over several years and many concerts. \nThis round of noises comes courtesy of continued experiments coding in the Droid voltage processor computer language. One addition has been an implementation of Rob Hordijk’s Rungler circuit. 
This is a “low frequency” Rungler as the Droid is not fast enough to process voltages at audio rates\, and while it will never sound like the original\, it does provide a never-ending cornucopia of chaotic behaviors. As it is software\, many additional features were added\, in part to further confuse the performer who has even more knobs and controls to handle\, with the same brain power as before. Many other sources of sound make up the piece\, from complex oscillators with multiple feedback paths to fingers scratching a built-in microphone\, to an emulation of the Radio Music module with additional sampled voices. Various granular synthesis systems play a constant role in the sound universe of the piece. \nAs always all sounds are piped through a Linux computer running SooperLoopy\, a SuperCollider program written by the composer that spatializes sounds dynamically in realtime using HOA (High Order Ambisonics)\, and includes asynchronous loopers with a granular synthesis core that can sample\, replay and process more screaming dinosaur layers than you can count. \nAbout the artist\nFernando Lopez-Lezcano was given a choice of instruments when he was a kid and liked the piano best. His dad was an engineer and philosopher and his mother loved biology\, music and the arts. He studied both music and engineering\, and in his creative artistic work he tries to keep art and science chaotically balanced. He has been working at CCRMA since 1993 and throws computers\, software algorithms\, engineering and sound into a blender\, serving the result over many speakers. He can hack Linux for a living\, and sometimes he likes to pretend he can still play the piano. \nHe built El Dinosaurio (an analog modular synth) from scratch more than 40 years ago\, and it still sings its modular songs. He also loves to distill music from pure software and uses computer languages as scoring tools to carve music from text. 
He returned to realtime performances with an ever-growing modular synthesizer herd\, including the original El Dinosaurio. He was the Edgard-Varèse Guest Professor at TU Berlin in 2008 and has been teaching the “Sound in Space” course at CCRMA for quite a while. He also likes designing and building “things”\, including Ambisonics microphones (the SpHEAR project) and 3-D sound diffusion spaces (the Listening Room and Stage systems at CCRMA\, and our “portable” GRAIL concert speaker array). \nHe feels happiest when playing music and making weird noises\, even better when playing with friends\, and even better on stage. \n  \nJules Rawlinson: Fause\, Fause\nFause\, Fause (c. 7mins) is one scene from an interactive audiovisual work that brings together different strands of creative computing\, sound design and composition. The work combines elements of game audio\, computer music\, traditional Scots folk song and highly detailed virtual landscapes to create an immersive songscape where the player traces the deconstructed ghosts of a song that features heavily processed fragments of the traditional ballad Fause\, Fause sung by Scottish music specialist Lori Watson. These fragments are dispersed throughout the virtual landscape using mixed approaches of fixed and indeterminate elements to create pathways of sound\, sound pathways as desire lines (Bandt 2006)\, encouraging exploration and reflection. The result is a series of speculative sonic narratives that re-sound space and place through what Hernandez (2017) describes as “psycho-sonic cartography”. The work reconsiders electroacoustic soundscape in an interactive medium\, bringing together compositional\, cultural and environmental considerations and makes use of creative applications of game-audio technologies for non-gaming purposes. The work will be performed by the composer across a multichannel audio system to highlight the spatial character and timbral qualities of the work. 
\nAbout the artist\nJules Rawlinson (1969) is an audio-visual composer working in solo and collaborative settings\, and Programme Director for Sound Design at The University of Edinburgh. Recent outputs make innovative use of archival material and corpus-based aesthetics of transformation across interactives\, performances and fixed media works. \n  \nAtsushi Tadokoro: Live ‘Shō’ Coding – Algorithmic Improvisation of Aitake Harmonies\n“Live ‘Shō’ Coding” is an experimental performance that merges the ancient tradition of Japanese Gagaku with contemporary live coding. The title is a play on the homophone between the Japanese instrument “shō” (笙) and the English word “Show.” This pun encapsulates the work’s core intent: to reveal the internal logic of a millennium-old instrument through the transparent medium of real-time programming. \nThe shō is a mouth organ consisting of seventeen bamboo pipes. Unlike Western instruments that often prioritize melody\, the shō is primarily harmonic\, characterized by “aitake” (合竹)—six-note tone clusters that function as static blocks of timbre. Originating from the Chinese “sheng” of the Tang Dynasty\, the Japanese shō has remained structurally unchanged for over 1\,200 years. It serves as a rare instance of “frozen” historical sound\, preserved by the rigid rituals of court music. \nTechnically\, the performance is realized through TidalCycles and SuperCollider. The sound is not pre-recorded but generated via real-time synthesis. Crucially\, the system employs Pythagorean tuning rather than modern equal temperament to replicate the instrument’s pure resonance and distinct intervals. Within this digital environment\, “aitake” clusters are defined as algorithmic patterns\, enabling the performer to improvise with ancient harmonies using computational precision. \nThe musical narrative follows an evolutionary arc from the archaic to the modern. 
The piece begins with a faithful algorithmic reconstruction of traditional Gagaku aesthetics—static\, sustained\, and serene. As the code evolves\, the strict definitions of the “aitake” are deconstructed through stochastic functions\, rhythmic displacements\, and spectral shifts. Consequently\, the organic textures of bamboo dissolve into digital artifacts\, transforming sacred harmony into abstract soundscapes. \nUltimately\, “Live ‘Shō’ Coding” challenges our perception of time. It juxtaposes the cyclic\, non-linear time of Gagaku with the discrete\, clock-based time of the CPU. By subjecting ancient sounds to modern syntax\, the work fosters a dialogue where the “breath of the phoenix” is reimagined through the binary logic of the machine. \nAbout the artist\nAtsushi Tadokoro\nHe is a live coder and creative coder exploring the boundaries of sound and visual art. He serves as an associate professor at Maebashi Institute of Technology and a part-time lecturer at Tokyo University of the Arts and Keio University. \nBorn in 1972\, he creates musical works through algorithmic sound synthesis and performs live improvisations with sound and visuals using a laptop. In recent years\, he has also produced and internationally exhibited numerous audio-visual installation works. \nHis work has been selected for major international conferences\, including the International Computer Music Conference (ICMC) in 2025\, 2024\, 2015\, and 1996; the International Conference on Live Coding (ICLC) in 2025\, 2024\, 2020\, 2019\, 2016\, and 2015; and New Interfaces for Musical Expression (NIME) in 2016. \nHe teaches various courses on creative coding at the university level. His lecture materials\, publicly available on his website (https://yoppa.org/)\, serve as a valuable resource for numerous students and creators. 
\nHe is the author of several books\, including Beyond Interaction: A Practical Guide to openFrameworks for Creative Coding (BNN\, 2020)\, Performative Programming: The Art and Practice of Live Coding – Show Us Your Screens (BNN\, 2018)\, and An Introduction to Creative Coding with Processing: Creative Expression Through Code (Gijutsu-Hyohron\, 2017). \n  \nNicola Casetta: Quiet Catastrophe Unleashed\nQuiet Catastrophe Unleashed is a performance for solo live electronics based on an eight-channel dynamic feedback system. Informed by Stephen Wolfram’s notion that simple iterative rules can generate irreducible complexity\, the work investigates how minimal operations—modulated delays\, adaptive limiting\, nonlinear distortion\, and continuously evolving chaotic equations—produce sonic forms that cannot be predicted or reduced to their initial conditions. The system is activated by a single impulse and evolves through recursive transformations that amplify micro-instabilities into shifting textures and emergent structures. These processes resonate with Deleuze’s conception of becoming: sound as a field of continuous variation rather than a fixed object. The performer navigates this unstable environment in real time\, engaging with a machine whose behavior unfolds at the intersection of determinism and contingency. Quiet Catastrophe Unleashed operates on the edge of chaos\, where sonic order arises through the continual negotiation of instability. \nAbout the artist\nNicola Casetta is a computer musician\, live electronics performer\, and scholar. His work explores sound as a network of relationships—a complex\, interconnected phenomenon that unfolds in an immersive and inclusive way. Through live electronics\, he creates music that captures the essence of the here and now\, embracing spontaneity and the vitality of the moment. 
He uses sound as a medium to investigate new ways of interacting with both the environment and society\, creating spaces for reflection and transformation. His music has been performed at To listen To in Turin (IT)\, SAG in Leicester (UK)\, CNMAT (Berkeley)\, Angelica Festival Bologna\, Festival di Nuova Consonanza Roma (IT)\, Borealis in Bergen (NO)\, Festival DME in Lisbon (PT)\, Festival Zeit für Neue Musik in Rockenhausen (DE)\, Manifeste Ircam in Paris\, Ma/In in Matera (IT)\, 8th FKL Symposium (IT)\, NYCEMF\, ICMC in Athens (GR)\, XX CIM in Rome (IT)\, SoundKitchen (UK)\, Sweet Thunder Festival of Electro-Acoustic Music in San Francisco (US)\, UCSD Music – CPMC Theatre in San Diego (US) and Premio Phonologia in Milan\, among others. \n  \nJulian Green: Again\nAgain is a live electroacoustic performance structured as a stream of consciousness\, in which repeated physical gestures function as both material and form. The performer cycles through a limited set of recurring actions intended to “cradle” a fleeting\, beautiful moment; over time\, this repetition shifts from preservation toward compulsion\, foregrounding the tension between holding on and letting go. These gestural loops accumulate and cross thresholds that trigger new sonic layers\, including processed vocal statements\, musical textures\, and environmental sound events. Rather than presenting discrete movements\, the work unfolds through gradual intensification and release\, emphasizing how replay can simultaneously comfort and erode\, as memory morphs with each return. \nIn the latter portion of the performance\, a recorded spoken message introduces an explicit reflective frame\, calling for interpersonal awareness of desire and a move away from reliance on possessions in recognition of life’s ephemerality. Again uses repetition as a performative engine to examine attachment\, impermanence\, and the unstable fidelity of remembrance. \nProgram Notes: \npast lives Again. 
Lost\, but love lingers lackadaisically through lumbering leaps within another. Foregone are the chains that bind our sense of reason towards another hopeful realization into an unresolved calling. Gone are the worries of the mind that haunts our humanity to bind to desires towards our sense of self\, compressed within a fragment of our lifespan. Only to one day meet the people we cherished deeply\, degrading our memories\, morphing in and out of consciousness within every trickle of sorrow that sheds our being before returning to our \nAbout the artist\nJulian Green is a U.S.-based electroacoustic composer and performer focused on data-driven instruments and live electronics. He has participated in Hypercube Ensemble’s Cubelab workshop\, with works performed and recorded in the U.S. and internationally\, including Sonic Apparitions (Duino\, Italy). Notable works include Sound Waits\, Cherish the Space\, My Festering Synapses\, An Indeterminate Schism\, and We Don’t Unknow. His piece The Inconsistent Continuities was professionally recorded for Hypercube Ensemble and commissioned for the Kingler Electroacoustic Residency (KEAR) at Bowling Green State University. Recent projects include Breakthroughs (Wacom tablet)\, Again (GameTrak controller)\, and If We Could Forget It Gently Together: Vestige Series (custom 3D-printed gyro controller)\, realized at the University of Oregon. Green holds a BM in composition from Arkansas State University and an MM from Bowling Green State University\, and is pursuing a doctorate at the University of Oregon. Influences include Denis Smalley\, Michel Chion\, Trevor Wishart\, Hildegard Westerkamp\, Ryuichi Sakamoto\, and Elaine Lillios. \n  \nAaron Einbond: Cosmologies 3\nCosmologies 3 situates the listener inside a virtual grand piano to experience its secret inner life. The piano interior\, recorded with a spherical microphone array\, is complemented by three-dimensional (3-D) field recordings of Paris’s Place Igor Stravinsky. 
These recordings are highlighted and underlined with computer synthesis using artificial intelligence (AI) to reproduce the spatial presence of acoustic instruments\, while the microcosm of the piano’s inner space expands larger-than-life. \nCosmologies 3 is part of a modular series of works that use AI to inform sound spatialization. The situated spatial presence of musical instruments has been well studied in the fields of acoustics and music perception research\, but so far has not been the focus of research on AI and music. Cosmologies seeks to “re-embody” recorded sound using data derived from natural acoustic phenomena in an immersive sonic environment where real and virtual sources blend seamlessly. Cosmologies 3 for Ambisonic fixed media may be performed on its own or directly following Cosmologies for piano and 3-D electronics\, with the fixed media work beginning as the live performer leaves the stage. Although the human–AI interaction in the fixed work is no longer live\, it remains as a trace of the work’s creation process\, refracting the human performer’s presence behind the spatial audio recordings (see Fig. 1). \nCosmologies is among the first works to connect audio descriptor analysis and corpus-based synthesis to 3-D spatialization using Higher-Order Ambisonics (HOA) and machine learning (ML). At the same time\, it is the first project connecting the computer programs Max\, Python\, and OM# (Bresson et al. 2017) with the associated packages Spat (Carpentier 2018) and Mubu (Schnell et al. 2009). These software tools are used to draw upon natural acoustic phenomena as source material for spatial sound derived from two sources: one is a 3-D microphone array\, the EM32 Eigenmike by mh acoustics (https://mhacoustics.com/products)\, a 32-channel array used to capture 3-D piano samples as well as ambient field recordings. 
The other source is generative spatial sound synthesis produced through ML of an existing large database of radiation measurements for acoustic instruments (Shabtai et al. 2017; Weinzierl et al. 2017). This database serves as a training set for ML models to control spatially rich 3-D patterns for electronic synthesis. These two sources of spatial sound are intentionally overlapped and fused so the listener cannot easily distinguish or segregate the sources. The aesthetic goal is to create a setting for curious and detailed listening\, where one may not discern the “sleight of hand” between the superposed 3-D spaces of the sample recordings and computer synthesis. \nAbout the artist\nAaron Einbond’s work explores the intersection of instrumental music\, field recording\, sound installation\, and interactive technology. He released portrait albums Cosmologies with the Riot Ensemble\, Without Words with Ensemble Dal Niente\, and Cities with Yarn/Wire and Matilde Meireles. His awards include a Giga-Hertz Förderpreis\, a Guggenheim Fellowship\, and artistic-research residencies at IRCAM and ZKM. He teaches music composition and technology at City St George’s\, University of London. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/club-concert-2c/
LOCATION:ligeti center\, Production Lab (10th floor)\, Veritaskai 1\, Hamburg\, 21079\, Germany
CATEGORIES:12-05,Club Concert,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260513T110000
DTEND;TZID=Europe/Amsterdam:20260513T173000
DTSTAMP:20260423T153918
CREATED:20260421T182305Z
LAST-MODIFIED:20260421T190140Z
UID:10000186-1778670000-1778693400@icmc2026.ligeti-zentrum.de
SUMMARY:Listening Room 2
DESCRIPTION:Crown Shyness\nJeonghun Hyun \nEntomology#2\nThanos Polymeneas-Liontiris \nJetlag – Time Difference\nRay Tsai \nLazy whirls of glow\nJuan J.G. Escudero \nPumma\nEmilio Casaburi \nQuivering Silk\nYi-Hsien Chen \nSonic Echoes of Ink\nPingting Xiao \nThe Luminosity of the Yugen Mist\nXiaoyu Su \nVocalise\nPak Hei Leung \n 
URL:http://icmc2026.ligeti-zentrum.de/event/listening-room-2-3/
LOCATION:Hamburg University of Technology\, Building A (A 0.14)\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:13-05,Listening Room,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260513T110000
DTEND;TZID=Europe/Amsterdam:20260513T173000
DTSTAMP:20260423T153918
CREATED:20260421T185112Z
LAST-MODIFIED:20260421T185112Z
UID:10000181-1778670000-1778693400@icmc2026.ligeti-zentrum.de
SUMMARY:Listening Room 1
DESCRIPTION:Perseverance: An Artist Rendering\nMikel Kuehn \nThe Archival of Memory in Skin\nJoan Tan \n#paris\nTaito Fushimi \nAsymmetric Stamina\nAndreas Weixler \nCHAOTIC ITINERANCY\nWonseok Choi \nCorrosion Chamber\nHector Bravo Benard \nDew\nTom Bañados Russell \nFully Automated Luxury Music (selected tracks)\nFelipe Tovar-Henao \nLein\nKim Hedås \nOn the transparency of seeing through\nSean Peuquet \nThe Eternalist Paradox\nJuan Carlos Vasquez \nUnwritten Glow\nWen-Chia Lien \nVox Die\nTomás Koljatic S. \nWhale Song Stranding\nDavid Nguyen \nWhere am I in the Universe?\nHanae Azuma \nDroplet\nJong Gyun Kim
URL:http://icmc2026.ligeti-zentrum.de/event/listening-room-1-3/
LOCATION:Hamburg University of Technology\, Building A (A 0.18)\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:13-05,Listening Room,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260513T133000
DTEND;TZID=Europe/Amsterdam:20260513T153000
DTSTAMP:20260423T153918
CREATED:20260421T161440Z
LAST-MODIFIED:20260422T115641Z
UID:10000085-1778679000-1778686200@icmc2026.ligeti-zentrum.de
SUMMARY:Lunch Concert 3A
DESCRIPTION:Concert 3A offers a fascinating stage for the Steinway Spirio—the world’s most advanced self-playing piano system. In this session\, the piano is taken far beyond its traditional role: it acts as an autonomous performer\, a controller\, and even an interface for human brain activity. \n  \nProgram Overview\nElevator Pitch\nJuan Vassallo \nChant\nYoonjae Choi \nMulholland Revisited \nHéloïse Garry \n“Empathic Machines” for One Pianist’s Mind and Disklavier™\nMasatsune Yoshio \nVoici que la saison décline\nMikako Mizuno \nExplode to Survive \nRichard Scott \n  \nAbout the pieces & artists\nJuan Vassallo: Elevator Pitch\nPhilosopher Hartmut Rosa suggests that our society is characterized by acceleration due to rapid technological advancements\, leading to constant time shortages. As we adapt to quick updates via smartphones and social media\, communication becomes faster and more fragmented\, favoring brief\, direct forms like the elevator pitch. An elevator pitch is a short summary speech meant to convey ideas or products within the duration of an elevator ride. It is aimed at being clear and persuasive to a wide audience.\nIn politics\, new communication techniques exploit these brief\, impactful messages\, often oversimplifying complex issues and lacking depth. Such strategies have been criticized for manipulating public opinion and stirring emotions\, leading to biased and divisive rhetoric that can aid authoritarian or intolerant movements.\nThe piece places an artistic focus on these contemporary methods of communication\, such as the elevator pitch\, and the potential for manipulation of sound-bite content by political figures. The piece is thus a sardonic analogy to a political speech\, portrayed here as empty of substance: a construct derived from carefully crafted algorithmic rhetoric and the sonification of spoken phrases. 
Additionally\, nonsensical political speeches synthesized through commercial text-to-speech systems are used as sound material for the electronics. \nAbout the artist\nJuan Sebastián Vassallo is an Argentinian composer and live-electronics performer based in Bergen\, Norway. He holds a Ph.D. in Artistic Research from the University of Bergen. His artistic research explores human–computer interaction in art creation\, at the intersection of computer-assisted composition\, artificial intelligence\, algorithmic poetry\, generative visuals\, and live electronics. \nHis music has been performed internationally by ensembles and soloists including Projecto RED (Argentina)\, Quasar Saxophone Quartet (Canada)\, Hinge Quartet (USA)\, Vocal Ensemble Tabula Rasa (Norway)\, Edvard Grieg Kor (Norway)\, JÓR Saxophone Quartet (Scandinavia)\, Zone Experimental Basel (Switzerland)\, and Lucas Fels (Germany)\, among others. \nHis work has received multiple awards\, including first prize at the AI-based composition contest at the IEEE Conference on Big Data (Washington\, D.C.) for Oscillations (iii). Other distinctions include selections and awards from the National Endowment for the Arts (Argentina)\, ISCM/Chengdu River Sun Prize (China)\, and several contemporary art competitions. \nHe has received international grants from UNESCO-Aschberg and the Organization of Ibero-American States (IBERMÚSICAS)\, supporting artistic residencies in the United States. His practice is strongly collaborative and interdisciplinary\, and alongside his experimental work\, he maintains an active career as a tango pianist and arranger. \n  \nYoonjae Choi: Chant\nChant is a live electronic work that transforms the cello through vowel-based formant processing\, creating a hybrid vocal–instrumental language reminiscent of primordial voice. 
As part of a broader research project on real-time live electronics formant synthesis\, the piece explores how electronic modulation can expand instrumental identity and shape emotive\, multi-voiced textures. \nAbout the artist\nYoonjae Choi is a South Korean composer whose work explores the musical potential of extended tones and spectral qualities drawn from both traditional instruments and non-instrumental materials. His compositional practice focuses on integrating acoustic sound with live electronics\, soundscapes\, and computer-based technologies. He frequently collaborates across media arts and experimental music disciplines. \nHe studied with Richard Dudas at Hanyang University and with John Gibson and Chi Wang at Indiana University. He is currently pursuing a doctoral degree in composition at the University of North Texas\, studying with Panayiotis Kokoras. His music and research have been featured at international conferences and festivals. \n  \nHéloïse Garry: Mulholland Revisited\nMulholland Revisited is an interactive composition for Yamaha Disklavier / MIDI keyboard and ChucK\, integrating real-time interaction between acoustic and electronic elements. By leveraging MIDI input\, the piece enables the piano to function as both a performer and a controller\, triggering ChucK-generated sound textures in response to live performance. \nInspired by a pivotal phone conversation in Mulholland Drive (Lynch\, 2001)\, the work explores the blurred boundary between dream and reality through a dynamic interplay between piano-generated material and algorithmic sound synthesis. The electronic elements emerge as an extension of the piano’s acoustic voice\, reinforcing the psychological tension that defines the narrative arc. An homage to David Lynch\, the piece mirrors his fascination with fractured identities and surreal atmospheres\, immersing the listener in a sonic landscape that expands the piano’s traditional interface into new musical and narrative dimensions. \nAbout the artist\nHéloïse Garry is an artist working at the intersection of filmmaking\, theater\, and performance\, exploring the aesthetics of totality across art forms. Her compositions reflect a deep interest in cross-cultural and linguistic experimentation\, and sonic storytelling. Her work has been presented at ICMC\, NIME\, NYCEMF\, ICAD\, Audio Mostly\, the Audio Engineering Society\, and the Internet Archive. As a Yenching Scholar at Peking University\, she researched the politics of independent Chinese cinema and the role of music in the films of Jia Zhangke. An artist-in-residence at Gray Area and the Mozilla Foundation in San Francisco\, she has collaborated with IRCAM and the Columbia Computer Music Center\, and explored the sonification of the universe under the mentorship of physicist Brian Greene. In September 2024\, she joined Stanford’s Center for Computer Research in Music and Acoustics (CCRMA)\, where she studies with Mark Applebaum\, Paul DeMarinis\, and Ge Wang. Héloïse holds bachelor’s degrees in Filmmaking\, Economics\, and Philosophy from Columbia University\, Sciences Po\, and Sorbonne University. \n  \nMasatsune Yoshio: Empathic Machines\nWhat lies beyond the pianist’s technical skill — music in which body and mind are fully integrated?\nIn this work\, a pianist’s brainwaves are sensed using the EMOTIV Insight device\, and the data is processed in Max 9 to generate performance information that is transmitted and played by a Disklavier™ piano.\nThrough this body-extended expression\, the resulting piano music — beyond human hand alone — becomes a speculative answer to the question posed above. \nAbout the artist\nMasatsune Yoshio (1972- ) was born in Kobe. 
He is a composer and Media Master No. 75. He specializes in composing fine-art pieces with computers; his work builds on the creation of\, and research into\, algorithmic composition\, sound synthesis\, live electronics\, and expression with information technologies. His electroacoustic pieces have been performed within and outside of Japan. He is an associate professor at Showa University of Music. \n  \nMikako Mizuno: Voici que la saison décline\, for clarinet and electronics\nThe electronic part of this piece comprises sound files containing grains of different pitches and sizes\, all of which are derived from clarinet performance. These grains are placed in the sound field by the Spat program and diffused through a cube-shaped multi-channel system. The submitted version is rendered into four channels. The solo clarinet is required to produce special tone colours using multiphonic techniques\, breath tones\, harmonic colour trills\, etc. The subtle timbre of the instrument connects the minute changes in visual colours and the passing of time\, which were depicted in a poem by Victor Hugo.\nThe title of this piece comes from one of Hugo’s poems. At the end of summer\, the season seamlessly transitions to autumn. The bright blue sky turns grey\, the birds shiver and the grass feels cold. I tried to create sounds that reflect these slight changes and delicate nuances.\nThe clarinet’s multiphonic sound is enhanced by harmonised breath tones. The harmonisation\, realized by special signal processing\, involves not only layered pitches\, but also the filtering of noisy long breaths. In the performance\, especially in the latter half of the piece\, Max for Live is needed to ensure effective interaction between the clarinet player and the electronic part\, which must realize the notated musical ensemble. 
The instrumentalist can play the piece from conventional musical notation\, because notated guides in the electronic part indicate the tempo and the nuance of each phrase\, particularly in the latter half of the piece. The instrumentalist is sometimes asked to catch the unpitched\, noisy electronic sounds during a fermata or rest. \nAbout the artist\nComposer/Musicologist. Mainly active in Japan\, her music has been heard in many places including France\, Germany\, Austria\, Hungary\, Italy\, and the Republic of Moldova\, and at international festivals and conferences such as ISEA\, ISCM\, EMS\, Musicacoustica\, WOCMAT\, NIME\, ICMC\, and NYCEMF. Her pieces range from orchestra\, chamber music\, vocal ensemble\, and traditional Japanese instruments (sho\, koto\, shakuhachi\, no-flute\, biwa\, etc.) to networked remote performance over IPv6. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/lunch-concert-3a/
LOCATION:Hamburg University of Technology\, Building I\, Audimax 2\, Denickestraße 22\, Hamburg\, 21073\, Germany
CATEGORIES:13-05,Concert,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260513T193000
DTEND;TZID=Europe/Amsterdam:20260513T210000
DTSTAMP:20260423T153918
CREATED:20260415T121938Z
LAST-MODIFIED:20260421T201129Z
UID:10000121-1778700600-1778706000@icmc2026.ligeti-zentrum.de
SUMMARY:[Off-ICMC] Concert | Florentin Ginot: "Disturbance"
DESCRIPTION:Photo: Florentin Ginot\n  \n“Disturbance” is an audiovisual solo performance that blends elements of concert\, video art\, and theater. With his double bass and analog synthesizers\, Florentin Ginot invites the audience on a live nocturnal journey. Past and present collide with ghostly glitches and pulsating electronic rhythms.  \nregistration required here \n  \nThe Off-ICMC\nMusic is what brings us together\, even when everything else pulls us apart.\nMusic everywhere—it is part of our everyday lives. And yet\, we’re hearing it performed live on analog instruments less and less. Instead\, it often reaches us through speakers or headphones\, as files\, from the cloud. What does music mean to you? What does it sound like today? Where does it begin—and where does it end?\nThe ligeti center invites you to listen more closely and discover new sounds—to explore\, experiment\, and play. This year\, ICMC HAMBURG 2026 revives an old tradition: the Off-ICMC\, a free and accompanying festival curated for the general public and anyone curious about computer music. \nAll Off-ICMC events are free of charge.  \n\n  \n \n 
URL:http://icmc2026.ligeti-zentrum.de/event/off-icmc-concert-florentin-ginot-disturbance/
LOCATION:Stellwerk Hamburg\, Hannoversche Straße 85\, Hamburg\, 21079\, Germany
CATEGORIES:13-05,Concert,Music,Off-ICMC
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260513T213000
DTEND;TZID=Europe/Amsterdam:20260513T233000
DTSTAMP:20260423T153918
CREATED:20260421T162148Z
LAST-MODIFIED:20260422T121025Z
UID:10000088-1778707800-1778715000@icmc2026.ligeti-zentrum.de
SUMMARY:Club Concert 3C
DESCRIPTION:Concert 3C is an exploration of the boundaries of collective improvisation and creative technology. The SPIIC Ensemble of the HfMT Hamburg presents a program in which the audience has a say\, algorithms extend historical works\, and artificial intelligence reinterprets human movement as a “hallucination.”\nIn the industrial atmosphere of the Speicher am Kaufhauskanal\, acoustic instruments merge with live coding\, neural synthesis\, and interactive notation. \n  \nProgram Overview\nLiquid tensioning\nFernando Egido \nSinophony for Clarence\nJuan Arturo Parra Cancino \nChimerique\nJonathan Wilson \nNEBULA\nEnrique Tomás and Moisés Horta Valenzuela \nplastique\nSe-Lien Chuang and Andreas Weixler \nShamanic Protocol\nOscar Corpo \nA Walk in Polygon Field\nRob Canning \nDEPRECATED\nDenis Polec Vocal \n  \nAbout the pieces & artists\nFernando Egido: Liquid tensioning\nLiquid Tensioning is a work for violin and double clarinet\, live notation\, live generative system\, live electronics\, and audience participation (category: improvised work for ensemble and electronics (SPIIC+ Ensemble)). It is a collaborative\, interactive piece created in real time through its own self-evaluation: attendees evaluate the work via a web app\, and the musical generative system changes according to that evaluation as it happens. The musicians receive notes via a live notation system on their mobile phones. The title refers to the model of tension provided by the generative system\, a musical tension that is independent of the properties of the musical material. The piece belongs to a series in which the composer builds a self-referential musical generative system driven by the real-time evaluation of the work; its main musical material is its own evaluation. Duration: about 10 minutes. 
\nAbout the artist\nHe studied composition with José Luis de Delás at the School of Music of the University of Alcalá de Henares and received musical training in workshops with composers\, analysts\, and performers around the LIEM and the GCAC. He studied computer music with Emiliano del Cerro.\nHe has published several papers at international conferences.\nHis works have been performed at festivals such as ICMC 2023\, 2024\, and 2025\, the Bled International Festival\, the SMC Conference in Graz\, the Convergence Festival\, Ars Electronica Linz\, the Atemporánea Festival\, the AIMC 2022 conference\, EVO 2021\, the OUA Electroacoustic Music Festival 2020\, ISMIR 2020 in Montreal\, the Seoul International Electroacoustic Music Festival 2019\, the ACMC 2019 conference in Melbourne\, the SID 2015 conference in New York\, Venice Vending Machine III\, the New York City Electroacoustic Music Festival\, JIEN in the Auditory 400\, La hora acúsmatica\, the SMASH Festival\, the Encontres Festival in Palma de Mallorca\, and ACA. \n  \nJuan Arturo Parra Cancino: Sinophony for Clarence\nSinophony for Clarence is an ensemble and live electronics work inspired by the formal and sonic principles of Clarence Barlow’s Sinophony I (1970)\, his first electronic composition. Rather than functioning as an arrangement or transcription\, this piece operates as an instrumental extension of Barlow’s electronic sound world\, translating and reactivating its core materials through acoustic performance and real-time electronic processes. \nThe work seeks to bring into the physical space of performance elements that\, in Sinophony I\, exist only in fixed media: continuous tones\, slow harmonic transformations\, beating frequencies\, and the perceptual tension between purity and instability. These characteristics are reimagined here as a living\, performative situation\, where instrumental sound and electronics merge into a single\, evolving spectral body. 
\nSinophony for Clarence builds on methods developed by Juan Parra Cancino to extract performative salients from early electronic works—elements that can be embodied\, negotiated\, and reshaped by performers in real time. Through this approach\, the piece revisits historical electronic material not as an object to be preserved unchanged\, but as a dynamic field for exploration\, experimentation\, and renewed artistic engagement. The aim is not reconstruction\, but continuation: to recover underlying processes and extend their implications into contemporary performance practice. \nBy situating acoustic instruments\, live electronics\, and spatialized sound within a shared listening ecology\, the work foregrounds collective tuning\, timbral fusion\, and emergent beating phenomena as central musical forces. The ensemble functions less as a group of independent voices than as a composite oscillator\, shaped by subtle interactions and shared attention. \nThis piece is conceived as a tribute to Clarence Barlow—composer\, educator\, and friend—honoring both his pioneering contributions to electronic music and his enduring influence on ways of thinking about sound\, structure\, and musical intelligence. \nAbout the artist\nJuan Parra Cancino studied Composition at the Catholic University of Chile and Sonology at the Royal Conservatoire The Hague\, where he completed a Master’s degree in electronic music. He received a PhD from Leiden University in 2014 on performance practice in computer music. A guitarist trained in Robert Fripp’s Guitar Craft\, he has worked extensively in live electronics. He is a researcher at the Orpheus Institute and Regional Director for Europe of the International Computer Music Association (2022–26). \n  \nJonathan Wilson: Chimerique\n“Chimerique” is about the interaction of music and language. Written and premiered in 2017\, this composition is for an ensemble featuring improvisation\, narration\, and electronics. 
It was realized in collaboration with poet and translator Patricia Hartland\, incorporating her English translation of “Ravines of Early Morning” by Raphael Confiant into a musical setting. The title is taken from a word in this text. It is French for “chimerical\,” which can be defined as 1: something that takes delight in illusions\, or 2: something that is utopian or unreal. The narrator forms associations with this word through various phrases and passages that relate to the part of the story in which the description of “chimerique” is elaborated. Throughout the performance\, the performers listen and react to the text spoken by the narrator (and to the electronics). They are accompanied by electronics consisting of fixed media and live electronics from two different Max/MSP patches using additive and granular synthesis. The musical instruments are the source material for the granular synthesis. The score uses hybrid musical notation: some traditional notation for pitch and some graphic notation. Performers interpret not only the spoken phrases but also the graphic notation in their parts to determine volume\, pitch\, rhythm\, articulation\, and contour\, making improvisation a necessity. The narrator and performers work together to generate a spontaneously formed work that marries text and music. The form can be described as through-composed in six sections. In the first section the performers respond only to a single phrase. In sections 2–6 they respond not only to the phrases that delineate each section but also to extended narration\, shifting from descriptions of dreams\, the night\, madness\, and illusions to\, at the end\, the act of dreaming itself. \nAbout the artist\nDr. 
Jonathan Wilson’s works have been performed at the Ann Arbor Film Festival\, European Media Art Festival\, ICMC\, SICMF\, SEAMUS\, NYCEMF\, MUSELAB\, NSEME\, Napoleon Electronic Music Festival\, Iowa Music Teachers Association State Conference\, and Midwest Composers Symposium. He is the winner of the 2014 Iowa Music Teachers Association Composition Competition. Jonathan has studied composition with Lawrence Fritts\, Josh Levine\, David Gompper\, James Romig\, James Caldwell\, Paul Paccione\, and John Cooper. In addition\, studies in conducting have been taken under Richard Hughey and Mike Fansler. Jonathan is a member of Society of Composers\, Inc.\, SEAMUS\, ICMA\, and the Iowa Composers Forum. \n  \nEnrique Tomás and Moisés Horta Valenzuela: NEBULA\nArtists working with deep-learning audio models often find that exploring their high-dimensional latent spaces requires chance-based\, combinatorial\, or technically complex machine-learning techniques. While these approaches can reveal unexpected possibilities\, they also make it more difficult to deliberately guide the models toward outcomes that are musically meaningful or aligned with specific creative intentions. \nIn this improvisation for solo instrument and two performers on live electronics\, we present an alternative approach to create a more interpretable and musically guided latent space exploration. This approach leverages Principal Component Analysis (PCA) applied to pre-encoded RAVE (Realtime Audio Variational Autoencoder) representations to reorganize the latent data into clusters that can be navigated more deliberately in performance. PCA reorganizes the encoded data into clusters based on shared timbral characteristics\, producing data clouds directly connected to the sonic properties of the source material. 
By structuring access to the latent space in this way\, our method bridges the gap between open-ended exploration and purposeful control\, offering performers a clearer and more intuitive means of shaping sound. \nTo prepare the improvisation\, and prior to the concert\, the solo instrumentalist provides an eight-minute recording that defines the sonic domain of the performance. This recording is encoded and analyzed\, restricting exploration to regions of the latent space shaped by the performer’s own material and giving the electronic musicians a more focused and musically coherent landscape to navigate. During the live performance\, the solo instrumentalist and the two electronic performers interact within this PCA-organized timbral map. Their trajectories through the latent space—along with the evolving clusters and sonic transformations—are projected in real time\, allowing the audience to see how latent-space navigation corresponds to audible change. \nThe musical materials resulting from this setup combine structured instrumental improvisation with electronically generated textures derived from latent-space navigation. While the overall form is left to real-time decisions between the soloist and the live performers\, the resulting sound world often alternates between rhythmically driven motifs—loosely recalling the interactive dynamics of small jazz ensembles—and more abstract electronic layers shaped through PCA-guided trajectories. These electronic textures\, produced by traversing clustered regions of the latent space\, serve as harmonically and timbrally evolving fields against which the soloist can articulate phrasing\, gesture\, and dynamic contour. The custom-built performance interfaces allow the electronic performers to shape these materials with precision\, enabling a responsive interplay in which acoustic action and machine-learned transformations continually inform one another. 
\nAbout the artists\nEnrique Tomás (*1981) is a sound artist\, researcher\, and assistant professor at the Tangible Music Lab who dedicates his time to finding new ways of expression and play with sound\, art\, and technology. His work explores the intersection between sound art\, computer music\, locative media\, and human-machine interaction.\nAs an individual artist\, Tomás’ activity is centered around ultranoise.es and focuses on performances and installations with extreme and immersive sounds and environments. He has exhibited and performed at venues including Ars Electronica\, Sonar\, CTM\, IRCAM\, IEM\, KUMU\, SMAK\, NOVARS\, STEIM\, Steirischer Herbst\, and Alte Schmiede\, and in galleries and institutions throughout Europe and Latin America. \nMoisés Horta Valenzuela is a self-taught sound artist\, technologist\, musician\, and researcher from Tijuana\, Mexico\, based in Berlin. His work spans computer music\, neural audio synthesis\, conversational AI\, and the politics of emerging technologies\, approached through a critical lens that connects ancestral knowledge with contemporary digital culture. He has presented work internationally at Ars Electronica\, NeurIPS ML for Creativity & Design\, MUTEK México\, MUTEK AI Art Lab Montréal\, Transart Festival\, CTM Festival\, Elektron Musik Studion\, and the Sound and Music Computing Conference\, among others. \n  \nSe-Lien Chuang and Andreas Weixler: plastique\ninteractive audiovisual comprovisation for e-guitar\, green leaves & i-hands – GLISS – Green Leaves Imaginary Scenic Score\nDuration: ca. 8 min \nAbout the artists\nAndreas Weixler\, born 1963 in Graz\, Austria\, is a composer of computer music with an emphasis on intermedia real-time processing. 
He teaches at the mdw Vienna and at Interface Cultures in Linz\, and serves as associate university professor at the CMS – the computer music studio of Anton Bruckner University in Linz\, where he initiated the Sonic Lab\, an intermedia concert hall.\nHe studied contemporary composition at KUG in Graz\, Austria\, earning his diploma under Beat Furrer\, complemented by international projects and residencies. \nSe-Lien Chuang is a composer born in Taiwan in 1965 and based in Austria since 1991. Her work focuses on contemporary instrumental composition and improvisation\, computer music\, and audiovisual interactivity. She has presented works and lectures internationally in Europe\, Asia\, and the Americas at events such as ICMC\, ISEA\, and NIME. From 2016 to 2019\, she taught at the Computer Music Studio at Bruckner University Linz. Since 1996\, she has co-run Atelier Avant Austria\, specializing in audiovisual interactive systems\, real-time processing\, and computer music. \n  \nOscar Corpo: Shamanic Protocol\nShamanic Protocol is an online sound ritual performed by a partially damaged virtual entity. Its memory is an incomplete and corrupted archive\, composed of residual sonic materials related to shamanic rituals\, music therapy\, sound-based healing practices\, and data derived from musical epigenetics. Reshaped by the available data and the presence of connected users\, these fragments are reprocessed and reorganised each time the system is accessed\, generating a sonic ritual that follows a recognisable structure yet never manifests in the same way twice. The sound ritual has no declared purpose: it remains unclear whether the entity performs the rite as an attempt to repair itself\, an act of archive restoration\, a process meant to affect human listeners\, or simply because this process constitutes its way of operating. The variability of the outcome may suggest either a gradual recovery or a progressive deterioration of the system. 
The resulting sonic output exists in a space between therapeutic effect\, system malfunction\, and autonomous algorithmic process. The shifts between fragile calm\, overload\, interruption\, and recovery reveal the instability of the system that generates it. No clear boundary is drawn between healing\, malfunction\, or expression: these states coexist and remain indistinguishable within the process. The rite can be experienced as a purely electronic process\, or human performers\, in any instrumental or vocal configuration\, may take part in its enactment. Musicians are invited to participate in the ritual rather than interpret a fixed musical text. Guided by an open\, interpretative score\, performers do not execute predefined material but engage in the ritual itself\, interacting with the electronic layer by listening\, responding\, and aligning their gestures with the evolving sonic environment. The notation offers indications of behaviour\, density\, register\, and gesture rather than prescribed material; in this way\, performers take part in the rite by freely amplifying\, refracting\, and destabilising the entity’s activity. The score prescribes no precise instrumentation or techniques; in this instance\, the ritual is performed with a string ensemble alongside soprano saxophone\, bass clarinet\, piano\, and percussion. Performers do not guide the system\, nor do they follow it; instead\, they remain in a state of attentive coexistence with its unfolding behaviour. Each performance is therefore situated\, shaped by specific conditions\, configurations\, and presences.\nThe process does not call for interpretation: repair and damage are no longer separable; function and meaning no longer distinguishable. \nAbout the artist\nOscar Corpo (born 8 April 1997\, Naples\, Italy) is an Italian composer based in Hamburg. 
He studied Composition and Multimedia Composition in Naples\, and is now a PhD candidate at the HfMT Hamburg\, focusing on AI and collective improvisation with Ensemble 404. His work spans electronic\, instrumental\, vocal\, improvisation\, and music theatre. He has collaborated with Alexander Schubert\, Berliner Philharmoniker\, La Biennale di Venezia\, and Lux Nova Duo\, among others. \n  \nRob Canning: A Walk in Polygon Field\nA Walk in Polygon Field is a graphic score environment for controlled improvisation\, composed for 1–4 instrumentalists with electronics and surround diffusion. Three polygons—pentagon\, hexagon\, heptagon—rotate at different rates\, producing polymetric phase relationships (5-against-6-against-7). Performers activate objects orbiting these shapes\, interpreting compound visual motion as sonic material. An outer ring generates OSC data driving spatial processing.\nThe score defines states\, behaviours\, and constraints; performers negotiate what these structures sound like. Each polygon side represents a discrete performance state—pitch region\, articulation\, texture—but specific mappings remain open. Musicians enter and withdraw from a shared texture whose density and pacing emerge from collective decision-making.\nAuthored entirely in SVG\, the work embeds performance semantics directly into visual element identifiers\, executed by a browser-based runtime on networked tablets. This approach\, detailed in the accompanying paper “Scores That Run: Graphic Notation with Embedded Performance Semantics\,” demonstrates how open web standards support animated notation without specialised infrastructure. Each performance traces a different route—music negotiated through shared encounter with a moving score. 
\nA full guide to interpretation\, programme notes\, and supporting materials\, including the SuperCollider live electronics patch\, are available online: \nhttps://robcanning.github.io/oscilla/compositions/polygonfield2026/ \nAbout the artist\nRob Canning (Dublin\, 1974) is a composer\, improviser\, and creative technologist whose work explores animated notation\, improvisation\, and the dynamics of networked musical systems. He holds a PhD in composition from Goldsmiths\, University of London\, where his research examined distributed authorship in computer-assisted music. A long-time advocate of Free and Open Source Software\, he develops Oscilla\, an open-source platform for animated graphic notation and networked performance. \n  \nDenis Polec: DEPRECATED  \nDEPRECATED establishes a recursive feedback loop between a biological subject and a cluster of interpretative algorithms. The work investigates the friction between human indeterminacy and machine determinism. \nThe Setup\nA lone performer occupies the center of the stage\, stripped of traditional instrumentation. Facing them is a “panopticon” of sensors: computer vision cameras and open microphones. The human subject oscillates between legible behavior and “abnormal” states—engaging in erratic gestures\, non-semantic vocalizations\, and visceral spasms designed to evade learned pattern recognition. \nThe Process\nSimultaneously\, three isolated AI instances dissect this input in real time. Unable to process the chaotic reality of the “Now\,” the systems hallucinate: computer vision misinterprets trauma as choreography; a large language model forces these errors into a coherent narrative; and neural audio synthesis re-synthesizes the fabrication into sterilized perfection. \nAbout the artist\nDenis Polec operates at the intersection of sound art and algorithmic criticism. 
His practice rejects the notion of human-machine collaboration\, focusing instead on the friction\, latency\, and inherent violence of predictive systems. Polec constructs adversarial performance systems that expose the limitations of neural networks when confronted with the chaotic reality of the biological body. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/club-concert-3c/
LOCATION:Speicher am Kaufhauskanal\, Blohmstraße 22\, Hamburg\, 21079\, Germany
CATEGORIES:13-05,Club Concert,Music,Special Event
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260514T110000
DTEND;TZID=Europe/Amsterdam:20260514T173000
DTSTAMP:20260423T153918
CREATED:20260421T182814Z
LAST-MODIFIED:20260421T190152Z
UID:10000187-1778756400-1778779800@icmc2026.ligeti-zentrum.de
SUMMARY:Listening Room 2
DESCRIPTION:Makuta\nFelipe Otondo \nMechanization\nYu-Cheng Huang \nSpazio di accumulazione\nLeo Cicala \nThe Lament of Prince Hamlet\nChen Mu Hsi \nThe Throat of the Earth\nYe Peng \nTime Crystal Structure II\nHe Jing \nTriangle\nRay Fields \nUndercurrents\nAntonio Scarcia \n蜜蜂之后\nLia Su \n 
URL:http://icmc2026.ligeti-zentrum.de/event/listening-room-2-4/
LOCATION:Hamburg University of Technology\, Building A (A 0.14)\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:14-05,Listening Room,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260514T110000
DTEND;TZID=Europe/Amsterdam:20260514T173000
DTSTAMP:20260423T153918
CREATED:20260421T185520Z
LAST-MODIFIED:20260421T185520Z
UID:10000182-1778756400-1778779800@icmc2026.ligeti-zentrum.de
SUMMARY:Listening Room 1
DESCRIPTION:Dream Voyager: A Pilgrim of the Infinite\nZoe Yi-Cheng Lin \nMotes of Time\nYuming Sun \nEaves Verse\nShunhang Huang \nFulgore for audiovisual\nTakeyoshi Mori \nFusion of Horizons\nChi Wang \nIntertwine\nJohn Thompson \nNeon Reverie (ver. 2)\nWoon Seung Yeo and Ji Won Yoon \npORCELAIN\nDave O Mahony \nStellar Vibrato\nXingle Zhang \nStringDance: Ripples\nOuyang Mingshan \nVibe Higher (ver. 3)\nJi Won Yoon and Woon Seung Yeo \nWaves\nBike Öner \nOf Clouds and Clocks\nTom Williams
URL:http://icmc2026.ligeti-zentrum.de/event/listening-room-1-4/
LOCATION:Hamburg University of Technology\, Building A (A 0.18)\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:14-05,Listening Room,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260514T120000
DTEND;TZID=Europe/Amsterdam:20260514T140000
DTSTAMP:20260423T153918
CREATED:20260415T122311Z
LAST-MODIFIED:20260417T115252Z
UID:10000122-1778760000-1778767200@icmc2026.ligeti-zentrum.de
SUMMARY:[Off-ICMC] Rehearsal & Concert Visit for Families
DESCRIPTION:Photo: Max Henschel\n  \nWhat does contemporary music sound like? What happens during the rehearsals? And what challenges might occur? We’ll look into these questions during rehearsal and concert visit for families.  \nFor families with children aged 7+\nRegistration required here \n  \nThe Off-ICMC\nMusic is what brings us together\, even when everything else pulls us apart.\nMusic everywhere—it is part of our everyday lives. And yet\, we’re hearing it performed live on analog instruments less and less. Instead\, it often reaches us through speakers or headphones\, as files\, from the cloud. What does music mean to you? What does it sound like today? Where does it begin—and where does it end?\nThe ligeti center invites you to listen more closely and discover new sounds—to explore\, experiment\, and play. This year\, ICMC HAMBURG 2026 revives an old tradition: the Off-ICMC\, a free and accompanying festival curated for the general public and anyone curious about computer music. \nAll Off-ICMC events are free of charge.  \n\n  \n \n 
URL:http://icmc2026.ligeti-zentrum.de/event/off-icmc-rehearsal-concert-visit-for-families/
LOCATION:Hamburg University of Technology (TUHH)\, Am Schwarzenberg-Campus 5\, Hamburg\, 21073\, Germany
CATEGORIES:14-05,Music,Off-ICMC
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260514T133000
DTEND;TZID=Europe/Amsterdam:20260514T150000
DTSTAMP:20260423T153918
CREATED:20260421T162627Z
LAST-MODIFIED:20260423T122046Z
UID:10000093-1778765400-1778770800@icmc2026.ligeti-zentrum.de
SUMMARY:Lunch Concert 4A
DESCRIPTION:Concert 4A marks a special moment of collaboration between Hamburg’s local music scene and international composers. Particular highlights are two world premieres written especially for the renowned Hamburg-based double bassist John Eckhardt. Known for his explorations at the boundaries between new music and sound art\, Eckhardt here pushes the sonic extremes of his instrument in dialogue with the computer.\nAlongside the focus on the double bass\, the audience can expect a journey ranging from “electroacoustic romanticism” to AI-driven violin improvisations. \n  \nProgram Overview\nULYSSES II\nRoberto Cipollina and Eleonora Podestà \nThe Week\nHenrik von Coler \nEmpress Luo\nYao Hsiao \nconfim\, assim\, sem fim\nRodrigo Pascale \nThe Water lily in the blaze\nNatsuki Kambe \nLa Nuit Bleue\nZhixin Xu and Yunze Mu \n  \nAbout the pieces & artists\nRoberto Cipollina & Eleonora Podestà: ULYSSES II\nUlysses 2 is a project conceived by composer Roberto Cipollina. The work serves as both a performative and a technological exploration of real-time performer-machine interaction\, emphasizing the role of AI not as a passive tool\, but as an active and adaptive musical agent within the creative process.\nThe work is conceived as a closed-form improvisational structure for acoustic instrument and real-time interactive electronics\, developed specifically to explore the creative potential of artificial intelligence in relation to the performer’s improvisation.\nAt the core of Ulysses 2 is the integration of Somax2\, a real-time generative system developed within the Max environment\, which enables responsive electronic behavior through the analysis and transformation of live performance data.\nWhile the project fully embraces aleatory elements and the concept of extemporaneity\, it also adheres to an organized formal structure that guides its overall development. 
The performer engages with a series of prompts provided by the composer\, ensuring a coherent trajectory.\nThe electronic component\, built from a database of sampled sounds recorded by Eleonora Sofia Podestà\, responds and adapts to the performer’s expressive gestures in real time. Through Somax2’s processing\, the system generates musically congruent textures and transformations.\nThis piece highlights the software’s ability to translate performance parameters into musically coherent electronic responses\, fostering a dynamic and co-creative dialogue between human performer and machine intelligence. \nAbout the artists\nRoberto Maria Cipollina is a composer and researcher in immersive technologies applied to music\, whose works have been performed across Europe and America. His compositions include A Lover’s Tale (2018)\, Alchimie (2020)\, Lu Re d’Amuri (2022)\, and Al-Qantarah (2024). Author of two musicological books and lecturer on palazzi della memoria in music\, artificial intelligence\, and virtual reality\, his works are internationally performed and published by Da Vinci Records. \nEleonora Podestà \n  \nHenrik von Coler: The Week\nOne Week is an acousmatic composition that integrates a staged reading into live performance. Drawing on an introspective autobiographical text\, it reflects on emotional states and personal experiences during periods of transition and uncertainty. The work may be understood as a form of Electroacoustic Romanticism: in line with the 2026 ICMC theme\, One Week translates romantic ideas into the language of electroacoustic music. In doing so\, it explores a balance between technological investigation and personal expressivity. At the same time\, the piece seeks to reach a broader range of listeners by foregrounding emotional engagement and incorporating a contemporary text that resonates with present-day cultural contexts. 
\nThe tape part of One Week is constructed from autobiographical field recordings combined with analog signal processing and experimental sound synthesis. In addition to conventional contemporary techniques\, the production draws on echo chambers\, analog and digital tape machines\, and vintage synthesizers and effects units. This process produces dense\, noisy\, and organic timbres and textures while consciously engaging with recognizable tropes of acousmatic music. During performance\, the tape part is live-diffused by the composer. Delivered in Ambisonics (up to seventh order)\, the work can be realized on a wide range of spatial sound systems\, in both 2D and 3D configurations. \nThe staged reading is performed by a musician and multimedia artist zl!ster\, who collaborated closely with the composer to refine the original text for performance. Through this revision\, the text is reshaped for the present moment while remaining anchored in the work’s autobiographical framework. \nAbout the artist\nPerformer: zl!ster is a Panamanian-American artist based out of Atlanta\, Georgia. His music embodies self-exploration through misinterpretations and exaggerations of real life. At times\, his work is a direct reflection of self; at others\, it is distorted\, shaped more by perception than reality. Rooted in curiosity and at times bravado\, his music lives in the realms of alternative rap and indie rock. \nComposer: Henrik von Coler is a musician and researcher\, working at the intersection of art\, science and technology. In 2024 he founded the Lab for Interaction and Immersion (L42i) at Georgia Tech’s School of Music. Before that he was the director of the Electronic Music Studio at TU Berlin and head of the Computer Music Team at the Audio Communication Group. In his research and creative work\, Henrik has explored various aspects of electronic music and musical instruments. 
This includes interface design\, algorithms for sound generation and experimental concepts for composition and performance. Most of his projects treat space as an integral part of music. In 2017 he founded the Electronic Orchestra Charlottenburg – an ensemble of up to 12 electronic musicians – to explore music interaction on immersive loudspeaker systems. He has since worked on ways to enhance how musicians and audiences experience spatial music and sound art. \n  \nYao Hsiao: Empress Luo\nxxx \nAbout the artist\nxxx \n  \nRodrigo Pascale: confim\, assim\, sem fim\n“confim\, assim\, sem fim” was composed in 2024 during the Laboratorio de Composición Mixta of Resonancias Iberoamericanas. It is dedicated to the Festival Expresiones Contemporáneas and to Francisco. This composition explores the concept of infinity within limited systems.\nThe pre-compositional research involved extensive explorations of harmonies based on mathematical ratios. I established a structure featuring 15 harmonies\, beginning with two frequencies at a ratio of 16/15. Each subsequent harmony added a new frequency derived from the initial ratio\, multiplied by a series of ratios following the sequence [16/15\, 15/14\, 14/13\, 13/12\, 12/11\, 11/10\, 10/9\, 9/8\, 8/7\, 7/6\, 6/5\, 5/4\, 4/3\, 3/2\, 2/1]. Notably\, some harmonies—including the second—utilized this sequence in reverse. For instance\, the ratio [15/14] was employed as the foundation for the first two frequencies\, while the third harmony emerged from multiplying [15/14] by [16/15]\, yielding [8/7].\nThe forward sequence often led to more dissonant harmonies\, while the backward sequence inclined towards consonance\, and I frequently juxtaposed the two. An exception occurred between harmonies 13 and 14\, where both utilized forward sequences to create heightened tension\, concluding in a consonant 15th harmony. The sequence is built from a descending series of integers (from 16 down to 2)\, each divided by the integer directly below it. 
This approach allows for the potential to extend beyond 2/1 to 1/0\, thus engaging with a well-known mathematical problem. Since the result of a division grows as the denominator shrinks\, division by zero is said to “tend to infinity.”\nIn this exploration\, I realized that the logical conclusion of the composition was to approach infinity musically. However\, I confronted the challenge that the double bass can only produce a finite range of sounds\, and that human hearing spans approximately from 20 Hz to 20 kHz. Faced with this problem\, I sought solutions that transcended the confines of the system itself. This led me to investigate how the limitations of our auditory perception could be brought to the forefront\, creating illusions of seemingly ever-rising glissandi and of rhythm turning to pitch. The transformation of percussive sounds into frequencies and the use of Shepard tones played a crucial role in this composition.\nconfim\, assim\, sem fim delves into the boundaries of auditory perception\, aiming to investigate the concept of infinity within limited systems. This composition begins with a sequence of harmonies\, where subtle facets of infinity are explored through the techniques of the double bass. In its culminating section\, the work unveils the full potential of this exploration by incorporating exceptionally high frequencies and an enduring reverberation\, creating an immersive sonic landscape that invites listeners to experience the infinity within these media. \nAbout the artist\nRodrigo Pascale (b. 1996) is an internationally awarded Brazilian composer whose works have been performed worldwide by leading ensembles including JACK Quartet\, ICE\, MCME\, Splinter Reeds\, loadbang\, Hypercube\, Hinge\, and Sound Icon. A Prix CIME 2025 recipient and Gaudeamus Award 2026 Finalist\, he is pursuing a DMA at Peabody and has studied with Haas\, Kampela\, Fineberg\, Wubbels\, and Hersch. 
\n  \nNatsuki Kambe: The Water lily in the blaze \nI attempted to compose a piece that makes use of the wide range and rich timbral possibilities of the contrabass. In addition to the instrument’s inherent variety of tone colors\, I further explored new sounds through live electronics.\nThe low register conveys a powerful\, flame-like energy. The high register\, produced through flageolet harmonics\, has a beautiful tone with a delicate charm reminiscent of water lilies. These two contrasting elements are brought together into a single image: a burning sunset reflected on a pond\, and water lilies blooming in its shadow.\nIn Max\, I used TR.lib by Professor Takayuki Rai. Throughout the piece\, grbFM is used extensively: in the low register it generates noise such as quarter tones\, while in the high register it creates chordal textures inspired by the Japanese traditional wind instrument shō.\nI would like to express my heartfelt gratitude to Professor Takayuki Rai for the many valuable suggestions he provided during the creation of this work. \nAbout the artist\nNatsuki Kambe was born in 2004 in Yokohama\, Japan. She began studying piano at the age of five and started composition studies with Kazuo Mise at the age of fifteen. In 2020\, she graduated from the Music Department of Toho Girls’ High School.\nIn the same year\, she entered Toho Gakuen College of Music as a composition major and is currently a third-year student (as of January 2026). Since April 2024\, she has been studying computer music under Takayuki Rai. \n  \nZhixin Xu and Yunze Mu: La Nuit Bleue\nLa nuit bleue is a piece written for solo harpsichord and live electronics. After three years of harpsichord study\, I had a strong desire to write a piece for harpsichord and live electronics. 
After analyzing the spectrum of the harpsichord sound and looking through pieces such as Saariaho’s Jardin Secret II and Cage’s HPSCHD\, I realized that live spectral processing of this kind of idiophonic sound would be a big challenge because of the broad frequency distribution of its spectrum. So\, I decided to use both fixed sounds and live processed sounds in the electronic part. Jardin Secret II and HPSCHD inspired me a lot while I was looking for sounds for the electronics. Both contain noisy and glitchy sounds in their tape parts that are in some respects homogeneous with the harpsichord sound; although somewhat radical for the time they were composed\, they work well with the harpsichord. With this idea\, I set the tone of the timbral character for this piece. \nAbout the artist\nZhixin Xu is a composer\, sound artist and computer music researcher based in Shanghai\, China. His compositions often involve electronics\, sometimes generated by software he develops himself. Much of his recent music has focused on exploring how purely computer-generated sound materials can be used along with musical instruments and purely acoustic sounds. His music and multimedia works have been heard in the U.S.\, Europe and Asia at many events\, including the ICMC and SEAMUS conferences.\nXu holds a Doctor of Musical Arts degree from the University of Cincinnati’s College-Conservatory of Music\, where he studied with Mara Helmuth\, and earlier degrees from CCM and the Shanghai Conservatory of Music. He is now an assistant professor at Shanghai Jiao Tong University. His compositions are available on the ABLAZE label. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/lunch-concert-4a/
LOCATION:Hamburg University of Technology\, Building I\, Audimax 2\, Denickestraße 22\, Hamburg\, 21073\, Germany
CATEGORIES:14-05,Concert,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260514T180000
DTEND;TZID=Europe/Amsterdam:20260514T190000
DTSTAMP:20260423T153918
CREATED:20260415T122709Z
LAST-MODIFIED:20260421T201118Z
UID:10000123-1778781600-1778785200@icmc2026.ligeti-zentrum.de
SUMMARY:[Off-ICMC] Radioballett | Körperfunkkollektiv: "Fragment"
DESCRIPTION:Photo: Felix Konerding\n  \nRadioballett is an interactive performance that draws you into another world through wireless headphones\, where you and other participants can actively shape the space together.  \nThe piece “Fragment” explores the boundaries between private and public life through human experiences in both real and virtual worlds. It invites everyone to reflect on the balance between digital and “offline” existence and to engage with the interplay between social interaction and online networks. \nRegistration required. \n  \nThe Off-ICMC\nMusic is what brings us together\, even when everything else pulls us apart.\nMusic is everywhere—it is part of our everyday lives. And yet\, we’re hearing it performed live on analog instruments less and less. Instead\, it often reaches us through speakers or headphones\, as files\, from the cloud. What does music mean to you? What does it sound like today? Where does it begin—and where does it end?\nThe ligeti center invites you to listen more closely and discover new sounds—to explore\, experiment\, and play. This year\, ICMC HAMBURG 2026 revives an old tradition: the Off-ICMC\, a free and accompanying festival curated for the general public and anyone curious about computer music. \nAll Off-ICMC events are free of charge.  \n\n  \n \n 
URL:http://icmc2026.ligeti-zentrum.de/event/off-icmc-radioballet-korperfunkkollektiv-fragment/
LOCATION:Town Hall Square Harburg\, Harburger Rathausplatz 1\, Hamburg\, 21073\, Germany
CATEGORIES:14-05,Music,Off-ICMC
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260514T190000
DTEND;TZID=Europe/Amsterdam:20260514T210000
DTSTAMP:20260423T153918
CREATED:20260421T163025Z
LAST-MODIFIED:20260422T124019Z
UID:10000096-1778785200-1778792400@icmc2026.ligeti-zentrum.de
SUMMARY:Evening Concert 4B
DESCRIPTION:Concert 4B presents the full range of contemporary computer music in a chamber ensemble setting. Ensemble 404—Hamburg’s specialists in new music—navigates a program that spans from highly spatialized sound worlds to audiovisual metamorphoses.\nExperience how physical instruments meet the precision of algorithms\, creating new hybrid identities in the process. \n  \nProgram Overview\nKryptobioza\nLidia Zielinska \nTide\, breath\nZihan Wang \nEverybody Loves Me\nHoward Kenty \nPresent-Day Jakuchu Series: Butterfly Pictures “Inachis io”\nNaotoshi Osaka \nComing and Vanishing \nYixuan Zhao \nZusammenspiel I\nJavier Alejandro Garavaglia \nVesscape\nDanni Zhao and Congren Dai \n  \nAbout the pieces & artists\nLidia Zielinska: Kryptobioza\nCryptobiosis is a reversible\, temporary state of extreme reduction in life activities of a composer\, as a response to unfavourable environmental conditions. \nAbout the artist\nLidia Zielinska (*1953) – Polish composer\, professor emeritus of composition and director of the Electroacoustic Music Studio at the Poznan Academy of Music; numerous awards for orchestral\, multimedia\, and electroacoustic works; books\, papers\, guest lectures\, summer courses in Europe\, both Americas\, Asia\, New Zealand; vice-president of the Polish Society for Electroacoustic Music. \n  \nZihan Wang: Tide\, breath\nThis work integrates spatialised fixed-media electronic music with semi-improvised acoustic instrumental performance. Animated scores and sound scores are employed to guide performers and to synchronise their actions with the electronic sections. The compositional focus is spatial counterpoint\, extending the interplay of traditional contrapuntal voice relationships into three-dimensional space. This approach generates perceptible parallels\, interweaving\, imitation\, and conflict between instrumental and electronic elements through the parameters of position\, distance\, diffusion\, and timbre. 
Spatial attributes therefore function as primary compositional parameters rather than post-production effects.\nThe work is inspired by reflections on the macro and micro-structures of two kinds of sound: human crowds and natural environments. Through extensive field recording\, I observed a shared underlying principle: both soundscapes arise from the continuous accumulation and interaction of innumerable micro-sonic events\, producing macro-level shifts in energy\, fluctuations in density\, and emergent directional tendencies. For example\, footsteps\, conversations\, breathing\, and whispers in a crowd collectively form an ever-shifting granular timbre. Similarly\, natural sounds such as rain\, wind\, rivers\, and flocks of birds can exhibit comparable behaviours. This work seeks to establish a perceptual and structural connection between these two sound worlds through electronic composition. \nAbout the artist\nZihan Wang is an electroacoustic music composer\, film composer\, and sonic artist. He is currently a postgraduate research student at Monash University\, Melbourne\, Australia\, where his work investigates compositional strategies for ambisonics-based environments. His research engages with Robert Normandeau’s concept of timbre spatialisation and Denis Smalley’s theory of spectromorphology\, with a particular emphasis on timbre\, spatial articulation\, and electroacoustic composition. His creative practice includes fixed-media electroacoustic works\, sound installations\, animated score composition\, and film scoring. His work has been presented at venues and conferences including TENOR 2025 and the Melbourne International Film Festival (MIFF). \n  \nHoward Kenty: Everybody Loves Me\n“Everybody Loves Me” is a piece for voice\, percussion\, and live electronics that takes the words of Donald Trump as its only source material to depict a hellish kinetic nightmare. In this incarnation\, the composer performs the vocals and controls the electronics onstage\, joined by a percussionist. 
\nAbout the artist\nHowie Kenty is a Brooklyn-based composer and performer\, occasionally known by his musical alter-ego\, Hwarg. His music\, called “remarkable” with “astonishing poetic power” (Intl Compendium Prix Ars Electronica)\, is stylistically diverse\, encompassing ideas from contemporary classical\, electronic\, rock\, and ambient genres\, as well as sound art\, political issues\, and visual and theatrical elements. Howie is an Assistant Professor in Studio Composition at Purchase College. Listen at http://www.hwarg.com. \n  \nNaotoshi Osaka: Present-Day Jakuchu Series: Butterfly Pictures “Inachis io”\nIto Jakuchu (1716–1800) was a mid-Edo period Japanese painter renowned for his brilliantly colored depictions of plants and animals. I have long been fascinated by his works. There was a time when I myself collected butterflies\, and I was deeply captivated by the designs and patterns on their wings. This piece is inspired by those wing patterns\, transforming their visual designs into musical imagery. Jakuchu also painted butterflies\, and with the idea of composing as if I myself were Jakuchu painting a picture\, I titled this work as part of my “Present-Day Jakuchu” series.\nWhen visual and auditory perception are viewed at a higher level of abstraction\, they share many common qualities. In this work\, the visual impressions of the butterfly are linked to the sounds and musical structure.\nInachis io (The European Peacock Butterfly) has eye spot patterns on a reddish brown ground\, reminiscent of a peacock’s feathers\, which gives the species its name. Although it is not found in North America\, South America\, or Oceania\, it is widely distributed across the Eurasian continent\, including Europe and Asia. Many butterflies of the Nymphalidae family are elegant in appearance\, and this species is no exception; it can be seen in many places. 
In the composition\, I developed the music around two motifs: the background coloration and the eye spot patterns. Unlike my previous work\, this piece does not depict flight or resting behavior; instead\, it focuses solely on the coloration and patterns visible when the wings are fully spread.\nThis piece was originally written in 2023 for violin and piano. For this performance\, it has been newly expanded with an added electroacoustic part\, making this the premiere of the updated version. The electroacoustic materials were created as fixed media\, primarily using granular synthesis and FM synthesis. However\, the sound files are structured as passage-level cues\, and their playback timing is performer-controlled and triggered in real time. \nAbout the artist\nNaotoshi Osaka received his Master’s degree from Waseda University and\, after working at NTT Laboratories\, has pursued research and composition in electroacoustic music. His works have been selected for the ICMC five times and for the New York City Electroacoustic Music Festival (NYCEMF) three times. He served as President of the Japan Society for Sonic Arts (JSSA) from 2009 to 2018. He is currently a research fellow at Waseda University and Tokyo Denki University\, holds a Ph.D. in engineering\, and is Professor Emeritus at Tokyo Denki University. \n  \nYixuan Zhao: Coming and Vanishing  \nComing and Vanishing is an audiovisual work for solo flute and electronics that explores a transient and unstable phenomenon.\nThe flute interacts closely with the electronic layer through air sounds\, breath tones\, and extended techniques. Pitch and noise are deliberately blurred\, allowing the instrument to function not as a melodic foreground but as a fluctuating presence. The electronic part is primarily built from processed human whispers and breaths\, materials detached from linguistic meaning. Through subtle layering and diffusion\, the voices lose semantic clarity and become abstract sonic matter. 
Acoustic and electronic sound exist in a continuous state of mutual negotiation\, shaping and destabilizing one another in real time.\nThe visuals draw inspiration from traditional Chinese landscape painting while incorporating a surrealist sensibility. Through gradual transformations of light and shadow\, the imagery reveals and amplifies microscopic details within the sound. Rather than illustrating the music\, the visuals function as a parallel perceptual layer\, extending the listening experience into a spatial and visual field.\nSound and visuals are not merely layered media; they reveal a dynamic process\, existing only within the persistent tension between appearance and disappearance\, presence and loss\, immediacy and dissolution. \nAbout the artists\nComposer: ZHAO Yixuan is a composer\, a lecturer at the Dept. of Music AI and Music Information Technology\, Central Conservatory of Music\, China\, and a visiting researcher at the Royal Birmingham Conservatoire\, UK.\nShe has been dedicated to exploring the practice of digital audio and artificial intelligence in music composition and collaborating with performers to search for more possibilities in technological performance environments. Her compositions span interactive music\, electroacoustic music\, contemporary music\, and new media art. \nVisual Designer: WU Shuangqi (/’su:ki/) is an inter-media creator and visual-physical experimenter engaged in visual media\, contemporary theatre\, physical improvisation\, visual design\, sound\, audiovisual work\, photography\, and editing.\nHer creations are mainly based on physical experience\, deconstructing and visually outputting the body and external information\, intending to explore the assembly\, pattern\, motivation and form in the algorithms of flesh and behaviour\, to gain extension in perversion and mutation. 
\n  \nJavier Alejandro Garavaglia: Zusammenspiel I\nAn electroacoustic composition in which a live viola and a live clarinet (in A) are combined with spectral digital DSP effects and multi-channel spatialisation in 8.1 surround sound. The spatialisation applies fourth-order Ambisonics in real time within a system developed by the composer and documented in several international papers and articles (including MIT’s Computer Music Journal). \nAbout the artist\nAward-winning composer\, violist\, sound artist and retired university music professor with a broad and interdisciplinary approach to digital art and related technologies. His work focuses primarily on various aspects of music/sound composition and performance supported by computing\, with a constant search for new sonic experiences combining new developments in computer-aided sound synthesis\, live interaction\, extended instrumental techniques and sound spatialisation. His compositions are performed and broadcast in Europe\, America and Asia in world-renowned concert halls and by major broadcasters\, and include electroacoustic music (acousmatic\, interactive\, multimedia)\, instrumental music (e.g.\, solo instrument\, ensemble & orchestra) and sound art (e.g.\, installations). Much of his acousmatic music can also be found on commercial CDs by Edition DEGEM\, Cybele\, EMF\, etc. \nInfo: https://tinyurl.com/JavierGaravaglia \n  \nDanni Zhao and Congren Dai: Vesscape\nThis work repeatedly performs the same action: pouring sound into a hollow system. \nThe breath of the flute is not treated as lyrical material\, but as a continuously failing act: blowing\, gasping\, breaking\, and losing control. Pitches emerge again and again\, yet never settle. The electric bass introduces low-frequency pressure and inertia\, an irresistible downward pull that keeps the entire sound field at the edge of overload. 
\nA live electronic system analyses the performed sound using AI\, distributing features such as breath\, impact\, and pitch deviation across multiple “vessel” sound sources and visual entities. In its touring performance version\, the original vessel installation has been translated into an 8.1 spatial audio field\, allowing the acoustic presence and directional behavior of the vessels to be simulated through multichannel diffusion. These vessels are not metaphors for containers; they function as receivers of pressure\, being filled\, stretched\, and forced into vibration. The harder the music pushes\, the more unstable the vessels become; when the performer attempts to regain control\, the system exposes even more fractures. \nThe structure begins with an almost violent injection of energy\, gradually shifting into a direct confrontation between body and object. Unstable registers and microtonal deviations are continuously amplified; rhythm is fragmented into dense\, short bursts of broken gestures\, until the system briefly collapses. In the end\, sound is exhausted\, leaving only residual breath and unfinished pitch afterimages. \nThis is not a work about “generation”. It is a sustained experiment in pressure\, control\, capacity\, and limits. The system never truly responds to the performer; it merely records how pressure fails\, again and again. \nAbout the artists\nDanni Zhao is a Chinese composer and electronic music artist. She studies Electronic Music Composition at the Central Conservatory of Music\, where she received the National Scholarship and recommendation for postgraduate study. Her works have won awards at international composition and electronic music competitions and have been presented at events such as ICMC and major music festivals. She is active in concert music\, film\, documentary\, theatre\, and game scoring. \nCongren Dai is a PhD candidate at the Central Conservatory of Music\, specialising in Music AI. 
He holds an MRes in AI and Machine Learning from Imperial College London and an MSc in Data Science from King’s College London. Having interned in computer vision at Google and engaged in music AI projects at Huawei\, he now applies Large Language Models to musical score understanding and instrument recognition in his research\, alongside contributions to continual learning. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/evening-concert-4b/
LOCATION:Friedrich-Ebert-Halle\, Alter Postweg 34\, Hamburg\, 21075\, Germany
CATEGORIES:14-05,Concert,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260514T213000
DTEND;TZID=Europe/Amsterdam:20260514T233000
DTSTAMP:20260423T153918
CREATED:20260421T163434Z
LAST-MODIFIED:20260423T122902Z
UID:10000069-1778794200-1778801400@icmc2026.ligeti-zentrum.de
SUMMARY:Club Concert 4C
DESCRIPTION:Program Overview\nMerzmania\nGintas Kraptavicius \nImprovisation for Spheres \nCalvin McCormack \nMarsia 3\nJonathan Impett \noscheat\nMoritz Wesp\, Eric Haupt and Victor Gelling \nThe Skin of the Earth: Fragments\nPaulo C. Chagas \nThe Long Now III \nCat Hope and Juan Parra Cancino \nTape Speed and Feedback\nAndrew Loveless \n  \nAbout the pieces & artists\nGintas Kraptavicius: Merzmania\nAn electroacoustic live-electronics performance played on an instrument of my own design\, built from a computer\, Plogue Bidule software\, and a MIDI controller assigned to VST plugins. All software parameters are controlled and altered live in real time during the performance using the controller’s knobs and sliders. The performance consists entirely of synthesized sounds; no samples\, pre-recorded material\, or field recordings are used. Merzmania is a piece connecting classical music skills with today’s noise music (a slight allusion to the noise icon Merzbow). Its main playing method is real-time interaction with the computer\, which I use in all my live compositions: I treat the computer as a musical instrument just like any other acoustic instrument – like a guitar. Onstage I get the same emotional feeling playing the computer as playing any other acoustic or electric instrument. The main thing in a live performance is energy and emotion\, as at a rock’n’roll concert. Merzmania features the motif of the Lithuanian folk song “Teka\, teka šviesi saulė” (“The sun is rising\, the bright sun is rising”). \nAbout the artist\nGintas K (Gintas Kraptavičius) is a Lithuanian sound artist and composer living and working in Lithuania.\nToday Gintas works in the field of digital experimental and electroacoustic music\, making music for films and sound installations. His compositions are based on granular synthesis\, live electronics\, hard digital computer music\, and small melodies. 
He has collaborated with sound artists @c\, Paulo Raposo\, Kouhei Matsunaga\, David Ellis and many others\, and has released numerous records on labels such as Cronica\, Baskaru\, Con-v\, Copy for Your Records\, Bolt\, Creative Sources\, Sub Rosa and others.\nHe has been a member of the Lithuanian Composers Union since 2011. He has presented his works and performed at various international festivals\, conferences and symposiums\, including Transmediale.05\, Transmediale.07\, ISEA2015\, ISSTA2016\, the IRCAM Forum Workshops 2017 and 2025 (Paris)\, xCoAx 2018\, ICMC 2018\, ICMC-NYCEMF 2019\, ICMC 2022\, ICMC 2025\, NYCEMF 2020–2025\, the Ars Electronica Festival 2020\, 2023 and 2024\, Ars Electronica Forum Wallis 2025\, and FARM 2025.\nArtist in residence at DAR 2011 and 2016\, MoKS 2016\, and KKKC 2023.\nWinner of the II International Sound-Art Contest Broadcasting Art 2010\, Spain.\nWinner of the University of South Florida New-Music Consortium 2019 International Call for Scores in the electronic composition category. \n  \nCalvin McCormack: Improvisation for Spheres\nImprovisation for Spheres is a live electronic work for two custom spherical controllers with reactive visuals. Each sphere combines surface-embedded capacitive touch pads with an inertial measurement unit\, wirelessly transmitting sphere orientation and touch data. Each sphere sits in a chalice cradle with a ring of touch sensors embedded around the rim. The spherical form factor affords intuitive spatialization: the sphere’s rotation corresponds to the sound’s position in ambisonics\, making spatial movement as immediate and embodied as pitch selection. Touch pads support expressive melodic and harmonic performance\, and skin–touchpad contact area allows dynamic and timbral expression. The work explores the sphere as both instrument and spatializer\, where single gestures unite melodic\, timbral\, and spatial control. 
This audiovisual improvisation demonstrates how spatialization can be performed artistically rather than mixed\, elevated from post-production to real-time expression. \nAbout the artist\nCalvin McCormack is an MST student at the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford University. His research focuses on accessible HCI and inclusive design for musical applications. He also conducts research in auditory neuroscience and plays jazz guitar. \n  \nJonathan Impett: Marsia 3\nThis is the final piece of a series written for the installation Apollo e Marsia in 2024. This work expands the moment in time represented by Tintoretto in his painting La gara tra Apollo e Marsia (c.1545). Apollo\, playing a bowed instrument with sympathetic strings\, has been challenged by the satyr Marsia\, playing a woodwind instrument\, to see who is the greater musician. Ovid’s retelling of the story describes a terrible end for Marsia\, but in the moment depicted by Tintoretto both musicians are waiting for the judgement of Midas\, both trying to remember and assess what they and their competitor have just played. \nThe piece is therefore a play on the nonlinearity of memory under stress as both try to replay the performances in their mind. Moments are recalled\, replayed or intrude\, but are always changing in their reconstruction. Memories of themselves and of the other constantly modulate each other. New constructs emerge in memory through this process\, and obsessive recall generates attractors and mirrors; we know from recent neuroscience that remembering and imagining are essentially the same reconstructive process. \nAt its root\, the material all derives from two hymns to Apollo inscribed in stone at Delphi\, arguably the earliest remaining instances of music notation\, and likewise fragmented by erasures. Across time\, musicians have attempted to reconstruct this partially-lost memory in different ways\, creating new formations in the process. 
\nHere\, the Delphic material is subject to layers of nonlinear memory process\, implemented in OpenMusic as forward- and backward-moving wave phenomena\, sweeping up emergent patterns as they develop. This produces a score that often requires the performer to assimilate a polyphony of musical materials and physical behaviours as layers of memory. Analogous processes are used in the recorded and live sound processing\, largely through physical modelling\, cross-resynthesis and filtering – digital and analogue. This is in turn heard through a model of the stringed instrument of Marsia’s opponent\, Apollo. An AI brings the live performance into relation with the behaviours\, memory and projection of both competitors. \nAbout the artists\nJonathan Impett (b. 1956) is a composer\, trumpet player and writer. His work is concerned with the discourses and practices of contemporary musical creativity\, particularly the nature of the technologically-situated musical artefact. Activity in the space between composition and improvisation has led to continuous research in the areas of interactive systems\, interfaces and modes of collaborative performance. Recent works combine installation\, live electronics and computational models with notated and improvised performance\, using fluid dynamics as a unifying behavioural model. A new project\, Anamnesis\, takes a radical approach to AI\, identifying creative paths implied but unnoticed. He leads the research group “Music\, Thought and Technology” at the Orpheus Institute\, Ghent. \nRichard Craig (alto flute) was born in Glasgow. He studied at the Royal Conservatoire of Scotland and the Conservatoire de Strasbourg. He performs with groups such as Musikfabrik\, Klangforum Wien\, ELISION and\, in Scandinavia\, with CAPUT and Kammarensemblen. He has released two solo discs of contemporary works\, Vale and Inward\, and recorded for Another Timbre\, Wergo\, FHR\, Métier\, as well as SWR\, BBC and Finnish Radio. 
He is a celebrated advocate of contemporary music\, and his recent album of the Telemann Fantasias and his improvisations was lauded as “bold\, beautiful and clever” (Gramophone). He is also an improviser\, composer and teacher\, currently Director of Performance at the University of Edinburgh. \n  \nMoritz Wesp\, Eric Haupt and Victor Gelling: oscheat\nThis contribution presents oscheat\, a work-in-progress OSC-based interface designed to extend ensemble communication beyond conventional musical gestures. By providing a modular and user-friendly environment\, oscheat allows performers to directly control each other’s digital instruments\, enabling novel forms of interaction\, role-sharing\, and emergent musical structures in real time.\nOur instrumental system is structured into three functional sections reflecting core musical building blocks: synthesizers for melodic and harmonic material\, sequencers for rhythmic organization\, and samplers for vocal and sound-based material.\nAdditional functionality includes real-time MIDI recording and looping\, pitch mapping with support for alternative tunings\, spatialization\, and global macro controls for large-scale structural manipulation. Each performer manages their instruments individually while making the controls accessible through oscheat.\nMoritz Wesp\, Eric Haupt and Victor Gelling will play an eight-minute improvisation\, demonstrating oscheat’s potential for rapid musical exchange\, shared authorship\, and collective decision-making. By exposing critical control parameters to all participants\, the interface encourages social negotiation and flexible role allocation\, making it relevant for both creative research and educational contexts. \nAbout the artists\nMoritz Wesp lives in Cologne (GER) and plays trombone\, virtual trombone and other instruments that he designs\, programs and builds. 
As an improviser he works with ensembles such as Mariá Portugal Erosao\, Matthias Muche’s Bonecrusher and the Simon Rummel Ensemble. He also composes music and is part of the Audio-VR project SONA. \nEric Haupt is a guitarist and composer working in experimental music and punk. He completed his Bachelor of Music at the HfMT Cologne in 2018. He is a founding member of the ensembles Now My Life Is Sweet Like Cinnamon and Lawn Chair\, as well as the initiator of the experimental game-show performance Sport1. His music has been presented at festivals throughout Europe\, and his collaborations include the internationally renowned producers Olaf O.P.A.L. and Chris Coady. His punk compositions have been broadcast on international radio stations such as BBC Radio 6 Music. \nVictor Gelling is an improviser and composer who uses stringed instruments including\, but not limited to\, upright bass\, tenor banjo\, and pedal steel and non-pedal steel guitars\, in addition to pedals\, synthesizers and barely working self-coded computer programs to create sounds. Their work spans genres from jazz to noise to electric cowboy songs to complex music\, culminating in their large ensemble works with Trash & Post-Chaotic Music\, their alt-country/post-punk alias Slowklahoma\, solo works\, and their playing in the Jorik Bergman Trio. \n  \nPaulo C. Chagas: The Skin of the Earth: Fragments\nAbout the artists\nPaulo C. Chagas is a Brazilian-American composer and Professor of Composition at the University of California\, Riverside. With over 220 works across orchestral\, chamber\, electroacoustic\, audiovisual\, and multimedia formats\, his work integrates advanced technology and expressive depth. He studied in Brazil\, Belgium\, and Germany\, earning a Ph.D. from the Université de Liège\, and was composer-in-residence at the WDR Electronic Studio. 
A Fulbright Scholar (Berlin\, 2022–23) and ICMA board member\, he is widely performed and published.\nhttps://solo.to/paulocchagas \nBrazilian soprano Adriane Queiroz trained in Pará\, Missouri\, and Vienna. Since 2002/03 she has been a member of the Staatsoper Unter den Linden\, performing roles such as Pamina\, Micaëla\, Susanna\, and Liù. She has appeared at major venues including the Hamburg State Opera\, Semperoper Dresden\, and Wiener Festwochen\, and in concerts at the Musikverein and Konzerthaus Vienna. Her repertoire spans Mozart to contemporary works\, including Schönberg’s Erwartung and Nono’s La fabbrica illuminata\, with recent premieres under Sir Simon Rattle.\nwww.adrianequeiroz.com \n  \nCat Hope and Juan Parra Cancino: The Long Now III  \nThis is a scored work for live modular synthesiser performance with a backing track. It explores the potential of digital notation for modern electronic instruments\, in this case the contemporary modular synthesiser. It is named after the Long Now Foundation\, which aims to provide a counterpoint to today’s accelerating culture by encouraging long-term thinking and fostering responsibility in the framework of the next 10\,000 years. Music provides complex answers to the question “How Long is Now?”. In this work\, a slow descent by the performer into very low sound\, where pitch is either uncontrollable or almost inaudible\, reflects the limits of human action in\, and perception of\, sound as it passes through time\, suggesting that there may be other ways to listen and other ways to experience our passing through time.\nThe fixed media part of this piece was created at EMS in Sweden\, using the Buchla 200’s 4 x 259 waveform generators. The score is read on the Decibel ScorePlayer\, which also produces the fixed media part. 
\nAbout the artists\nJuan Parra Cancino studied Composition at the Catholic University of Chile and Sonology at the Royal Conservatoire The Hague\, earning a Master’s degree focused on electronic music composition and performance. In 2014\, he completed his PhD at Leiden University with his thesis “Multiple Paths: Towards a Performance Practice in Computer Music.” Parra has been a research fellow at the Orpheus Institute since 2009. \nCat Hope is an award-winning Australian composer who focuses on the extremes of sound – from extreme noise to barely audible delicacy. Her works have been performed worldwide by ensembles such as Yarn Wire (US) and the BBC Scottish Symphony (UK)\, and are published internationally on labels such as Hat (Hut) Art\, with her monograph CD Ephemeral Rivers winning the German Critics Prize in 2017. Cat is a represented composer with the Australian Music Centre\, and her music is published by Material Press. Her first opera\, Speechless\, won Best New Dramatic Work at the 2020 Art Music Awards. \n  \nAndrew Loveless: Tape Speed and Feedback\nThis performance presents a live realization of a dual-transport digital tape instrument designed for exploratory composition using playback speed manipulation and controlled feedback. It is performed using a custom-designed system which includes a live visualization that displays the spinning reels to indicate the playback speed of each transport. This provides an engaging visual element that helps the audience follow the sounds as they unfold.\nThe source of the sound material is the distinct\, high-pitched whine of a CRT television’s flyback transformer\, which was chosen for its nearly inaudible high-frequency energy and analog character. One transport initially auditions the sound at normal speed before being dramatically slowed to reveal its hidden textures. 
The second transport is then introduced at a carefully tuned speed ratio\, allowing the two sources to harmonize and phase against one another. These relationships produce beating patterns and periodic pulses that arise solely from speed interactions rather than from discrete sequencing or event-based control.\nAs the piece develops\, the output of one transport is routed into the input of the other\, introducing overdubbing and pitch-shifted layering. This process generates additional sound material while maintaining continuity with the original material. The performance is further extended by the routing configuration and playback speed chosen during the performance\, rather than fixed delay parameters. Throughout the performance\, changes are gradual and continuous\, allowing structure to emerge organically from simple operational constraints.\nThe performance concludes with a slow attenuation of the feedback\, allowing layers to dissipate organically. Instead of presenting a fixed composition\, the work is shaped through live interaction with the instrument. In doing so\, the performance situates historical tape music techniques within a contemporary digital context. \nAbout the artist\nAndrew Loveless is a graduate student in Music Technology at the Georgia Institute of Technology. Their work focuses on performance-centered instrument design and improvisation\, with an emphasis on preserving tape music techniques and making them more accessible through hands-on\, educational tools. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/club-concert-4c/
LOCATION:ligeti center\, Production Lab (10th floor)\, Veritaskai 1\, Hamburg\, 21079\, Germany
CATEGORIES:14-05,Club Concert,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260515T183000
DTEND;TZID=Europe/Amsterdam:20260515T213000
DTSTAMP:20260423T153918
CREATED:20260415T122932Z
LAST-MODIFIED:20260417T115457Z
UID:10000124-1778869800-1778880600@icmc2026.ligeti-zentrum.de
SUMMARY:[Off-ICMC] Sound Bar: "Sono\, ergo sum." – I sound\, therefore I am.
DESCRIPTION:Photo: Soundbar Kollektiv\n  \nThe Soundbar is a performative pop-up bar that brings together socializing\, drinks\, and jam sessions. It serves as a workshop and experimental space\, offering an environment for exploring sound\, finding inspiration\, and connecting with others. What does your favorite drink sound like? Join us for Soundbar’s vibrant sound journeys. Let your glasses sing and discover new levels of sensory experience at the bar.  no registration required \n  \nThe Off-ICMC\nMusic is what brings us together\, even when everything else pulls us apart.\nMusic everywhere—it is part of our everyday lives. And yet\, we’re hearing it performed live on analog instruments less and less. Instead\, it often reaches us through speakers or headphones\, as files\, from the cloud. What does music mean to you? What does it sound like today? Where does it begin—and where does it end?\nThe ligeti center invites you to listen more closely and discover new sounds—to explore\, experiment\, and play. This year\, ICMC HAMBURG 2026 revives an old tradition: the Off-ICMC\, a free and accompanying festival curated for the general public and anyone curious about computer music. \nAll Off-ICMC events are free of charge.  \n\n  \n \n 
URL:http://icmc2026.ligeti-zentrum.de/event/off-icmc-sound-bar-sono-ergo-sum-i-sound-therefore-i-am/
LOCATION:ligeti center\, Production Lab (10th floor)\, Veritaskai 1\, Hamburg\, 21079\, Germany
CATEGORIES:15-05,Music,Off-ICMC,Performance
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260515T193000
DTEND;TZID=Europe/Amsterdam:20260515T203000
DTSTAMP:20260423T153918
CREATED:20260415T123232Z
LAST-MODIFIED:20260417T115504Z
UID:10000125-1778873400-1778877000@icmc2026.ligeti-zentrum.de
SUMMARY:[Off-ICMC] Experimental Reading: Harburg. Das Buch – Excursions in Voice\, Photo & Music (German)
DESCRIPTION:Credits: Junius Verlag\n  \nAuthor Bärbel (Bascha) Wegner\, photographer Steven Haberland\, and musician Clarks Planet bring together text\, images\, and sound in a multi-layered exploration of the city of Harburg. Storytelling meets improvised music\, photographs interact with sound and field recordings.  \nThe familiar takes on new shapes\, improvisation unfolds—opening up fresh perspectives on the neighborhood\, not least from the vantage point of the Production Lab on the 10th floor.  \nIn German only.\nregistration required here \n  \nThe Off-ICMC\nMusic is what brings us together\, even when everything else pulls us apart.\nMusic everywhere—it is part of our everyday lives. And yet\, we’re hearing it performed live on analog instruments less and less. Instead\, it often reaches us through speakers or headphones\, as files\, from the cloud. What does music mean to you? What does it sound like today? Where does it begin—and where does it end?\nThe ligeti center invites you to listen more closely and discover new sounds—to explore\, experiment\, and play. This year\, ICMC HAMBURG 2026 revives an old tradition: the Off-ICMC\, a free and accompanying festival curated for the general public and anyone curious about computer music. \nAll Off-ICMC events are free of charge.  \n\n  \n \n 
URL:http://icmc2026.ligeti-zentrum.de/event/off-icmc-experimental-reading-harburg-das-buch-excursions-in-voice-photo-music-german/
LOCATION:ligeti center\, Production Lab (10th floor)\, Veritaskai 1\, Hamburg\, 21079\, Germany
CATEGORIES:15-05,Music,Off-ICMC,Performance
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260515T200000
DTEND;TZID=Europe/Amsterdam:20260515T220000
DTSTAMP:20260423T153918
CREATED:20260421T171512Z
LAST-MODIFIED:20260422T132957Z
UID:10000177-1778875200-1778882400@icmc2026.ligeti-zentrum.de
SUMMARY:Evening Concert 5B (Lübeck)
DESCRIPTION:Program Overview\nImprovising Machine #7325: Inside My Trumpet\, Again\nJeff Kaiser \nThe Letter\nMinho Kang \nMoloch whose mind is pure machinery!\nEric Lyon \nTidal Unit for Sonic Activities\nIlia Viazov and Nicola Leonard Hein \nRhythmic Traces | Twisted Electronics\nNicola Leonard Hein \nFound Violin x Aromantic Hobby \nDong Zhou \nTokens & Strings: an improvisation between an electric guitarist and a local LLM\nOlivier Jambois \n  \nAbout the pieces & artists\nJeff Kaiser: Improvising Machine #7325: Inside My Trumpet\, Again\n“Improvising Machine #7325: Inside My Trumpet\, Again” places the audience inside a trumpet\, exploring the instrument’s interior sonic world through an immersive human–machine improvisation system. The work is built from an extensive\, purpose-built sample library captured by placing microphones deep within the instrument. These samples document the mechanical sounds and embodied actions of trumpet performance without the instrument being played traditionally—collections of the sound of valves descending\, springs releasing\, air being compressed and released by slides\, valve caps loosening\, spit-valve gurgles\, and a range of non-tonal lip\, air\, and tongue sounds produced through the mouthpiece and leadpipe. \nTwenty-eight autonomous virtual agents (“robots”)\, authored by the composer in Max/MSP and hosted in Ableton Live\, inhabit a 360-degree ambisonic field surrounding the audience. Each agent draws from its own subset of the sample library and listens to the live trumpet performance in real time. Their behaviors fluctuate between responsive and indifferent\, generating shifting environments that range from highly chaotic to unexpectedly calm. As a result\, the improvising performer becomes entangled with a machine ensemble that both reflects and subverts the human gestures\, creating a continuously changing dialogue between human and technological agents. 
\nAbout the artist\nJeff Kaiser is a trumpet player\, media technologist\, and scholar. Classically trained as a trumpet player and composer\, Kaiser now takes an integrative\, systemic view that involves his traditional instrument\, emergent technology (in the form of custom interactive/generative software and hardware interfaces)\, space\, and audience: all being critical and integral participants in his performances. He gains inspiration and ideas from the rich history of experimental improvisation and composition\, as well as cognitive science\, and the vast timbral and formal affordances provided by combining traditional instruments with new and repurposed technologies. The roots of his music are firmly in the experimental traditions within jazz\, improvisation\, and Western art music practices. Kaiser is currently Associate Professor of Music Technology and Composition at the University of Central Missouri. \nMore information at https://jeffkaiser.com/ \n  \nMinho Kang: The Letter\nThe Letter is a work of consolation created using an FFT Channel Vocoder with an Additive Synthesizer. \nHistorically\, the vocoder was developed during wartime to enable communication among allies. It reduces wideband speech to a narrower band for transmission and then reconstructs it at the receiver. In short\, a vocoder sends important words over distance and makes their faint traces audible again.\nAs a composer\, creating music is much the same: I keep listening to people and the world\, their voices. Then I compress\, interpret\, and reassemble those words in my own terms and offer them back as a piece.\nUnlike the vocoder’s original purpose\, in a time when war is no longer shocking news\, I wanted to use this technology to carry comfort. The lyrics come from a poem I wrote during my military service to endure a hard period (not in combat). This piece does not present a political agenda; it is a letter to anyone facing painful circumstances\, on any side\, in any degree. 
\nTechnically\, I aimed to design a vocoder with greater precision than a conventional channel vocoder. Instead of using bandpass filters\, I applied Fast Fourier Transform (FFT) analysis to collect more detailed and accurate amplitude information\, which allowed clearer rendering of vowel formants. This approach led to the creation of a Max for Live (M4L) FFT Channel Vocoder patch.\nI also developed an Additive Synthesizer M4L patch capable of producing a wide spectrum of sounds\, from pure sine waves to noise. When combined with the vocoder\, this synthesizer allows the clarity and harmonicity of speech to change according to the lyrics. Since the text relates to the transformation of light\, I used this Additive Synthesizer to achieve a tone painting that reflects those luminous changes. \nAbout the artist\nMinho Kang is a Korea-born composer and computer musician. His artistic interests\, which began in popular music and moved into contemporary music\, have expanded into electronic music at the intersection of technology and art. Drawing on introspective reflection and close observation of the world\, he brings diverse imaginings into his works.\nHis music has been presented at conferences and festivals including SEAMUS\, ICMC\, and the TurnUp Multimedia Festival. He completed his bachelor’s degree at Indiana University\, where he studied composition with Jeremy Podgursky\, Aaron Travers\, P. Q. Phan\, David Dzubay\, and Don Freund\, and electronic music with John Gibson and Chi Wang at the Center for Electronic and Computer Music. \n  \nEric Lyon: Moloch whose mind is pure machinery!\nAllen Ginsberg’s poem Howl was published in 1956\, the same year as the Dartmouth Summer Research Project on Artificial Intelligence. The two events portend seemingly incompatible futures that nonetheless are both with us now: a bursting forth of cultural chaos in an “armed madhouse”\, and the technocratic reduction of intelligence to code. 
The ritualistic\, repetitive rant about Moloch in Ginsberg’s poem inspired this performance\, a tone poem that derives its sounds from two main sources – AI-generated music and the OB-Xd virtual analog synthesizer VST plugin manipulated using the Slewable Utility for Random Parameters (SLURP) designed by the composer. The performance interface consists of a Korg nanoKONTROL2 unit and the Google MediaPipe face landmarker. \nAbout the artist\nEric Lyon is a composer and audio researcher focused on high-density loudspeaker arrays\, dynamic timbres\, virtual drum machines\, and performer-computer interactions. His audio signal processing software includes “FFTease” and “LyonPotpourri.” He has authored two computer music books: “Designing Audio Objects for Max/MSP and Pd\,” a guidebook for writing audio DSP code for live performance\, and “Automated Sound Design\,” which presents technical processes for implementing oracular synthesis and processing of sound across a wide domain of audio applications. He has written extensively about the possibilities of multichannel spatial audio. In 2016–17\, Lyon was guest editor of Computer Music Journal for volumes 40(4) and 41(1)\, covering various aspects of High-Density Loudspeaker Arrays (HDLAs). \nIn 2015–16\, Lyon architected both the Spatial Music Workshop and Cube Fest at Virginia Tech to support the work of other artists working with HDLAs. In 2025 he co-created the Spatial Audio Tidepool to provide technical instruction for creative uses of high-density loudspeaker arrays. Lyon’s compositional work has been recognized with a ZKM Giga-Hertz prize\, a MUSLAB award\, the League ISCM World Music Days competition\, and a Guggenheim Fellowship. Lyon teaches in the School of Performing Arts at Virginia Tech and is a Faculty Fellow at the Institute for Creativity\, Arts\, and Technology. 
\n  \nIlia Viazov and Nicola Leonard Hein: Tidal Unit for Sonic Activities\nPerformance-presentation of tusa (Tidal Unit for Sonic Activities). Tusa is a framework for the Tidal Cycles live-coding environment that binds together different parts of the application in one Bash executable. It is an attempt to extend Tidal Cycles into a software DMI (digital musical instrument). It seeks to fulfill essential needs during performance with the environment\, keeping the setup very minimal yet sturdy\, while remaining modular and extendable. The framework gives the user access to the interpreter\, text editor\, reference window and server during live-coding practices.\nThe performance centers on live-coding improvisation with machine learning tools and spatialisation synthesis techniques. \nAbout the artists\nIlia Viazov (born in 1999 in Voronezh\, Russia) is a composer and sound artist working at the intersection of electronic music\, performance\, self-built instruments\, machine learning\, and software development. His personal and collaborative works have been presented at and supported by Ars Electronica Festival\, platformB Stuttgart\, and Darmstädter Ferienkurse. He is developing the framework tusa for the Tidal Cycles live-coding environment\, a terminal implementation that allows the user to run it locally\, interact fully with all parts of the environment\, and extend it. \n  \nNicola Leonard Hein: Rhythmic Traces | Twisted Electronics\nThe piece Rhythmic Traces | Twisted Electronics deals with the question of how the integration of the body and skin resistance into the circuit of an analog synthesizer (Buchla Music Easel)\, and the connection with a machine learning-based musical agent system (SuperCollider)\, can change the tonal and rhythmic fluidity of the instrument and develop it beyond its limits. For this piece\, Nicola Leonard Hein uses a unique circuit-bending controller that completely alters the musical reading of the 1970s Buchla Music Easel. 
Furthermore\, he uses a multi-effect unit programmed in SuperCollider and realized with a Bela microcomputer. Hein’s musical agent learns to interact musically\, creating the music in real time together with Hein on the synthesizer and developing the interaction between a human and a machine musical voice. The systemic economy of movement and the interaction with the AI musical agent create polyphonic rhythmic\, tonal\, and spatial structures. The piece focuses on the emergent Dances of Agency (Pickering). \nAbout the artist\nDr. Nicola L. Hein is a sound artist\, guitarist\, composer\, researcher\, programmer\, and professor of Sound Arts and Creative Music Technology at the University of Music Lübeck.\nHe works with A.I.-assisted human-machine interaction\, postdigital lutherie\, intermedia\, sound installations\, augmented reality\, network music\, and spatial audio. His works have been realised in more than 30 countries\, at festivals such as MaerzMusik Festival\, Sonica Festival\, and Experimental Intermedia. \n  \nDong Zhou: Found Violin x Aromantic Hobby \nFound Violin is an improvisation system that treats the violin as just one of many sound objects. In late 2024\, Dong Zhou began developing Aromantic Hobby\, a series of strap-on MIDI controllers. After a few prototypes\, the current controller features a bunny-shaped appearance and wirelessly transmits kinetic data from the wearer to control a chaotic synthesizer. With Found Violin played by the upper body and Aromantic Hobby worn on the lower body\, the musician plays a duo with themselves. \nAbout the artist\nDong Zhou is a composer-performer based in Hamburg. Zhou gained a B.A. in music engineering at the Shanghai Conservatory and an M.A. in multimedia composition at the Hamburg University of Music and Drama. 
Zhou has won several prizes\, including first prize at the 2018 ICMC Hacker-N-Makerthon\, a finalist place at the 2019 Deutscher Musikwettbewerb\, the Nota-n-ear Award 2022\, and a shortlisting for the 2025 Giga-Hertz Pop Experimental Production Award. Zhou has had works included in the ‘Sound of World’ Microsoft ringtones collection and has been commissioned by festivals and institutions such as the Shanghai International Art Festival\, ZKM Karlsruhe\, and the Stimme X Festival. Zhou is currently a doctoral candidate at ICAM of Leuphana University. \n  \nOlivier Jambois: Tokens & Strings: an improvisation between an electric guitarist and a local LLM\nThis performance explores real-time co-creation between a human performer and a machine\, specifically investigating the improvisational capabilities of Large Language Models (LLMs) within a musical context. The project originates from an inquiry into the potential of using established LLM architectures—notably the one behind ChatGPT—as responsive improvisational partners. \nA primary challenge in this research is the nature of the LLM: as these models are designed for symbolic processing rather than direct audio generation\, the system must bridge the gap between acoustic signals and semantic analysis. An architecture was developed where the electric guitar’s audio is captured and processed to extract high-level audio descriptors. These descriptors are then sent to the LLM\, which analyzes the performer’s intent and generates a symbolic rhythmic response. This response is mapped to a drum sequencer controlling kick\, snare\, and hi-hat patterns.\nTo address the inherent risks of cloud-based APIs in a live performance environment—such as latency and connectivity instability—this work uses a local deployment. While local models often feature a smaller parameter count\, the system has been optimized through careful prompt design and constraint-based logic. 
This ensures a meaningful rhythmic dialogue while minimizing inference time\, achieving a critical trade-off between algorithmic complexity and real-time musical reactivity. \nIn this performance\, the generative drumming output is routed through a RAVE (Real-time Audio Variational auto-Encoder) module\, developed by IRCAM. By applying neural re-synthesis via a percussion pre-trained model\, the system transforms these source samples into complex\, evolving textures\, moving beyond static playback toward a more sophisticated timbral exploration. Throughout the improvisation\, the guitar signal is processed through custom-designed Pure Data patches\, creating a personal sonic language that oscillates between raw strings and highly transformed textures\, seeking a constant state of flux between contrast and blending with the machine-generated environment. \nAbout the artist\nOlivier Jambois is a guitarist\, composer\, and researcher working at the intersection of acoustic tradition\, analog electronics\, and digital innovation. He holds a PhD in Condensed Matter Physics and a Master’s degree in jazz and modern music\, a dual background that defines his analytical yet avant-garde approach to music.\nAfter winning the Jazz à Vienne national competition in 2012\, his debut Les Composantes Invisibles earned “Revelation” honors from Jazz Magazine. Since then\, he has contributed to over 16 albums on labels like Naïve and Underpool\, performing at major festivals including Nancy Jazz Pulsations and the Barcelona Jazz Festival.\nHis recent work bridges historical and cutting-edge technologies. A 2023 grant from the Generalitat de Catalunya supported his research into DIY magnetic tape echoes\, resulting in the solo album Self Tape Echoes (2024). In contrast\, his project Cosmogonies utilizes Pure Data on a Raspberry Pi for purely digital expression. He integrates these two worlds in live improvisations at festivals like LEM and MIRA. 
His 2025 release\, Eclosió\, featuring drummer Jim Black\, further establishes his influence in contemporary improvisation.\nAs a Professor and Researcher at ENTI-UB\, Jambois focuses on AI and generative systems. By synthesizing scientific methodology with contemporary creation\, he continues to push the boundaries of the electric guitar through custom-built hardware and computational tools. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/evening-concert-5b-lubeck/
LOCATION:Lübeck University of Music: Großer Saal\, Große Petersgrube 21\, Lübeck\, 23552\, Germany
CATEGORIES:15-05,Concert,Excursion to Lübeck,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260516T110000
DTEND;TZID=Europe/Amsterdam:20260516T173000
DTSTAMP:20260423T153918
CREATED:20260421T183226Z
LAST-MODIFIED:20260423T124052Z
UID:10000188-1778929200-1778952600@icmc2026.ligeti-zentrum.de
SUMMARY:Listening Room 2
DESCRIPTION:A Voice Intolerable to Heaven and Earth\nFaming Qin \nCollapse\nVarun Kishore \nPaper Wreck\nChun-Han Huang \nRedDeadRouletteReconstruction\nNattakon Lertwattanaruk \nSingularity\nSilvia Matheus \nSpawn\nPaul Oehlers \nTekstil\nNayaka Adinata and Muhammad Welderahmat \nThe Unfinished Drum\nKeming Zeng \nThe Voice of the Tree\nYufen Qiu \nThe Lake Bell\n璟 李   \n 
URL:http://icmc2026.ligeti-zentrum.de/event/listening-room-2-5/
LOCATION:Hamburg University of Technology\, Building A (A 0.14)\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:16-05,Listening Room,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260516T110000
DTEND;TZID=Europe/Amsterdam:20260516T173000
DTSTAMP:20260423T153918
CREATED:20260421T190101Z
LAST-MODIFIED:20260421T190101Z
UID:10000179-1778929200-1778952600@icmc2026.ligeti-zentrum.de
SUMMARY:Listening Room 1
DESCRIPTION:Comfortable Distance\nGiovanni Crovetto \nDisappearing\nMatteo Tomasetti\, Francesco Casanova\, Andrea Veneri\, Vili Pääkkö and Andrea Strata \nHokkaido Snow Soundscape\nZiwei Yang \nImpulse Impromptu III\nTolga Yayalar \n4-body Interactions (7’34’’)\nLeonidas Spiliopoulos \nArchitecture éphémère\, acousmatic ambisonic piece\nNicola Giannini \nConcerto for Piano and Loudspeaker Orchestra\nNeal Farwell \nCorium II\nMathieu Lacroix \nGott\nRikhardur H. Fridriksson \nMatters 10\nDaniel Mayer \nOBSess\nAllison Ogden \nOscillation of Life\nJan Jacob Hofmann \nNoType\,  algorithmic 3D audio processing\nVilbjørg Broch Phe \nSonic Fragmentation – a fixed media multichannel piece\nDaniel Gomes \nCalling in “Raumforderungen” 8-channel diffusion work\nAleksandar Zecevic and Kiran Bhumber \nFluidante. A quadrophonic recording from the Latent Russando framework\nMartin Heinze \nOdradek\nCristian Gabriele Argento
URL:http://icmc2026.ligeti-zentrum.de/event/listening-room-1-5/
LOCATION:Hamburg University of Technology\, Building A (A 0.18)\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:16-05,Listening Room,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260516T133000
DTEND;TZID=Europe/Amsterdam:20260516T150000
DTSTAMP:20260423T153918
CREATED:20260421T163825Z
LAST-MODIFIED:20260423T124147Z
UID:10000105-1778938200-1778943600@icmc2026.ligeti-zentrum.de
SUMMARY:Lunch Concert 6A
DESCRIPTION:Concert 6A forms a bridge between the distant past and a radically digital future. It is a search for the edges of the audible—whether in the almost imperceptible silence of a saxophone\, in the raw 1-bit synthesis of early computer music pioneers\, or in the lament of the Gorgons on a reconstructed ancient instrument. \n  \nProgram Overview\nApparizione del Silenzio \nYisong Piao \ntibone\nMiller Puckette and Kerry Hagan \nA Hatful of Feathers \nMarc Ainger \nchime\nTiffany Skidmore and Patti Cudd \nCyanotypes\nPatti Cudd \nGorgons’ Cry\nKonstantinos Karathanasis \n  \nAbout the pieces & artists\nYisong Piao: Apparizione del Silenzio  \nApparizione del Silenzio does not contain “silence” itself—at least\, not in the conventional sense of an absence of sound. Instead\, it is built upon sounds that lie at or beyond the threshold of human perception: vibrations outside the usual spectrum\, the friction between air and metal\, the dissipation of sound waves in space—those margins of sound that are ignored\, inaudible\, yet undeniably existent. The apparition of silence is therefore neither stillness nor emptiness\, but the manifestation of a presence perceived as silence. It is a non-sonic sound: at the limit of hearing\, silence ceases to signify absence and becomes another mode of existence.\nThe piece is written for tenor saxophone and electronics\, combining fixed media with live processing of hyper-amplified micro-sounds from the instrument. Semi-improvised passages invite the performer to enter the interstice between sound and silence\, where breath\, touch\, and hesitation become part of an almost inaudible voice.\nThe generative logic of the work is not the appearance of silence\, but its presentation: silence here is not what is conventionally called “silence\,” but a subject that reveals itself through its auditory traces. \nAbout the artist\nYisong Piao (b. 
1992\, China) is a Seoul-based composer specializing in electroacoustic and instrumental music. His works have been presented at ICMC 2023 (China)\, ICMC 2024 (Korea)\, and ICMC 2025 (Boston). He is a researcher at the Center for Research in Electro-Acoustic Music and Audio (CREAMA)\, focusing on microtonality and algorithmic approaches in composition. \n  \nMiller Puckette and Kerry Hagan: tibone \nMiller Puckette and Kerry Hagan present an improvisation on 1-bit synthesizers. Rather than pursuing chip tunes or similarly low-bit music\, the duo navigates a range of possible timbres in an exploratory performance. \nAbout the artists\nMiller Puckette and Kerry Hagan began focused collaborations on academic and musical projects in 2014. Together their duo has performed in North America and Europe. They have introduced novel synthesis algorithms through new performances. Their work explores timbre\, spatialization\, real-time computer processes\, algorithms\, interaction design\, performance practice\, and performance systems. \n  \nMarc Ainger: A Hatful of Feathers\nIn A Hatful of Feathers for Alto Flute and Computer\, the flutist creates music in real time that is informed by expanded possibilities\, using traditional and extended techniques. The work builds on William Sethares’ research into spectra and tuning.\nThe computer analyzes the pitch\, amplitude\, and spectral content of the flute playing (including all of the sounds created by the mechanism of the flute\, such as the sound of the keys)\, interacting with the live sound in various ways (stretching/contracting and/or spatializing various spectra\, retuning spectra\, granulating and creating micro-glissandi\, etc.). We use a custom Max/MSP patch built on well-known spectral and spatial techniques\, along with some extensions of these techniques. \nAbout the artist\nMarc Ainger (USA) has developed an idiosyncratic body of work that embraces a wide range of music/sound and music/sound-making. 
He is interested in the relationships between the real and the imagined – the ways in which the visceral world of sound and sound production inform our imagined worlds of sound\, and the ways our imagined worlds\, in turn\, inform our concrete experiences.\nPerformances of Ainger’s works have included the New York Philharmonic Biennial; the INA/GRM; the Royal Danish Ballet; CBGB; Late Night with David Letterman; the Goethe Institute; the American Film Institute; SIGGRAPH; the Palais de Tokyo (Paris); FolkwangWoche NeueMusik (Essen); Gaggego! (Gothenburg); the Joyce Theater (New York); Guangdong Modern Dance; and New Circus artists. Awards include the Boulez/LA Philharmonic Composition Fellowship\, the Irino International Chamber Music Competition\, Musica Nova Prague\, Meet the Composer\, and the Esperia Foundation. \n  \nTiffany Skidmore and Patti Cudd: chime\nPatti Cudd performs “chime\,” for percussion and fixed media\, composed for her by Tiffany M. Skidmore. “chime” requires 2 snare drums\, 6 crotales\, 12 distinctive beaters\, and 2 Bluetooth bone-conduction wireless speakers. Each speaker is affixed to the underside of one snare drum. All 6 crotales are placed on a single drumhead. The performer plays a complex series of patterns moving between bare drumhead and unmoored crotales using combinations of beaters. Mechanistic\, unpitched patterns begin to merge with melodic\, pitched elements that sometimes bend to ultimately become a metallic wall of overtones as the line between electronic and live acoustic sound comes into and out of focus. The piece was premiered by Cudd at the VT New Music + Technology Festival in May 2023. ICMC represents the premiere of a revised version of the electronics and the first time Patti will use the bone-conduction speakers that were originally intended for this piece. \n“chime” happens on three planes: a long\, liquidating chiasmus meets two rotating pitch constellations. 
\nAbout the artists\nTiffany M. Skidmore\, composer and Associate Director of the Mizzou New Music Initiative\, has held faculty positions at the University of Minnesota\, Virginia Tech\, and the University at Buffalo (SUNY)\, where from 2023 to 2024 she held the Birge Cary Chair in Music Composition. In 2025\, she was Visiting Professor at McGill University\, in residence at the Centre for Interdisciplinary Research in Music Media and Technology. She is Co-Founder\, Executive Director\, and Artistic Director of 113\, producing the Twin Cities New Music Festival\, guest residencies\, and concerts throughout the world. \nDr. Patti Cudd is active as a percussion soloist\, chamber musician\, and educator. Patti is a member of the acclaimed new music ensemble Zeitgeist. Her other diverse performing opportunities have included CRASH\, the Minnesota Contemporary Ensemble\, Minnesota Dance Theatre\, and the Borrowed Bones Dance Theater.\nAs an active performer of the music of the 21st century\, she has given concerts and master classes throughout North America\, Asia\, Europe\, and South America. As a percussion soloist and chamber musician\, she has premiered well over 200 new works. \n  \nPatti Cudd: Cyanotypes \nCyanotypes\, with their characteristic white imprints on a deep blue field\, transcend mere photographic representation; they serve as blueprints that reveal the essence of objects through their negative form. This transformative process redefines the concept of the “object\,” not as a fixed entity\, but as an echo\, a trace\, or an imprint of presence. In this conceptual framework\, cyanotypes become a metaphor for the translation of physical and temporal phenomena into abstracted impressions. Inspired by this principle\, Cyanotype’s Five Studies approaches the vibraphone not through its direct sound or physicality\, but as a series of rhythmic imprints — sonic blueprints that capture the vibraphone’s articulate and resonant characteristics. 
\nThe vibraphone is renowned for its shimmering sustain\, dynamic control\, and ability to produce both melodic and percussive textures. In Cyanotype’s Five Studies\, these qualities are refracted through the instrumental language itself\, emphasising the vibraphone’s unique ability to articulate rhythmic patterns with clarity and tonal nuance. This work creates a rich sonic landscape for exploring how vibraphone rhythms can be abstracted\, deconstructed\, and re-imagined as imprints within sound. \nEach study acts as a sonic cyanotype\, distilling the essential rhythmic and timbral gestures of the vibraphone into textures that evoke the original instrument’s expressive potential without relying on straightforward replication. The vibraphone’s capacity for sustained tones and nuanced dynamic shading allows for a complex rendering of rhythmic articulation\, translating percussive strikes into lingering tonal shapes. The five studies function collectively as a blueprint series—each revealing different facets of the vibraphone’s character through a process of mediation\, exploring articulation\, rhythmic complexity\, timbral contrast\, and dynamic variation. \nBy conceptualising the work as an imprint rather than a direct transcription\, the piece invites listeners to reconsider the relationship between source and representation. It challenges traditional notions of musical interpretation by emphasising the transformative potential of the vibraphone to embody and reinterpret its own characteristic sound patterns. The blue-white dichotomy of the cyanotype process parallels the interplay between presence and absence in sound—notes articulated and decayed\, rhythm asserted and refracted\, the physical gesture and its sonic echo. \nUltimately\, Cyanotype’s Five Studies proposes a dialogue between visual and auditory art forms\, grounded in the shared concept of imprinting. 
Just as the cyanotype renders the visible object in reverse contrast\, this work explores how musical objects—rhythms and timbres—can be refracted through mediation to reveal new expressive dimensions. The vibraphone becomes both subject and medium\, transforming its distinctive voice into a series of articulate\, resonant imprints\, inviting a deeper engagement with the ephemeral nature of sound and the processes of artistic representation. \nAbout the artists\nPatti Cudd is an American percussionist\, educator\, and new-music advocate. A member of Zeitgeist and a professor at the University of Wisconsin–River Falls\, she specializes in contemporary percussion\, electroacoustic music\, and commissioning new works. Cudd has performed internationally\, recorded widely\, and collaborated with leading composers to expand the modern percussion repertoire. \nElainie Lillios is an American composer whose music explores sound\, space\, and the physical experience of listening. Her works often blend acoustic instruments with electronics\, field recordings\, and subtle timbral shifts. Lillios’s music has been performed internationally and is known for its immersive\, textural quality and imaginative use of resonance and sonic detail. \n  \nKonstantinos Karathanasis: Gorgons’ Cry\nThis programmatic composition is inspired by the 12th Pythian Ode\, written by the Ancient Greek poet Pindar in honor of a formidable aulos player. When Perseus\, aided by the goddess Athena\, beheaded sleeping Medusa\, the only mortal of the three sister Gorgons\, the two immortal Gorgon sisters\, Stheno and Euryale\, woke up\, realized the crime\, and chased the culprit with terrible cries and laments. Athena listened to the Gorgons’ cries and created the aulos\, a double-pipe\, double-reed wind instrument\, to imitate them.\nIn contrast to the ancient poet\, and profoundly stirred by ongoing contemporary reports of femicides\, the composer interprets this myth from a feminist perspective. 
Medusa is portrayed as a tragic victim of patriarchy\, and the Gorgons cry out in extreme anger\, mourning the lost beauty of their sister.\nToday\, archaeomusicologists study fragments or entire pieces of excavated auloi from various sites and eras to recreate exact replicas and learn more about the sounds and performing techniques of this long-lost instrument. This piece is based on a Pydna aulos\, an instrument entombed in Macedonia\, Greece\, in the second half of the 4th century BCE. Melodic materials derive from the archaic Spondeion scale that was used to accompany certain religious processions.\nThe computer alters the aulos sound in real time based entirely on custom combinations of variable delay and FFT algorithms\, without using any prerecorded materials. Gorgons’ Cry is the first composition in the modern repertory involving aulos and live electronics. \nAbout the artists\nAs an electroacoustic composer\, Konstantinos Karathanasis draws inspiration from modern poetry\, artistic cinema\, abstract painting\, mysticism\, Greek mythology\, and the writings of Carl Jung. His compositions have been performed at numerous festivals and have received awards in international competitions\, including Musica Nova\, SIME\, SEAMUS/ASCAP\, Música Viva\, and Bourges. Recordings of his music are released by SEAMUS\, ICMA\, Musica Nova\, Innova\, Equilibrium\, and HELMCA. In March 2026\, Ravello Records released his solo album Resonant Mythologies with the support of the University of Oklahoma. Konstantinos holds a Ph.D. in Music Composition from the University at Buffalo. He serves as Professor of Composition & Music Technology at the University of Oklahoma. More info at: http://karathanasis.org \nCallum Armstrong is an award-winning multi-instrumentalist specializing in Early Music. For over a decade\, Callum has devoted a great deal of his time to the revival of ancient Greek and Roman auloi. 
He has a YouTube channel\, “The Aulos Collective\,” dedicated to how auloi were made\, played\, and used\, produced in collaboration with the luthier Max Brumberg. Callum regularly performs internationally as a soloist and in various ensembles\, and works as a composer\, teacher\, and session musician for film and computer games. Recently\, Callum was the subject of the documentary ‘Callum Armstrong the Aulete’\, which won 1st prize at the Ierapetra International Film Festival. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/lunch-concert-6a/
LOCATION:Hamburg University of Technology\, Building I\, Audimax 2\, Denickestraße 22\, Hamburg\, 21073\, Germany
CATEGORIES:16-05,Concert,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
END:VCALENDAR