BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//ICMC HAMBURG 2026 - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:http://icmc2026.ligeti-zentrum.de
X-WR-CALDESC:Events for ICMC HAMBURG 2026
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Amsterdam
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20250330T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20251026T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20260329T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20261025T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20270328T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20271031T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260510T193000
DTEND;TZID=Europe/Amsterdam:20260510T220000
DTSTAMP:20260505T121343Z
CREATED:20260421T081038Z
LAST-MODIFIED:20260503T183216Z
UID:10000070-1778441400-1778450400@icmc2026.ligeti-zentrum.de
SUMMARY:Opening Concert
DESCRIPTION:Please note: Since admission to the Elbphilharmonie is only possible with a ticket\, registration via Converia is required for the opening concert.\nThe opening concert is open to the public. Those without a conference pass can purchase a concert ticket here. \n\nProgram Overview\nIntroduction \nAlexander Schubert – SCANNERS (2013)\nfor string quintet\, choreography\, and electronics (12 min) \nNicole Brady – Ricochet (World Premiere 2026)\nfor chamber orchestra (10 min) \nAnthony Paul De Ritis – Filters (2015 / 2026)\nfor alto saxophone\, string orchestra\, and live electronics (10 min) \nIntermission (25 min) \nAigerim Seilova / Steffen Lohrey – Breath Mechanics (World Premiere 2026)\nfor two soprano saxophones\, string ensemble\, and live electronics (10 min) \nClarence Barlow – Im Januar am Nil (1984)\nfor ensemble (approx. 25 min) \nShort break (10 min) \nClosing & Conference Information (15 min) \n  \nPerformers\nEnsemble Resonanz – strings\nAsya Fateyeva – saxophone\nVlatko Kučan – saxophone\nJohn Eckhardt – double bass\nDulguun Chinchuluun – piano\nLin Chen – percussion \nConductor\nFriederike Scheunchen \nFind out more about the musicians playing at ICMC HAMBURG 2026 here.  \n  \nAbout the pieces\nAlexander Schubert: SCANNERS (2013)\nfor string quintet\, choreography\, and electronics \nSCANNERS deals with the physical qualities of instrumentalists in electro-acoustic music. It is a choreographed composition that treats movement as being as important as sound. The string ensemble turns into a performing machine. The main focus is on the movement of scanning – both in the interaction of bow and instrument when producing sound and in purely artificial gestures. There is no difference between musically necessary and choreographically determined movement. 
The piece can be seen as a comment on the relationship of man to digital content: the direct consequences of action can no longer be explained by simple cause-and-effect principles\, and the musicians become puppets or at least part of a complex machine. At the same time the piece offers a special focus on the highly specialized genre of the string orchestra: the mechanization emphasizes the accuracy of the interpreter and the elegance of the traditional movement\, here staged independently of the production of sound.\nSCANNERS belongs to a series of compositions that deal with physicality\, such as Point Ones with its interactive conductor or LaPlace Tiger with its sensor-wired drummer. \nAbout the composer\nAlexander Schubert (b. 1979) studied bioinformatics and multimedia composition. He is a professor at the Musikhochschule Hamburg. Schubert’s work explores the border between the acoustic and electronic world. In music composition\, immersive installation and staged pieces he examines the interplay between the digital and the analogue. He creates pieces that realize test settings or interaction spaces that question modes of perception and representation. Continuing topics in this field are authenticity and virtuality. The influence and framing of digital media on aesthetic views and communication is researched from a post-digital perspective. Recent research topics in his works include virtual reality\, artificial intelligence and online-mediated artworks. Schubert is a founding member of ensembles such as “Decoder”. His works have been performed more than 700 times in the last few years by numerous ensembles in over 30 countries. \n  \nNicole Brady: Ricochet (World Premiere 2026)\nfor chamber orchestra and live electronics \nRicochet explores the idea of deviation from an expected path after an initial impact\, leading to new directions. 
Inspired by the ricochet bowing technique\, this concept unfolds both physically and metaphorically within the ensemble.\nA responsive electronic system listens to the orchestra and generates a parallel sonic layer. Energetic passages produce scattered\, percussive textures\, while quieter material leads to dense\, sustained sound fields. The system alternates between listening and generative modes\, interacting closely with the performers.\nSubtle references to composers such as Couperin\, Ravel\, and Mozart connect historical material with contemporary sound\, while the electronics act as an additional\, autonomous voice within the ensemble. \nAbout the composer\nNicole Brady is an award-winning composer and creative director whose work spans concert music\, immersive installation\, and video game franchises including Final Fantasy\, Tekken\, and Valkyria Chronicles. Her work has been honoured by the Peabody Awards and IndieCade\, and her immersive sound album Lost Palace was released with the Royal Scottish National Orchestra. Recent commissions and performances include the Omega Ensemble\, Melbourne Symphony Orchestra\, Flinders Quartet\, and Lyris Quartet. As creative director of WLDR studio\, her immersive multisensory works have reached over 20\,000 participants across Illuminate Adelaide and Spier Light Art Festival. Nicole is a researcher at the Melbourne Conservatorium of Music and recipient of the Director’s Award for Exceptional Doctoral Research. \n  \nAnthony Paul De Ritis: Filters (2015 / 2026)\nfor alto saxophone\, string orchestra\, and live electronics \nOriginally composed for alto saxophone and electronic playback\, Filters explores the layering and spatial diffusion of sound. 
Recorded saxophone material creates a “second” voice\, blending with the live soloist into a unified\, resonant field.\nIn this version for saxophone\, string orchestra\, and multi-channel electronics\, the ensemble extends these layers\, producing a rich interplay between live instruments and their electronically mediated “shadows.”\nThe solo saxophone remains at the expressive center\, while the surrounding textures generate depth\, movement\, and an immersive spatial experience. \nAbout the composer\nDescribed as a “genuinely American composer” (Gramophone)\, “a bit of a visionary” (Audiophile Audition)\, and “bracingly imaginative” (The Boston Globe)\, Anthony Paul De Ritis has received performances around the world\, including at Lincoln Center\, Beijing’s Yugong Yishan\, Seoul’s KT Art Hall\, the Italian Pavilion at the 2015 World Expo in Milan\, and UNESCO headquarters in Paris. \nDe Ritis’s 2012 release “Devolution” by the GRAMMY® Award-winning Boston Modern Orchestra Project\, featuring Paul D. Miller aka DJ Spooky as soloist\, was described as a “tour de force” (Gramophone)\, his “Pop Concerto” (2017) featuring Eliot Fisk was lauded as “a major issue of American music” (Classical CD Review)\, and his “Electroacoustic Music – In Memoriam: David Wessel” (2018) was cited among the “Best of 2018” in the electronic music category (Sequenza 21). \nHe holds a Ph.D. from the University of California\, Berkeley\, and is Professor at Northeastern University\, where he co-founded the music technology program. \n  \nAigerim Seilova and Steffen Lohrey: Breath Mechanics (World Premiere 2026)\nfor two soprano saxophones\, string ensemble\, and live electronics \nThis work is a composition for two soprano saxophones\, string ensemble (4.4.4.2)\, and 8.1 live electronics\, submitted for the ICMC Special Call 1: Ensemble Resonanz. 
The piece serves as a spectral dialogue with Clarence Barlow’s Im Januar am Nil\, adopting his strategies of timbral fusion and hocketing but transposing them into the age of Machine Learning. The central material is derived from “ChordsNest\,” a multiphonics palette extension for MaxScore\, which is repurposed here as a training set for a neural network. The compositional core is an “AI Translation Error” in which the model was tasked with reconstructing the cylindrical bore spectra of the digital archive using the conical bore of the live saxophones and the acoustic textures of the string ensemble. \nThe resulting score is a transcription of the AI’s “hallucinations\,” where the ensemble physically replicates the digital artifacts of the style transfer process. The 8.1 electronics mediate this through a dual-role feedback loop. They function first as a synthesized “externalized memory” of the source spectra and second as a live inferencing engine that generates “retrospective hypotheses” by attempting to recover source-states from the acoustic performance. This architecture stages a recursive friction between the explicitly presented digital archive and the machine’s error-prone attempt to reconstruct it through physical sound. \nAbout the composers\nHamburg-based composer Aigerim Seilova integrates acoustics\, electronics\, and interactive media. A doctoral researcher at HfMT Hamburg\, she has had her works performed by Ensemble Modern and the Norwegian Radio Orchestra at festivals such as Tanglewood and the Chelsea Music Festival. Awards include the Hindemith Prize\, Leonard Bernstein Fellowship\, and Radio France Prize. She serves as Deputy Chair of the DKV Hamburg\, promoting contemporary music and interdisciplinary exchange. \nBorn in Gießen in 1987\, Steffen Lohrey studied Digital Media with a focus on sound in Darmstadt and Multimedia Composition at the Hamburg University of Music and Drama (HfMT Hamburg). 
His work exists at the intersection of composition\, installation\, and code. He has been involved in a wide range of projects\, including Picadero with the Haa Collective (presented at venues such as Deltebre Dansa and the Fusion Festival)\, Crawlers with Alexander Schubert (ZKM Karlsruhe)\, and Shibboleth by Aigerim Seilova at HfMT Hamburg. His work and collaborations have been featured at Blurred Edges\, the Teatre Principal Terrassa\, and the GREC Festival\, among others. In addition\, Steffen Lohrey works as an audio engineer and sound designer in Hamburg. \n  \nClarence Barlow: Im Januar am Nil (1984)\nfor 2 soprano saxophones (1st+clarinet\, bass clarinet)\, 4 violins\, 2 celli\, double bass\, piano\, percussion  \nIm Januar am Nil was written in 1981 for Ensemble Köln – the instrumentation: two soprano saxophones\, percussion (five Japanese temple bells\, a Korean gong\, a crotale\, a cymbal\, a side drum and a bass drum)\, a piano\, four violins\, two cellos and a double-bass. In 1984 the completely revised piece was premiered in Paris by the Ensemble l’Itinéraire.\nThrough the piece runs a constantly repeated melody\, increasing both in length and density – new tones appear in the expanding gaps\, first in a purely auxiliary function\, but gradually harmonically rivalling the older tones. A single note at the start develops into a flowing melody moving from transparent tonality through multitonality to a dense self-destructive atonality.\nAt first the melody is played almost inaudibly by the bass clarinet\, amplified by overtones heard as natural harmonics in the strings: the resultant timbre is phonetic\, based on a Fourier analysis of German sentences (as for instance the title itself) containing only harmonic spectra\, namely liquids\, nasals and semi-vowels. Ideally these “scored Fourier-synthesized” words should be comprehensible\, but an ensemble of seven strings can only approximate them. 
After a few minutes of bass clarinet and strings\, the piano enters in an explicit rendition of the melody\, developing it as described above and timbrally coloured by “hocketing” soprano saxophones. The double bass now also explicitly plays the melody without further developing it – in a “frozen” state it is contrasted with the piano part and slows down during further repetitions due to its increasing length. \nAbout the composer\nClarence Barlow (1945–2023) was a composer and pioneer of computer music\, born into the English-speaking minority of Calcutta (now Kolkata)\, India. He received his early education there\, studying piano\, music theory\, and natural sciences\, and began composing at the age of twelve. After graduating in science from the University of Calcutta in 1965\, he worked as a conductor and teacher of music theory at the Calcutta School of Music.\nIn 1968\, Barlow moved to Cologne\, where he studied composition and electronic music at the Hochschule für Musik\, alongside studies at the Institute of Sonology in Utrecht. During this period\, he began using computers as a compositional tool\, becoming one of the early figures to explore algorithmic and computer-assisted composition.\nFrom the 1980s onward\, Barlow played a central role in shaping the field of computer music. He was closely associated with the Darmstadt Summer Courses\, where he directed computer music activities for over a decade\, and was a co-founder of GIMIK (Initiative Musik und Informatik Köln). He also held numerous academic positions across Europe\, including at the Royal Conservatory in The Hague\, where he served as Professor of Composition and Sonology and later as Artistic Director of the Institute of Sonology.\nFrom 2006 until his retirement\, Barlow was Corwin Professor of Composition at the University of California\, Santa Barbara. 
His work is characterized by a unique synthesis of mathematical rigor\, cultural hybridity\, and innovative approaches to musical structure\, making him one of the most distinctive voices in contemporary music. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/opening-concert/
LOCATION:Elbphilharmonie Hamburg\, Recital Hall\, Platz der Deutschen Einheit\, Hamburg\, 20457\, Germany
CATEGORIES:10-05,Concert,Music,Special Event
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T110000
DTEND;TZID=Europe/Amsterdam:20260511T173000
DTSTAMP:20260505T121343Z
CREATED:20260421T181209Z
LAST-MODIFIED:20260504T080556Z
UID:10000184-1778497200-1778520600@icmc2026.ligeti-zentrum.de
SUMMARY:Listening Room 2
DESCRIPTION:Fixed Media: Program Overview\n430-+\nAyako Sato \nLunar Current\nChufan Zhang\, Jun Wang and Qi Liu \nSawa\nAkiko Hatakeyama \nTake Me Back to Indonesia\nBoyi Bai \nVentward\nEd Osborn \nWoody\nAdrian Kleinlosen \nZen to Hearth\nYu Linke \n  \nAbout the pieces & artists\nAyako Sato: 430-+\nThe fundamental pitch of the 15th bamboo tube of the Shō\, “kotsu\,” corresponds to the current standard pitch of 430Hz in Gagaku. This acousmatic piece involves listening to 430Hz\, its harmonics\, the sounds that deviate from it\, and unreliable text about the Shō generated by AI. Perhaps. \nSho performance: DEGUCHI Miki \nAbout the artist\nAyako Sato is a composer\, musician\, artist\, and researcher working mainly in the field of electroacoustic music. Her works have been presented at international conferences and festivals (ICMC\, SMC\, NYCEMF\, ISMIR\, WOCMAT\, etc.) and won awards in international competitions (Prix Presque Rien\, Destellos Competition\, International UPISketch Competition\, etc.). She received her Ph.D. from Tokyo University of the Arts in 2019 for her research on Luc Ferrari’s works. After working as a part-time lecturer at Tamagawa University\, Osaka University of Arts\, Tokyo Denki University\, and Shobi Music College\, she is a lecturer at Shizuoka University of Art and Culture starting April 2025. \n  \nChufan Zhang\, Jun Wang and Qi Liu: Lunar Current\n“The ripples of moonlight surge and finally settle into stillness in the current. The trembling of electronic waves all find their peaceful end in the moonlit night.” – The pulses of electronic sound eventually merge into the gentle waves of moonlight\, just as the surges of electric current fade into the breath of the night. This work takes electronic waveforms simulating electric current as its core sound material. 
Through modulation and filtering processing in a digital audio workstation\, it employs techniques such as synthesizer wave shaping\, ambient reverb stacking\, and low-frequency oscillation to create auditory characteristics that blend the texture of electric current with the haziness of a moonlit night. Lunar Current is an immersive auditory experience. It attempts to capture not the moonlight itself\, but the sensory critical state where the quiet night and electronic current intertwine. At this moment\, the technological rhythms of electronic sound and the ethereal silence of the moonlit night together construct a gentle echo of a whispered conversation with the starry night. \nAbout the artists\nChufan Zhang (born in July 2006) is a sophomore at the Communication University of Zhejiang\, and also a young creator who delves into the fields of creative design and blockchain applications. Her representative works include Xuan and Mo Zang. Among them\, Xuan won the second prize in the East China Division of the National University Students Blockchain Competition\, and Mo Zang was awarded the third prize in the Future Designer Competition. During her studies at the university\, she not only won the first-class scholarship of the university but also was awarded the titles of “Merit Student” and “Outstanding Social Worker”\, demonstrating solid professional skills and cutting-edge innovative thinking in both academic research and competition practice. \nJun Wang  \nQi Liu \n  \nAkiko Hatakeyama: Sawa\nIt’s neither close nor far\, neither happened nor never happened. This is a short piano-and-electronics piece that captures a moment in an unfamiliar place. \nAbout the artist\nAkiko Hatakeyama is a composer\, performer\, and artist of electroacoustic music and intermedia. Akiko’s research focuses on realizing her ideas of relations between the body and mind into intermedia works\, often in conjunction with building customized instruments/interfaces. 
It is a form of nonverbal communication with her inner self and with the environment\, including the audience. Expression through sound and performance has a therapeutic effect for her\, helping her process memories and trauma. Her work has been presented internationally at various venues and festivals in the U.S.A.\, Canada\, Chile\, England\, Ireland\, Portugal\, New Zealand\, China\, South Korea\, and Japan. Selected awards include the Best Performance Award at the NIME International Conference\, the Audio-Visual Composition award at the ICMA Showcase: Asia\, the George A. and Eliza Gardner Howard Foundation Fellowship\, and the MacDowell Fellowship. Akiko obtained her B.A. in music from Mills College and her M.A. in Experimental Music/Composition at Wesleyan University and completed her Ph.D. in the MEME program at Brown University. Her mentors include Alvin Lucier\, Anthony Braxton\, Ronald Kuivila\, Maggi Payne\, Chris Brown\, John Bischoff\, James Fei\, and Butch Rovan. She is currently an associate professor of Music Technology at the University of Oregon. \n  \nBoyi Bai: Take Me Back to Indonesia\nThis work is rooted in a field recording made in Madobag Village\, Mentawai Islands\, Indonesia\, capturing children playing near an old well. As a sonic memory\, it inspired the composer to reflect on the contrast between fleeting moments of travel serenity and the pressures of everyday life. The work explores the tension between two acoustic worlds. It opens with the calm of the island\, employing gentle drones and textures to construct a dreamlike space between the external environment and internal memory\, reimagining how memories emerge in times of longing. Sharp phone alarms and daily noises then shatter this tranquil soundscape\, marking the collapse of the imagined realm. In the end\, the work maintains an open\, unresolved narrative tension\, oscillating between memory and the present. 
\nAbout the artist\nBoyi Bai is a composer and sound artist specialising in field recording\, soundscape composition and interactive VR spatial audio\, whose practice-led works transform environmental sound into immersive auditory spaces while exploring the intrinsic relationships between place\, memory and media. His works have been widely presented at internationally acclaimed festivals\, art exhibitions\, and radio programmes\, including BBC Radio 6\, TagTEAMS 2026\, MA/IN Festival\, SOUND/IMAGE Festival\, MANTRA\, PAYSAGES | COMPOSÉS Festival\, and the San Francisco Tape Music Festival\, building an extensive exhibition profile in the global fields of sound art and electroacoustic music. His distinctive artistic approach has been recognised with the Gold Award in the Electronic Acousmatic Music category at the 6th Denny Awards Electronic Music Competition\, a shortlist for the Sound of the Year Awards 2024\, and other internationally recognised professional honours. \n  \nEd Osborn: Ventward\nVentward is built from recordings of several performances using tabletop guitar and electronics which were edited into a single work. It explores a series of sound states to produce a shifting and evolving cluster of sound\, one that gradually expands its tonality and frequency range. As it does so it focuses on distilling the acoustic field down to its core textures of processed and re-processed sounds. The piece also explores a structural space that exists between live improvisation and studio composition. \nAbout the artist\nEd Osborn (1964) works with many forms of electronic media including installation\, video\, sound\, and performance. 
He has presented his work at the San Francisco Museum of Modern Art\, the singuhr-hörgalerie (Berlin)\, the Berkeley Art Museum\, Artspace (Sydney)\, the Institute of Modern Art (Brisbane)\, the ZKM Center for Art and Media (Karlsruhe)\, Kiasma (Helsinki)\, MassMOCA (North Adams)\, the Yale University Art Gallery\, and the Sonic Arts Research Centre (Belfast). Osborn has received grants from the Guggenheim Foundation\, the Creative Work Fund\, and Arts International and been awarded residencies from the DAAD Artists-in-Berlin Program\, the Banff Centre for the Arts\, Elektronmusikstudion (Stockholm)\, STEIM (Amsterdam)\, and EMPAC (Troy\, NY). He is Professor of Visual Art and Music at Brown University. \n  \nAdrian Kleinlosen: Woody\nSound synthesis and spatialization generated with Csound\, voices with espeak-ng\, mixed in Pro Tools. Text based on a dialogue from a famous movie. \nAbout the artist\nAdrian Kleinlosen is a composer working with instrumental\, vocal\, and electronic music. His work focuses on structure\, rhythm\, and form\, often based on the superposition of independent musical layers and processes rather than linear development. Questions of temporal organization and formal articulation play a central role in both his acoustic and electronic works. In his electronic music\, Kleinlosen composes algorithmically\, using a range of software environments and programming languages. Computational tools are integral to his compositional thinking and are used to design musical structure\, temporal processes\, and formal relationships across different media. Kleinlosen holds degrees in composition and musicology and received a doctorate (Dr. phil.) for research on musical structure and form in contemporary music. In addition to his compositional work\, he has been active as an educator and lecturer in composition\, music theory\, and artistic research. 
\n  \nYu Linke: Zen to Hearth\nThis piece uses temple bells as the core sampling material\, with the theme of creating an auditory journey from spiritual seclusion to facing reality. “Zen” represents spiritual seclusion\, while “Hearth” represents the mundane hustle and bustle of the world. The original intention is to escape from reality and construct an ideal world. At the beginning\, the clear bell ringing\, accompanied by minimalist electronic tones\, unfolds\, depicting a secluded ideal world of Zen\, where the creator briefly withdraws from the chaos of the mundane world and escapes. As the melody progresses\, the echoes of the bells gradually weaken\, and concrete electronic rhythms and low-frequency textures gradually enter\, symbolizing that the ideal Zen space is gradually penetrated by the reality of the world. The two sound elements interweave in the music to express the mutual integration and non-contradiction of ideals and reality. As the piece approaches its end\, the bells serve as the background\, blending with the rhythmic movement of the realistic clock\, expressing that the chaotic time elements in reality struggle within the atmosphere of the ideal world\, disorder eventually returns to calmness in the temple bells\, highlighting the transformation from “Zen” (spiritual seclusion) to “Hearth” (mundane hustle and bustle) – escape is not the ultimate answer; the reconciliation of ideals and reality is the focus of this auditory narrative. \nAbout the artist\nYu Linke\, born in August 2004\, is currently a third-year undergraduate student majoring in Music Sound Direction in the Composition Department of Wuhan Conservatory of Music. In 2023\, she was admitted to the university with the top score in her major\, focusing on academic practice in composition creation and sound engineering. During her time at school\, her research and practical achievements have covered professional composition competitions and interdisciplinary technology contests. 
She has successively won the school-level composition award\, the first-class scholarship\, and two second prizes in provincial competitions\, demonstrating solid academic accumulation and outstanding innovative practical ability in the intersection of composition art and sound technology. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/listening-room-2-1/
LOCATION:Hamburg University of Technology\, Building A (A 0.14)\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Listening Room,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T110000
DTEND;TZID=Europe/Amsterdam:20260511T173000
DTSTAMP:20260505T121343Z
CREATED:20260421T183941Z
LAST-MODIFIED:20260504T083401Z
UID:10000183-1778497200-1778520600@icmc2026.ligeti-zentrum.de
SUMMARY:Listening Room 1
DESCRIPTION:Fixed Media | Program Overview\nAxis of Frost\nLiuyang Tan \nVeil – Audiovisual performance with real-time motion detection by MediaPipe\nYiting Shao \nEbow Supernova\nCristiano Riccardi \nOkinawa Blue Note\, Recalled\nYerim Han \nQuantum Sphere & Sound Sympathy — Composed for Guzheng and Quantum Computing\nWeijia Yang \nThe Orphic Shimmer onto the 192 Steps\nWanjun Yang \nTranscendence: Performance without Presence\nJinwoong Kim \nTriangulation\nTalia Amar \nWhispers That Are Heard\nJingfan Guo \nLabyrinthe Souriant (Smiling Labyrinth)\nShih-Lin Hung and Ju An Hsieh \nEchoes of the dial\nYunpeng Li \nAbout the pieces & artists\nLiuyang Tan: Axis of Frost\nAxis of Frost is the fourth movement of the electronic music suite Four Seasons Soundscapes. Drawing inspiration from the microscopic dynamics of ice and snow\, the composer employs wind chimes\, gears\, and metallic collisions as primary sound materials. Through the interweaving of pulsating rhythms and howling cadences\, the work evokes a frigid soundscape of crystallizing snowflakes\, swirling ice particles\, and surging glacial undercurrents.\nThis piece is supported by the Music and Digital Intelligence Key Laboratory of Sichuan Province. \nAbout the artist\nLiuyang Tan is a graduate student in the Music Engineering Department of Sichuan Conservatory of Music\, where he studies electronic music composition with Professor Lu Minjie. He will begin his PhD studies at UC Santa Barbara in fall 2026. He is a member of EMAC (Electroacoustic Music Association of China). His research focuses on inter-media composition of electroacoustic music. 
His works have won prizes and been selected for presentation at international musical events\, including MUSICACOUSTICA-HANGZHOU\, ICMC (Ireland\, China\, South Korea\, America\, Germany)\, China Computational Art Conference\, SOMI Electronic Music Marathon\, Earth Day Art Model\, the ArteScienza Festival in Rome\, International Electronic Music Competition (IEMC\, Shanghai)\, SEAMUS\, New York City Electroacoustic Music Festival\, and Macao International Digital Intelligence Music Competition. \n  \nYiting Shao: Veil – Audiovisual performance with real-time motion detection by MediaPipe\nThis work employs real-time motion capture of the dancer to generate audiovisual elements in parallel. It is inspired by The Painted Veil by W. Somerset Maugham.\nI. Time and again\, a veil is woven around oneself\, until the original self is forgotten.\nII. The moment the veil is lifted comes only after a long and painful struggle.\nIII. Through repeated loss and searching\, one is left to wonder—beneath the veil\, is this the true self? \nAbout the artist\nYiTing Shao\, born in Hebei\, China\, in 2000\, received a Bachelor’s degree in Violin Performance from the Communication University of Zhejiang in China and completed a Master’s degree in Composition at Dankook University in Korea. She is currently pursuing a doctorate in Electro-acoustic and Instrumental Composition at Hanyang University. Her work was presented at the 2025 International Computer Music Conference (ICMC) in Boston.\nPerformer: Xinran Xu (Liaoyang\, Liaoning Province\, China). Xinran Xu is a dancer and choreographer trained in both street and contemporary dance. She graduated from Beijing Modern Music Academy and Dankook University. She won 1st Place at Hip Hop International (Beijing Regional) and received the Gold Prize in Contemporary Dance at the 6th C-DAK International Dance Competition (2025). 
She also competed in World of Dance\, Disco Connection\, and Danceholic. She has worked as a choreographer and performer in multiple showcase performances and appeared in the dance program “Ttechum (떼춤)”. Currently\, she is active in Korea as a member of Blue Dance Theater 2\, ISSUE Dance Crew\, and Sparky. Her work focuses on the fusion of street and contemporary dance. \n  \nCristiano Riccardi: Ebow Supernova\nThis audiovisual work proposes a phenomenological investigation of interior space through the sensible representation of a cosmic event: the unfolding of a supernova as both metaphor and device for the alteration of corporeal consciousness. This work proposes an experience of corporeal subtraction\, the progressive dissolution of the body’s boundaries\, the indifferentiation between subject and object. Through sonic rarefaction and luminous beams\, the work induces a meditative state that reconfigures the relationship between spectator and cosmic matter. This is not mere contemplation\, but rather an interpenetration with the intensities that constitute reality itself. The interior journey becomes indistinguishable from the journey through cosmic spaces: both experience the same phenomenon of rarefaction\, illumination\, and the attenuation of boundaries. On a phenomenological plane\, the supernova represents the unveiling of what is hidden—not as a remote event\, but as an intimate revelation of the luminosity that constitutes our own materiality. The listener experiences a form of dilated consciousness\, where the awareness of being part of a force greater than oneself becomes the corporeal experience of one’s own dissolution. The musical and visual rarefaction operates an ascesis from the domain of the speakable and the representable\, leaving pure intensity and openness toward the unsaid—a liminal space where the microcosm of interiority and the macrocosm of stars interpenetrate without boundaries. 
The composition is structured around twelve independent chromatic lines derived exclusively from samples of an ebowed guitar\, mapped into a custom-built synthesizer that preserves the instrument’s characteristic infinite sustain. Organized into four registral groups (three sopranos\, three altos\, three tenors\, three basses)\, the voices operate as parallel streams converging and diverging through close semitonal proximity\, generating dense harmonic clusters. Staggered entrances and overlapping durations create gradual transformations of harmonic density\, privileging timbral evolution over melodic narrative. The visual component translates each musical line into concentric circles responding in real time to amplitude variations\, creating a dynamic field of overlapping geometric forms that reflect sound-wave propagation and harmonic density. By foregrounding chromatic density\, sustained sonority\, and visual abstraction\, Ebow Supernova proposes an immersive experience in which individual elements dissolve into a unified perceptual field—interrogating the contemporary paradigm of corporeality and suggesting that the deepest contact with reality might paradoxically consist in the negation of the biological body: a journey toward the luminosity that traverses and transcends it. \nAbout the artist\nCristiano Riccardi is a multi-instrumentalist and sound designer with over 30 years of experience in live and studio practice. His recent work spans recording Fausto Razzi’s Memoria (2020) and Lontano (2021)\, performing Razzi’s scenic piece Protocolli (2023)\, arranging Stockhausen’s Tierkreis (2025\, awarded for interpretation)\, and contributing to an intermedial reworking of Stravinsky’s L’Histoire du Soldat. He is currently pursuing a Master’s in Electronic Music at the Conservatorio di Santa Cecilia in Rome\, focusing on electroacoustic composition and real-time performance. 
\n  \nYerim Han: Okinawa Blue Note\, Recalled\nThis audiovisual fixed media work is based on recollected memories following a trip to Okinawa and a subsequent viewing of the film Okinawa Blue Note. Using sound materials extracted from travel videos\, the piece explores how memory—already shaped and idealized through recollection—is further manipulated and restructured over time. Conceived as a dive into memory\, the piece uses water as a medium that distorts and contains remembrance\, while layered and transformed sounds construct an emotional landscape of mediated recall. \nAbout the artist\nYerim Han (b. 1997\, South Korea) is a composer currently pursuing a Master’s degree in Composition at Hanyang University. Trained in contemporary acoustic music\, she is also actively engaged in MIDI-based composition\, electronic music\, and commercial music practices. Her work explores diverse musical languages across acoustic and digital media. \n  \nWeijia Yang: Quantum Sphere & Sound Sympathy — Composed for Guzheng and Quantum Computing\nThis work takes classic guzheng music as the creative foundation and relies on an independently developed quantum synthesizer interactive system to construct a cross-temporal dialogue between “classical artistic conception” and “quantum timbre”. The submitted version is an audio-visual hybrid developed on the TouchDesigner visual-effects port\, while the live version can be connected to real-time instrumental performance\, realizing a complete closed-loop performance of “gesture — quantum sound — instrument”. The guzheng melody is processed through quantum-gate algorithms into electronic sounds with the character of quantum superposition states. Meanwhile\, a real-time visualization engine generates dynamic images of quantum Bloch spheres and particle flows\, ultimately constructing an immersive\, integrated audio-visual experience. 
Inspired by High Mountains and Flowing Water of the Shandong Guzheng School\, this work inherits its skeletal structure and core backbone notes\, and innovatively reshapes the musical form through quantum timbre\, presenting a transformation path from traditional art to future media art. \nAbout the artist\nWeijia Yang\, Ph.D.\, is a full-time postdoctoral researcher at the Shanghai Conservatory of Music. He currently holds multiple academic appointments\, including Excellent Innovation and Entrepreneurship Tutor for Shandong Province’s “Internet Plus” Program\, Member of the Institute of Electrical and Electronics Engineers (IEEE)\, Member of the Chinese Association for Artificial Intelligence (CAAI)\, Member of the Electronic Music Society of the Chinese Musicians Association\, and Reviewer for 8 A-class core journals indexed by SCI/SSCI (such as PLOS ONE and Frontiers in Psychology). He has published 8 core papers indexed by SCI\, SSCI\, EI\, Scopus\, and Peking University Core (PKU Core) of China\, as well as numerous non-core journal papers\, obtained 3 Software Copyrights\, and served as Principal Investigator or Key Participant in 12 research projects at national\, provincial\, and municipal levels. He has mentored 6 national and provincial A-class innovation and entrepreneurship projects that received funding and awards. Additionally\, he has composed over 10 representative electronic music works (e.g.\, Nine-Colored Deer)\, which have been released on major music platforms; his works have won multiple awards and been performed in numerous exhibitions at international competitions both domestically and internationally\, such as ICMC (International Computer Music Conference) and WOCMAT (World Conference for Chinese Composers). 
\n  \nWanjun Yang: The Orphic Shimmer onto the 192 Steps\n“The Orphic Shimmer onto the 192 Steps” is an interactive live-coding audio-visual performance that explores the role of art as a “harmonizing force” within the turbulent landscape of contemporary civilization. The work takes its title from the 192 steps of the Odessa Staircase\, abstracting this historically and cinematically significant site into a topological space of tension and dispersion. By invoking the myth of Orpheus – the figure who restored order through music – the piece builds a philosophical bridge between classical humanitarian ideals and modern algorithmic logic. \nTechnical Framework \nThe work is built on a sophisticated integration of live coding\, modular synthesis\, and generative visuals:\n* Audio Synthesis: Primary sound design is executed in VCV Rack\, employing a hybrid of subtractive\, wavetable\, and granular synthesis. A foundational layer of algorithmically generated Shepard Tones creates an auditory illusion of “infinite ascent\,” symbolizing the cyclical pain and progress of history.\n* Live Interaction: Sonic Pi serves as the central engine for real-time algorithmic restructuring. The performer uses MIDI controllers to manipulate the density and spatialization of the sound field\, facilitating a dialogue between rigorous code and human intuition.\n* Visual Generative Design: Developed in Processing\, the visual layer utilizes the OSC protocol for sample-level synchronization. Spectral energy and transient parameters from the audio drive fluid\, geometric “shimmers” that map onto the metaphorical 192 steps. \nAbout the artist\nWanjun YANG is an engineer\, programmer\, sound designer\, researcher\, and electronic musician. He is currently an associate professor in the Music Engineering Department of the Sichuan Conservatory of Music. For the past 26 years he has lived in Chengdu\, Sichuan Province\, in southern China\, and taught at the Sichuan Conservatory of Music. 
His research and creative interests lie in Acoustics and Psychoacoustics\, Sound Design\, Software Development\, New Media Art\, and Multimedia Design. In 2011\, he attended the EMS annual conference in New York\, followed by participation in an electronic music exchange at the University of Oregon in 2012; in 2017\, his work was selected for ICSC 2017 in Nagoya and his paper selected for ICMC 2017 in Shanghai; in 2018\, he served as a concert reviewer for ICMC 2018; in 2019\, his pieces were selected and performed at ICMC 2019 and NYCEMF 2019 in New York\, alongside participation in another electronic music exchange at the University of Oregon and visits to CCRMA at Stanford University and UCLA; in 2020\, his works were selected and performed at the NYCEMF 2020 Virtual Online Festival; from 2021 to 2025\, his compositions were continuously selected and performed at the ICMC\, NYCEMF\, and ICSC international conferences; additionally\, he has been a long-term reviewer for ICMC\, IEMC\, and NCDA. \n  \nJinwoong Kim: Transcendence: Performance without Presence\nTranscendence is an audio-visual performance interface that reimagines the relationship between performer interaction and algorithmic autonomy. The system utilizes a gamified “turret-defense” mechanic as a metaphor for stochastic sound generation. The user places “turrets” on a grid\, which autonomously track and engage moving targets based on proximity algorithms. This interaction serves as a direct translation of spatial logic into sound: distance defines intensity\, angle determines stereo panning\, and target properties dictate pitch and timbre\, creating a real-time sonification of digital conflict. \nA core innovation of Transcendence lies in its distinct “Performance Mode.” In traditional Human-Computer Interaction (HCI) for music\, the mouse cursor serves as a constant visual anchor\, reminding the user of the computer’s presence as a tool. In this work\, the cursor is deliberately rendered invisible during performance. 
While the performer retains control over the grid\, the visual representation of their “hand” is removed. \nThis design choice—“Performance without Presence”—dissolves the barrier between the creator and the creation. It shifts the cognitive load from operating a UI to immersing oneself in the audio-visual feedback loop\, allowing the performer to become a “ghost in the machine.” The result is a self-generating\, yet controllable\, polyphonic soundscape where the interface disappears\, leaving only the pure translation of logic into art. \nAbout the artist\nJinwoong Kim is a South Korean composer\, musician\, and media artist. He received his Ph.D. in Intermedia Arts from Tokyo University of the Arts\, where he studied under Professor Kiyoshi Furukawa. His creative practice spans a wide range of fields\, from contemporary computer music to interactive media installations\, with a focus on integrating compositional methodologies with emerging technologies and cross-disciplinary thought. Drawing upon a diverse background in music\, visual art\, engineering\, and the natural sciences\, he has developed custom software systems—including BODIC and KCAC—to explore new forms of audiovisual expression.\nHe is currently a full-time faculty member in the Digital Media Design major within the Global Elite Division at Yonsei University\, where he teaches courses on creative coding\, computational design\, and media-based artistic practices. \n  \nTalia Amar: Triangulation\n“Triangulation” uses three different electronic music techniques that serve the same goal: to expand the possibilities of the acoustic piano. Each of these three techniques explores a different aspect of human-computer interaction. The pianist controls the electronics from an iPad\, choosing when to switch between the three patches\, and the pianist’s relationship with the computer changes in each patch. 
In the first patch the computer “listens” to the piano and reacts to it by performing the same notes with modifications such as quarter tone modulations\, reversing\, and stretching. The electronics in the second patch is pre-recorded and multiplies the piano\, with the effect that it sounds as if there were many pianos performing at the same time. In the third patch the electronics records the piano performance and plays it back with different effects\, building up an aleatoric wall of pianos that is not possible to perform acoustically. \nAbout the artist\nDr. Talia Amar is the recipient of many international awards\, including the prestigious Prime Minister’s Award (2018)\, the Acum prize for “best piece of the year” (2022)\, the Acum award (2019)\, the Rosenblum Prize for Promising Young Artist (2016) from the Tel Aviv Municipality\, and the Klon Award for young composers granted by the Israeli Composers League. Recently she was the winner of The Next Voice – a call for scores from Israeli composers. Her piece For Orchestra I was unanimously selected from an incredible 152 submissions and will be performed by the Israel Philharmonic under the baton of Lahav Shani in March 2026 in Tel Aviv\, Haifa\, and Jerusalem. She was selected by the famous violinist Renaud Capuçon to participate in the Festival New Horizons d’Aix en Provence 2022\, where her piece\, commissioned especially for the festival\, was performed. In 2022 her piece “Labyrinth” was commissioned and performed at Festival Présences by Radio France in Paris. She was selected to represent Israel in different festivals such as ISCM World New Music in Vancouver\, ECCO Festival in Brussels\, Asian Composers League Festival in Taiwan\, ICMC in Seoul\, and SMC in Austria.\nIn 2017\, Talia joined the composition faculty at the Jerusalem Academy of Music and Dance in Israel\, where she is also the Head of Technology and Innovation. 
She is also a council member of the Israeli Composers League and the electronics performer of the Meitar Ensemble. \n  \nJingfan Guo: Whispers That Are Heard\nComposed for Arduino and Max/MSP\, this work employs a multi-sensor interface as its primary vehicle. It centers on two core sonic materials: whispering voices and African percussion. The former signifies the individual and the secret\, while the latter points to the collective and driving force. The work aims to superimpose these elements within a single sound field\, erasing the boundary between the individual and the collective. Amidst sonic entanglement and compression\, intimate whispers are deprived of their original space of existence\, alienated into mere components of the rhythm. This is\, at once\, an act of listening to secrets and a scrutiny of the clamor. \nAbout the artist\nJingfan Guo\, a native of Tai’an\, Shandong Province\, China\, is a member of the Electronic Music Society of the Chinese Musicians Association (EMAC) and a postgraduate student in Computer Composition at Wuhan Conservatory of Music\, class of 2024\, under the guidance of Professor Li Pengyun. His main research interests include electroacoustic music\, sensor interaction\, and Kyma sound design. His major works include “Mute Water” (electroacoustic music)\, “Liminal Space” (mixed music)\, “Dissolving Voice” (for Kyma and computer)\, and “Whispers That Are Heard” (for Arduino and sensors). \n  \nShih-Lin Hung and Ju-An Hsieh: Labyrinthe Souriant (Smiling Labyrinth)\n“Labyrinthe Souriant” (Smiling Labyrinth) is an interdisciplinary electroacoustic work exploring the fluid boundary between visual art and sonic translation. The piece is based on a hand-drawn graphic score created by a visual artist\, who utilizes traditional staff paper as a canvas for organic\, labyrinthine line-work and anthropomorphic silhouettes. The composition takes a performance-led approach to sound design. 
Using the graphic score as a primary visual stimulus\, the composer engaged in a one-take improvisation session via MIDI controllers mapped to a customized Ableton Live environment. This method ensures that the temporal flow of the music maintains a direct\, visceral connection to the visual trajectories of the score. The vocal samples were processed through real-time DSP chains\, where the nuances of the performance (velocity\, pressure\, and timing) were translated into dynamic spectral shifts and spatial movement\, reflecting the ‘Smiling Labyrinth’s’ intricate and unpredictable nature. \nAbout the artists\nShih-Lin Hung holds a B.A. from the National University of Tainan and an M.A. from National Yang Ming Chiao Tung University. Initially trained in Western classical composition\, his recent work explores electroacoustic aesthetics within the lineage of French musique concrète. His creative practice focuses on uncovering alternative sonic possibilities in daily sounds that are often ignored or taken for granted. \nJu-An Hsieh graduated from the Gerrit Rietveld Academie in Amsterdam\, the Netherlands\, and works primarily with images. In 2024\, their exhibition The Theatre explored the impact of colonial regimes on Taiwan’s ecology and the power relations between humans and nature. As their practice evolves\, embodied memories\, sensory experiences\, and dreams connected to nature have gradually become central themes in their work. \n  \nYunpeng Li: Echoes of the dial\nThis work uses the “outdated” communication technology signal—the telephone dial tone—as its core material. Through sampling and sound processing of the DTMF tones produced during telephone dialing\, it explores the dialectical relationship between auditory memory and the disappearance of matter within the context of technological accelerationism. 
In today’s world where information transmission approaches zero latency\, how can those echoes that once carried the desire for communication construct a new aesthetic dimension amidst the abandoned ruins? \nAbout the artist\nYunpeng Li\, Ph.D.\, is an Associate Professor\, Master’s Supervisor\, and Director of the Art & Science Teaching and Research Section at the Wuhan Conservatory of Music. His main research and teaching focus is electronic music composition. His works have been selected for the International Computer Music Conference (ICMC) multiple times. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/listening-room-1-1/
LOCATION:Hamburg University of Technology\, Building A (A 0.18)\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Listening Room,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T133000
DTEND;TZID=Europe/Amsterdam:20260511T150000
DTSTAMP:20260505T121343
CREATED:20260421T084731Z
LAST-MODIFIED:20260504T082745Z
UID:10000077-1778506200-1778511600@icmc2026.ligeti-zentrum.de
SUMMARY:Lunch Concert 1A
DESCRIPTION:After the Opening Concert of ICMC HAMBURG 2026\, the regular music program begins today. This first Lunch Concert offers an insight into the current international computer music scene. What makes this event special is the personal presence of the artists: the composers are either on stage themselves or have brought the musicians they wrote for with them to Hamburg.\nIt is a program of short distances between idea and sound. The works demonstrate how diverse collaboration between humans and technology can be today—from the classical solo clarinet to interactive formats. \nThis Lunch Concert is open to the public. Those without a conference pass can purchase a ticket here. \n  \nProgram Overview\nTyche\nSever Tipei \nHOTPO\nMichael Edwards \nTessellae\nRodrigo Cadiz and Thierry Miroglio \nThe Center of the Universe\nSunhuimei Xia \nDream Voyager: A Pilgrim of the Infinite\nZoe Yi-Cheng Lin \nInterwoven Realms: The Threefold Domain of Consciousness\nQing Ye and Yuxue Zhou \n  \nAbout the pieces & artists\nSever Tipei: Tyche \nTyche for Bb clarinet and fixed media is a composition generated with original software for Computer-assisted (algorithmic) Composition and sound design developed by the composer and his collaborators.\nDivided into four main sections of 2-3-1-2 minutes\, the work utilizes stochastic distributions\, Markov chains\, sieves and Just Intonation as well as detailed control of spectra\, FM transients\, spatialization and reverberation. A basic framework of precise proportions and deterministic procedures is complemented by random details governed by Tyche\, the goddess of fortune\, chance\, providence and fate. \nAbout the artist\nA composer and a pianist\, Sever Tipei was born in Bucharest\, Romania\, and immigrated to the United States in 1972. He holds degrees in composition from the University of Michigan (DMA) and piano performance from Bucharest Conservatory (Diploma). 
Tipei taught at Chicago Musical College of Roosevelt University and\, between 1978 and 2021\, at the University of Illinois at Urbana-Champaign School of Music. After retirement Tipei continues to teach in the School of Information Sciences where he also directs the “James W. Beauchamp Computer Music Project”. He is also a National Center for Supercomputing Applications Faculty Affiliate. Between 1993 and 2003 Tipei was a Visiting Scientist at Argonne National Laboratory where he worked on the sonification of complex scientific data.\nMost of his compositions were produced with software he designed: MP1 – a computer-assisted composition program first used in 1973\, DIASS – for sound synthesis and M4CAVE – software for the visualization of music in an immersive virtual environment. More recently\, Tipei and his collaborators have developed DISSCO\, software that unifies computer-assisted (algorithmic) composition and (additive) sound synthesis into a seamless process. His compositions have been performed in the US\, Australia\, Brazil\, France\, Germany\, Italy\, Portugal\, Romania\, Spain\, United Kingdom and Taiwan. \n  \nMichael Edwards: HOTPO \nHinting at something a little more coarse\, the title HOTPO is in fact a completely innocent reference to the Collatz Conjecture. 
This mathematical proposition\, also known by other names\, refers to a succession of numbers called the hailstone sequence (or wondrous numbers)\, because their values usually ascend and descend like hailstones in a cloud.\nThough the conjecture has so far resisted mathematical proof\, the proposition is very simple: Take any positive whole number; if it is even\, divide it by two; if it is odd\, multiply it by three and add one (hence the acronym Half Or Three Plus One: HOTPO); repeat the process with the result and you will find that no matter which number begins the process\, you will always\, given enough iterations\, reach one.\nThe algorithm is easy to programme and experiment with\, and it produces rather nice images when given different starting numbers and plotted over various iterations. I used the algorithm in this piece to generate section lengths and repeated structures from nine basic rhythm sequences\, hence my sequence was 9 28 14 7 22 11 34 17 52 26 13 40 20 10 5 16 8 4 2 1. The piece alternates sections opposing mixed materials (odd section numbers) with obsessively repeated material (even). The numbers are also used for the generation of the sound files triggered during the performance. Despite the rather abstract nature of the generative procedure\, the results of the algorithms were developed intuitively and the piece as a whole arises out of and proceeds through a maelstrom of events fitting to the imagery of a hailstorm.\nHOTPO was commissioned by Henrique Portovedo for the World Saxophone Congress 2018 in Zagreb. That version included an ensemble. In 2020 I reworked the sound files to include MIDI data from the ensemble and made a solo + computer version. This was revised in 2024. 
\nAbout the artist\nMichael Edwards is a composer\, improvisor\, software developer\, and since 2017 Professor of Electronic Composition at ICEM\, Folkwang University of the Arts\, Essen\, Germany.\nHe is the programmer of the slippery chicken algorithmic composition package. His compositional interests lie mainly in the development of structures for hybrid electro-instrumental pieces through the integration of algorithmically produced scored materials with similarly generated computer-processed sound. He also improvises on laptop\, saxophones\, and MIDI wind controller\, performing for instance at the 2008 Montreux Jazz Festival.\nMichael Edwards studied composition at Bristol University with Adrian Beaumont (BA\, MMus) and privately with Gwyn Pritchard. In 1991 he moved to the US for further studies in computer music with John Chowning at CCRMA\, Stanford University (MA\, Doctor of Musical Arts). Whilst studying there he also worked at IRCAM\, Paris\, with a residence grant at Cité des Arts.\nDuring 1996-7 he was a consultant software engineer in Silicon Valley. He developed a Document Recognition System used in several US hospitals. In 1997 he was appointed Lecturer in Music Theory at Stanford but later that year moved to Salzburg\, Austria. He was Guest Professor at the Universität Mozarteum until he left to teach at the University of Edinburgh in 2002. \n  \nRodrigo Cadiz and Thierry Miroglio: Tessellae \nTessellae for percussion and live electronics unfolds as a mosaic of small rhythmic tiles laid in time by a single performer. The percussion writing is built on Euclidean rhythmic principles\, patterns that distribute events as evenly as possible\, expanded through asymmetric tuplets (notably groups of three and five)\, repetitions\, and carefully placed silences that create a strong sense of anticipation from phrase to phrase. 
Only one or two instrumental lines sound at a time\, allowing the listener to perceive each gesture as a discrete tessera within a larger rhythmic surface. The live electronics\, built on RAVE\, a real-time variational autoencoder developed at IRCAM and trained on a corpus of percussion sounds\, listen to the performer and respond by reshaping timbre and resonance in the moment\, extending and refracting the acoustic material without fixing it in advance. The result is a dialogue between strict rhythmic architecture and fluid sonic transformation\, where expectation\, delay\, and renewal are central expressive forces. Tessellae was composed for Thierry Miroglio. \nAbout the artists\nRodrigo F. Cádiz is a composer\, researcher and engineer. He studied composition and electrical engineering at the Pontificia Universidad Católica de Chile (UC) in Santiago and he obtained his Ph.D. in Music Technology from Northwestern University. His compositions\, consisting of approximately 70 works\, have been presented at several venues and festivals around the world. His catalogue considers works for solo instruments\, chamber music\, symphonic and robot orchestras\, visual music\, computers\, and new interfaces for musical expression. He has received several composition prizes and artistic grants both in Chile and the US. He has authored around 70 scientific publications in peer reviewed journals and international conferences. His areas of expertise include sonification\, sound synthesis\, audio digital processing\, computer music\, composition\, new interfaces for musical expression and the musical applications of complex systems. In 2018\, Rodrigo was a composer in residence with the Stanford Laptop orchestra (SLOrk) at the Center for Computer-based Research in Music and Acoustics (CCRMA)\, and a Tinker Visiting Professor at Stanford University. In 2019\, he received the prize of Excellence in Artistic Creation from UC\, given for outstanding achievements in the arts. 
In 2024\, he was a visiting researcher at the Orpheus Instituut in Belgium. He is currently a full professor at the Music Institute and the Electrical Engineering Department of UC. \nFor many years\, Thierry Miroglio has pursued a brilliant solo career\, invited to give recitals and solo concerts in more than forty countries\, in numerous venues and prestigious festivals such as Salzburg\, Philharmonie Berlin\, New York\, Wien Konzerthaus\, Boston\, Besançon\, San Francisco\, Munich\, Schleswig-Holstein\, Madrid\, Rome\, Tokyo\, Milan\, Zagreb\, Nice\, Cologne\, Paris\, Hamburg\, Athens\, Sao Paulo\, Lisbon\, Monte Carlo Printemps des Arts\, Hong Kong\, Buenos Aires Colón Theater\, Geneva\, Brugge Concertgebouw\, Bucharest Athenaeum\, Beijing\, Amsterdam\, Linz Brucknerhaus\, Rio\, Darmstadt\, Helsinki\, Johannesburg\, Mexico\, Seoul\, Shanghai\, Moscow\, the Venice Biennale … \n  \nSunhuimei Xia: The Center of the Universe\nThe Center of the Universe\, an algorithmic music work integrated with interactive technology\, draws inspiration from the artist’s immersive impressions of New York City gleaned through multiple on-site visits. Standing atop the Empire State Building\, the artist perceived the metropolis as a dynamic global nexus where people of diverse cultural and ethnic backgrounds converge\, weaving a vibrant\, multifaceted urban tapestry that resonates with the energy of an interconnected world. Taking the phrase “The Center of the Universe” as its foundational sonic material\, the work delivers innovation through experimental multilingual vocal manipulation—deploying the core line in English\, Spanish\, French\, German\, Italian\, Russian\, Chinese\, Japanese\, Korean\, and Thai—with all vocal textures sourced from sampled macOS AI voices\, blending computational sound synthesis with linguistic diversity to push the conventional boundaries of vocal-based algorithmic composition. 
It achieves nuanced translation by converting the artist’s subjective perceptual experience of the city into an audible\, interactive sonic landscape\, while translating the abstract idea of cross-cultural convergence into tangible musical logic via the layered interplay of multilingual vocal samples. Further embodying participation\, the piece adopts wireless Nintendo Wiimote Controllers as its interactive performance interface\, enabling the performer to stand at the “center” of the stage and manipulate the musical structure in real time; this design redefines the dynamic between creator\, performer\, and audience\, turning the performance into a collaborative process where physical movements directly shape sonic evolution. \nAbout the artist\nSunhuimei Xia is Associate Professor of Art and Technology in the Wuhan Conservatory of Music’s Composition Department. Dr. Xia holds a Master’s from Johns Hopkins University and a Doctorate from the University of Oregon (U.S.)\, and was mentored by renowned composers Jian Feng\, Jian Liu\, Geoffrey Wright\, and Jeffrey Stolet.\nThe first DMA from central and western China in data-driven musical instrument composition and performance\, she focuses on computer music creation and music-technology integration\, with core interests in interactive data-driven instruments\, algorithmic composition\, and data sonification.\nHonored as a Music Entrepreneurship and Innovation Talent by the Ministry of Culture and an Outstanding Young and Middle-Aged Literary and Art Talent by the Hubei Federation of Literary and Art Circles\, she won the Hubei Golden Bianzhong Music Award\, with over 10 pieces showcased at top global events including ICMC\, ISMIR\, NIME\, SMC\, SEAMUS\, NYCEMF\, EMM\, IRCAM\, WOCMAT and Musicacoustica-Beijing.\nShe released China’s first DVD album of data-driven instrument works\, published by Shanghai Music Publishing House and Shanghai Literature & Art Audio-Video Electronic Publishing House. 
She has guided students to secure more than 20 domestic and international awards\, leads provincial projects\, and participates in the Ministry of Education’s Humanities and Social Sciences Youth Fund Project\, driving music-technology innovation. \n  \nZoe Yi-Cheng Lin: Dream Voyager: A Pilgrim of the Infinite\nDream Voyager: A Pilgrim of the Infinite is an immersive musical work accompanied by a visual component that serves as a poetic guide rather than a narrative driver. The performance begins with a flutist and an actor on stage\, situated in waking reality. As the music unfolds\, the visual imagery gradually transitions into the realm of dreams\, leading the audience into an inner journey of consciousness. The work portrays a journey of the soul through a lucid dream—a state in which consciousness remains fully awake within the dream\, perceiving reality with radiant clarity\, even to the point of leaving the body. The music begins at the threshold of sleep\, gradually descending into deeper layers of awareness beneath a starlit sky. The soul then rises swiftly beyond the firmament\, gazing down upon the dreamlike Earth\, awed by its vivid presence and driven by a longing to understand its essence. A distant bell resounds\, symbolizing ancient wisdom dwelling within the heart and calling the voyager toward cosmic truth. Guided by textures of light and ice\, the pilgrim descends to touch the mud and stone of forgotten lands\, entering memories of an ancient civilization—serene yet mysterious. It soon reveals itself beneath the ocean’s depths\, magnificent but ephemeral\, its rise and fall exposed as a dream of the cosmic mind. As time and space dissolve\, the pilgrim senses the universal breath—the cosmic inhale and exhale uniting all beings in a single living rhythm. When the celestial bell sounds again\, layers of golden light\, like lotus petals\, guide the soul back to waking reality. 
What returns is not merely memory\, but awakened insight—an expanded vision that perceives the world through a cosmic lens. As the dream dissolves\, the figures on stage awaken and take their final bow. The work thus gestures toward a “dream within a dream\,” resonating with Buddhist perspectives in which the boundaries between reality and illusion are ultimately indistinguishable. In the context of contemporary technological society\, this question becomes ever more urgent: what is real\, and what is virtual or dreamlike? The distinction grows increasingly ambiguous. Employing Ambisonic spatial techniques\, the electronics articulate vertical and immersive motion: sound ascends\, drifts\, expands\, and finally resurfaces\, mirroring the soul’s movement through space and awareness. While the work may evoke a cinematic sense of narrative\, it is entirely independent of visual imagery. All spatial perception\, emotional meaning\, and narrative continuity arise solely through sound\, demanding a high degree of sonic precision and expressive depth\, allowing the music itself to become a complete sensory and contemplative journey. Furthermore\, the dancer and background imagery in the dream sequences are driven by Music Information Retrieval (MIR) features extracted from the music in real time. Implemented in TouchDesigner\, the visual system functions as a responsive virtual stage that is generated through the act of listening. Special thanks to the Taiwanese flutist\, Cheng-Yu Wu\, for the flute recording. \nAbout the artist\nZoe (Yi-Cheng) Lin is a composer and software engineer specializing in digital music. Her electronic music has been exhibited in Europe\, Asia\, North and South America\, and Australia\, across 21 countries and 50 major international festivals. She holds a doctorate in composition from the University of Wisconsin-Madison and was the Chief Music Officer at an AI music company\, leading AI music generation R&D. 
Currently\, she is a full-time composer and adjunct assistant professor at NTNU. Zoe specializes in synesthetic and 3D immersive electronic music. Her work has been/will be showcased worldwide at NYCEMF 2026\, ICMC 2025\, NYCEMF 2025\, JINLAC2025\, SEAMUS 2025\, REF 2024\, Ars Electronica 2024\, IRCAM Forum 2024\, NYCEMF 2024\, ICMC 2024\, and more. Her music is featured on albums from EMPIRICA RECORD\, SiMN 2023\, and MUSLAB 2023. She was selected for the Anthropocene Project 2024 and EMPIRICA RECORD 2024\, championing experimental and electronic music. \n  \n\nQing Ye and Yuxue Zhou: Interwoven Realms: The Threefold Domain of Consciousness\n“Overlap: The Three Realms of Consciousness” is a multimedia musical work that explores the deep structures of the human psyche. The sonic dimension includes ASMR trigger sounds—such as wood\, metal\, and human oral noises—woven into an arch-shaped structure (ABCB’A’) that connects Freud’s three dimensions of the preconscious\, the unconscious\, and consciousness. Through TouchDesigner\, sound and visuals jointly construct a psychological landscape\, revealing the interlacing and transformation of multidimensional consciousness within dreams. The audience is drawn into a psychological space that transcends reality\, experiencing the flow and reflection of consciousness through the fusion of sound and form. \nAbout the artists\nQing Ye is a composer and doctoral student in Music Technology at Nanjing University of the Arts\, supervised by Professor Xuan Wang. She is a member of the Electronic Music Society of the Chinese Musicians’ Association and holds a Level-3 composer certification. Her works have been presented at international composition competitions including the Hangzhou International Electronic Music Festival and the Sibelius and Vivaldi International Music Competitions. Her practice focuses on computer-assisted composition and audiovisual creation. \nYuxue Zhou is a Ph.D. 
candidate in Musicology at the Communication University of China under the supervision of Professor Xuan Wang. Her creative work focuses on electronic and multimedia music. She has received awards at major composition competitions including MUSICACOUSTICA-BEIJING\, the Hangzhou International Electronic Music Festival\, and the Vivaldi International Composition Competition. Her works have been presented in national arts projects and international multimedia music events. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/concert-1a/
LOCATION:Hamburg University of Technology\, Building I\, Audimax 2\, Denickestraße 22\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Concert,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T180000
DTEND;TZID=Europe/Amsterdam:20260511T190000
DTSTAMP:20260505T121343
CREATED:20260415T101813Z
LAST-MODIFIED:20260417T114349Z
UID:10000114-1778522400-1778526000@icmc2026.ligeti-zentrum.de
SUMMARY:[Off-ICMC] Concert: Nenad Nikolić – Accordeon meets Techno
DESCRIPTION:Photo: Boris Las Opolski\n  \nNenad Nikolić was born in Serbia and has always been fascinated by his father and grandfather’s accordion playing. But mechanical sounds are from the past. Nenad plays without backing tracks\, performing every single tone live—from “tango to techno.” Don’t miss this chance to see him push the boundaries of his instrument with his electronic accordion.  \nNo registration required  \n  \nThe Off-ICMC\nMusic is what brings us together\, even when everything else pulls us apart.\nMusic everywhere—it is part of our everyday lives. And yet\, we’re hearing it performed live on analog instruments less and less. Instead\, it often reaches us through speakers or headphones\, as files\, from the cloud. What does music mean to you? What does it sound like today? Where does it begin—and where does it end?\nThe ligeti center invites you to listen more closely and discover new sounds—to explore\, experiment\, and play. This year\, ICMC HAMBURG 2026 revives an old tradition: the Off-ICMC\, a free and accompanying festival curated for the general public and anyone curious about computer music. \nAll Off-ICMC events are free of charge.  \n\n  \n \n 
URL:http://icmc2026.ligeti-zentrum.de/event/off-icmc-concert-nenad-nikolic-accordeon-meets-techno/
LOCATION:Harburg Info\, Hölertwiete 6\, Hamburg\, 21073\, Germany
CATEGORIES:11-05,Music,Off-ICMC
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T190000
DTEND;TZID=Europe/Amsterdam:20260511T210000
DTSTAMP:20260505T121343
CREATED:20260421T085527Z
LAST-MODIFIED:20260504T083043Z
UID:10000079-1778526000-1778533200@icmc2026.ligeti-zentrum.de
SUMMARY:Evening Concert 1B
DESCRIPTION:This evening concert marks a special collaboration between the international ICMC community and Hamburg’s music scene. At its center is Ensemble 404 from the Hamburg University of Music and Drama (HfMT). For this occasion\, a video wall will be specially installed in the Friedrich-Ebert-Halle to highlight the synergy between sound and image.\nThe program ranges from intimate solo pieces with computer support to complex ensemble compositions and large-scale video works. \nThis Evening Concert is open to the public. Those without a conference pass can purchase a ticket here. \n  \nProgram Overview\nFantasy for Viola and Computer\nRichard Dudas \nNeuro Translation Engine\nVincenzo Russo \nClimate II for piano and computer \nRikako Kabashima \nWind Blown Rain\nMara Helmuth\, Esther Lamneck and Alfonso Belfiore \nDelicate Anticipation\nKotoka Suzuki and Michael Murphy \nAir-Carving Bamboo\nYu Chung Tseng \nscanning\nKeisuke Yagisawa \n  \nAbout the pieces & artists\nRichard Dudas: Fantasy for Viola and Computer\nThis work for solo viola and real-time audio processing in Max is a composed extension of some prior improvisational works using Max. It was written in part as an exploration of Bohlen-Pierce tuning (in the electronics)\, which divides the perfect twelfth into thirteen unequal justly-tuned steps. The viola part is pitted against this\, performing in standard twelve-equal-steps-to-the-octave tuning\, juxtaposing and combining several different musical fragments\, each with its own character and mood. All sounds in the electronics are live: they are derived from the sounds of the on-stage violist. Max audio processing includes formant filtering to provide a vocal quality to the transposed and resonated viola sounds. \nAbout the artist\nRichard Dudas holds degrees in Music Composition from The Peabody Conservatory of Music of the Johns Hopkins University\, and from The University of California\, Berkeley. 
He additionally studied at the Franz Liszt Academy of Music in Budapest\, Hungary and the National Regional Conservatory of Nice\, France. In addition to composing music for acoustic instruments\, he has been actively involved with music technology since the late 1980s. As a computer musician\, he has taught courses at IRCAM\, and developed musical tools for Cycling ’74. Since 2007 he has been teaching music composition and computer music at Hanyang University in Seoul\, Korea. \n  \nVincenzo Russo: Neuro Translation Engine\nIn the future\, global societies remain marked by a multitude of languages\, dialects\, idiolects\, and diverse phonetic and cultural systems. Despite advances in AI-driven translation\, fundamental limits persist in the loss of emotional nuance\, imprecise interpretations\, and gaps between what is said and what is perceived. A team of computational linguists and neuroscientists develops an advanced artificial entity: the Neuro Translation Engine (NTE)\, capable of surpassing traditional textual or acoustic translation. The NTE does not translate words\, but the neural intentions behind language. It stimulates a specific area of the human brain\, the resonance cortex\, designed to receive universal neurosensory patterns. The result is a world where everyone can speak their native language while perfectly understanding others. Linguistic diversity is not diminished but enriched through mutual comprehension. The composition for ensemble and electronics illustrates how the NTE processes\, transforms\, and reconstructs communicative material. Through sound transformation techniques\, the acoustic material is dematerialized\, representing the machine’s “internal work”: the conversion of complex signals into a unified code. The final sound is entirely electronic\, devoid of recognizable references to the original ensemble. It forms a new language\, perceived as a pattern directly interpreted by the brain. 
\nAbout the artist\nVincenzo Russo (1995) holds a bachelor’s degree in Business Administration from the University of Naples “Parthenope.” He began his musical studies in Composition for Visual Media at the San Pietro a Majella Conservatory in Naples under the guidance of the late Maestro Lucio Lo Gatto. In July 2025\, he completed the second-level degree (Master’s degree) in Composition. Alongside his academic work\, he is active as a composer\, arranger\, and music producer\, working from his own recording studio. \n  \nRikako Kabashima: Climate II for piano and computer \nThis work was composed based on a variety of ideas inspired by climate change. In recent years\, translating insights from the natural world into my own compositions has become an important experiment in my creative practice.\nIn particular\, this piece draws inspiration from the rapid climate fluctuations caused by global warming\, a pressing issue worldwide. Each measure in the work is specified in seconds rather than traditional beats\, and there is no fixed meter. Within each measure\, rhythms are performed improvisationally according to the given duration.\nThis approach allows for different rhythms and nuances to emerge in every performance\, reflecting the ever-changing nature of the climate itself. \nAbout the artist\nRikako Kabashima was born in Kagoshima\, Japan\, in 1996. She began studying piano at the age of three and later pursued composition at Senzoku Gakuen College of Music in Tokyo. After completing her undergraduate studies in 2021\, she entered the master’s program in composition at Toho College of Music\, where she studied with Kazuro Mise and Hitomi Kaneko\, and explored computer music under the guidance of Takayuki Rai. 
She earned her master’s degree in March 2025.\nHer works have been selected at international festivals including the New York City Electroacoustic Music Festival (NYCEMF) in 2023\, the International Computer Music Conference (ICMC) in 2023\, 2024\, and 2025. \n  \nMara Helmuth\, Esther Lamneck and Alfonso Belfiore: Wind Blown Rain\nWind Blown Rain was inspired by natural processes and forces involving water. Water metamorphoses between many opposing states: from a gentle drizzle to a stormy downpour\, from a tiny droplet to a crashing ocean. Life on earth is dependent on water\, and also at its mercy. This piece focuses mainly on the transformed sounds of rain\, and its reflections in the tárogató sound. Samples were recorded in Venice and Ascea\, Italy. The music was composed in Italy in the summer of 2025 at Wassard Elea Artist’s residency in Ascea by a computer music composer and a performer/real time composer. While most of our previous collaborations have relied solely on the sound of the performer’s instrument for the computer part\, in this piece the instrumentalist interacts primarily with music created from natural recordings and their processed transformations. A third artist created the video part in response to the music from his own water-related video recordings. The video component of Wind Blown Rain is a visual meditation on the natural landscape\, filtered through the inner rhythm of rainfall. Created with images generated and modified using artificial intelligence\, the editing alternates slow-motion sequences\, crossfades\, and subtle variations to evoke a dilated sense of time. The environment\, immersed in rain\, transforms gradually\, suggesting a fragile balance between presence and dissolution. The visual work accompanies the music as a mental landscape—fluid and contemplative. \nAbout the artists\nMara Helmuth (b. 1957)\, internationally known computer music composer/researcher\, received a Guggenheim Fellowship in 2025. 
Her research explores sonification\, granular synthesis\, wireless sensor networks\, Internet2\, and RTcmix. She is Professor at the College-Conservatory of Music\, University of Cincinnati\, where she received the George Rieveschl Award for Scholarly / Creative Works in 2023. She served on the International Computer Music Association board of directors and as President. She holds a D.M.A. from Columbia University and earlier degrees from the University of Illinois Urbana-Champaign. \nEsther Lamneck\, Clarinet and Tarogato\nThe New York Times calls Esther Lamneck “an astonishing virtuoso.” She has appeared as a soloist with major orchestras\, with renowned chamber music artists\, and with an international roster of musicians from the new music improvisation scene. http://www.estherlamneck.com/ \nAlfonso Belfiore is a composer and visual artist whose work explores the relationships between sound\, image\, movement\, and perception. Former professor of electronic music at the Conservatories of Florence and Padua\, he has collaborated with international institutions\, creating performances\, sound installations\, and multidisciplinary projects that merge musical innovation with digital art. His recent work investigates memory\, dreamlike space\, and the fragile line between reality and imagination. \n  \nKotoka Suzuki and Michael Murphy: Delicate Anticipation\nThis work is written as part of the series “In Praise of Shadows\,” inspired by Junichiro Tanizaki’s essay of the same title\, written at the birth of the modern era in imperial Japan. The essay describes how shadows and negative space are integral to traditional Japanese aesthetics in music\, architecture\, and food\, extending even to the design of everyday objects. 
As Tanizaki explains\, “We find beauty not in the thing itself but in the patterns of shadows\, the light and the darkness\, that one thing against another creates… Were it not for shadows\, there would be no beauty.” \nThe first work in the sequence\, “In Praise of Shadows” for three paper players and electronics\, focuses on the collective loss of the tangible in our modern life\, analogous to how the excessive illumination of Edison’s modern light affected Japanese aesthetics and culture. Following this work\, “Orison” is composed for three music box players and electronics. The work is further inspired by the voices of children of war\, both from past and present\, speaking and singing about hope and peace as well as sorrows arising from their personal experiences. These melodies\, presented as empty spaces on the music score\, are revealed as they are fed through the music boxes. \nIn the third part of the sequence\, “Delicate Anticipation\,” written for a solo percussionist\, electronics\, and lights\, shadow is the central focus\, honouring the “patterns of shadows\, the light and the darkness\, that one thing against another creates”. Positioned behind the scrim\, the percussionist is only visible as a shadow while performing with lights and instruments primarily of metal and skin\, manipulating patterns of carefully choreographed shadows. The title derives from the English translation of the essay\, which describes the sensation of gazing at the silent liquid in the dark depths of a Japanese lacquerware bowl. As Tanizaki writes\, “What lies within the darkness one cannot distinguish… the fragrance carried upon the vapor brings a delicate anticipation.” \nAbout the artists\nKotoka Suzuki’s work engages deeply with the visual\, conceiving of sound as a physical form to be manipulated through the sculptural practice of composition. 
Artists such as the Arditti Quartet\, Eighth Blackbird\, Nouvel Ensemble Moderne\, and Mendelssohn Chamber Orchestra (Leipzig) have featured her work internationally through numerous venues and broadcasts\, including BBC Radio 3\, Schweizer Radio\, Lucerne Festival\, Heroines of Sound Festival\, Ultraschall\, and ZKM Media Museum. Suzuki is currently an Associate Professor at the University of Toronto. \nMichael Murphy is a Chinese-Canadian percussionist praised by The New York Times\, Opera Canada\, and The Herald. He has toured across North America\, Europe\, Scandinavia\, and Asia\, performing with ensembles including the Toronto Symphony Orchestra\, the National Ballet of Canada Orchestra\, and Philharmonisches Orchester Freiburg. A leading advocate for new music\, he has premiered concertos by Alice Ping Yee Ho\, Liam Ritz\, and Bob Becker and champions contemporary repertoire internationally. \n  \nYu Chung Tseng: Air-Carving Bamboo \n“Air-Carving Bamboo Music” premiered at the 2025 C-LAB Sound Arts Festival_DIVERSONICS. This is an acousmatic / electroacoustic work. The material comes from the composer’s field recordings of bamboo colliding on the shores of Emei Lake in his hometown of Hsinchu County in Taiwan. 
Through editing and transformation using DAW software\, and incorporating feedback material from the AI Somax 2 on some of the bamboo collision rhythms\, the work was finally organized into an electroacoustic music piece.\nIn terms of performance style\, the composer wanted to differentiate this work from traditional\, purely playback-based electroacoustic music\, creating a synesthetic aesthetic experience for both the ears and the eyes and making electroacoustic music visible.\nThe composer invited percussionist Hsieh Yi-chieh to wave glow sticks in the dark\, as if drawing out or sculpting the electroacoustic music in the air\, a technique akin to “grabbing music from a distance.” This presentation method\, besides giving electroacoustic music a performative quality\, greatly enhances the visual appeal\, auditory appeal\, and sonic dramatic tension of the performance. Postscript: Having composed electroacoustic music for more than two decades\, the composer occasionally wants to dabble in this area\, slightly transcending the aesthetic/philosophical view of “sound-only/purely auditory” listening in acousmatic / electroacoustic music. \nAbout the artist\nYu-Chung Tseng\, who received his DMA from the University of North Texas (UNT)\, is a professor of electronic music composition and director of the Multi-channel Sound Lab at the Institute of Music\, National Yang Ming Chiao Tung University (NYCU)\, Taiwan. 
\nHis music\, written for both acoustic and electronic media\, has been recognized with selections and awards from the Pierre Schaeffer International Computer Music Competition (1st Prize/2003)\, the Città di Udine International Contemporary Music Competition\, Musica Nova (First Prize/2010)\, Metamorphoses\, the International Computer Music Conference (ICMC\, Best Music Award/2011/2015/2022)\, the Taukay Edizioni Musicali call for Acousmatic Music (Winner/2019)\, the RMN Classical Electroacoustic call for works (Winner/2023)\, the Polish International Electroacoustic Music Competition (Finalist/2023)\, and the KLANG International Acousmatic Composition Competition (Second Prize/2023). \n  \nKeisuke Yagisawa: scanning \nThis video work explores the human perception of visual images. In response to art critic Clement Greenberg’s thesis about the immediacy and autonomy of painting\, philosopher Vilém Flusser argues that a “scanning” process occurs when perceiving a two-dimensional work of art. This video work takes this thesis as its theme\, expressing the instantaneous phenomenon of a light bulb breaking as visual and acoustic variations. Max and Processing were used for the video and audio processing. \nAbout the artist\nKeisuke YAGISAWA is an audiovisual artist. He studied electronic music\, video\, and visual art at the Royal Academy of Art in The Hague (Netherlands) and Tokyo University of the Arts (Japan)\, and holds a doctoral degree (DMA) from Kunitachi College of Music in Japan. His works have been presented at international conferences and festivals including ICMC\, NYCEMF\, and SICEMF. He now works at Tamagawa University as an assistant professor of electronic music and technology art. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/concert-1b/
LOCATION:Friedrich-Ebert-Halle\, Alter Postweg 34\, Hamburg\, 21075\, Germany
CATEGORIES:11-05,Concert,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260511T213000
DTEND;TZID=Europe/Amsterdam:20260511T233000
DTSTAMP:20260505T121343
CREATED:20260421T145800Z
LAST-MODIFIED:20260423T185733Z
UID:10000067-1778535000-1778542200@icmc2026.ligeti-zentrum.de
SUMMARY:Club Concert 1C
DESCRIPTION:Immerse yourself in a 20.8-channel sound world: in the Production Lab of the Ligeti Center\, neural synthesis\, artificial intelligence\, and interactive visuals merge into an immersive live experience. International artists present innovative prototypes—from AI-augmented string instruments to dynamic graphic scores. \n  \nProgram Overview\nZwischenheit \nRiccardo Ancona \nKnitting\nBrian Lindgren \nSonic Memories: A Live Coding Performance with Machine-Learned Sound Fragments\nRiccardo Mazza \nGradient Noise: Animated Scores with Corresponding Data Streams\nJohn C.S. Keston \nFluid Ontologies\nNicola Leonard Hein and Viola Yip \nOn The Edge\nKasey Pocius \nScarittera – Subterranean Eruptions of Sonic Memory\nDanilo Randazzo \n\n\n  \nAbout the pieces & artists\nRiccardo Ancona: Zwischenheit \n\nContemporary neural audio research frames “music understanding” as a computational task. What does it mean for a machine to listen and understand a sonic context? Zwischenheit (2025) is an audiovisual performance that aims at finding a speculative\, empirical\, situated answer. The projection shows the performer having an improvisational dialogue with an algorithmic system composed of an audio captioner and a local language model. While the sound piece unfolds\, it reveals a complex scenario made of overlapping soundscapes. The language model is prompted to interpret the music as it flows\, trying to provide a nuanced understanding of the sonic situation. The human performer\, on the other hand\, is both inquisitive and reflective: at which threshold does the language model begin to appear as an agent of mystification? What does agency without consciousness reveal about listening? The outcomes of the dialogue change at every performance\, as there is a certain degree of stochasticity in the model’s replies\, but they always point to critical aspects of sonic hermeneutics and computational cognition. 
Embodiment\, contingency\, and situatedness emerge as essential characteristics of human listening that contemporary neural networks cannot embed. Zwischenheit is thus an attempt at investigating the performative possibilities that emerge at the intersection between post-acousmatic music\, music information retrieval\, and generative AI through an analytical self-reflection. \nAbout the artist\nRiccardo Ancona is a sound artist and PhD researcher in musicology of algorithmic music at the University of Bologna. He studied at CREA (Frosinone) and at the Institute of Sonology (Den Haag)\, where he specialized in algorithmic improvisation. His research focuses on computational aesthetics\, archival study of computer music\, and the sociology of neural audio technologies. He also curates Miniature Recs. \n  \nBrian Lindgren: Knitting \nKnitting is a new work for the EV\, an augmented bowed string instrument that integrates IRCAM’s RAVE (Realtime Audio Variational autoEncoder) neural synthesis model. The composition explores how machine learning can extend the timbral vocabulary of a traditional gestural practice—not by imposing external sonic material\, but by folding the instrument’s own acoustic identity back through a neural lens. \nThe EV combines a 3D-printed body with four infrared optical pickups whose signals are processed by a Bela board and transmitted to a laptop running Pure Data. Each string controls an independent synthesis engine comprising convolution\, physical modeling\, granular processing\, reverb\, and ambisonic spatialization. The recent addition of RAVE introduces a self-referential pathway: the model was trained on four hours of the EV’s own recordings\, creating a system that listens to itself through learned representations of its sonic history. 
\nCentral to this integration is a control strategy that maps performance descriptors—fundamental frequency\, amplitude\, and spectral centroid—to specific dimensions of the model’s eight-dimensional latent space. By constraining each modulation source to a single latent dimension\, the relationship between gesture and neural response becomes legible: a shift in bow pressure or position translates into a navigable timbral trajectory rather than an opaque transformation. This approach distinguishes the EV from other RAVE-integrated instruments\, which often emphasize loop-based or tabletop interfaces rather than continuous bowed-string control. \nKnitting treats this latent space as a landscape of sonic possibility\, each dimension a potential resonance between physical gesture and synthesized response. The compositional process is less one of arranging fixed materials than of cultivating emergent textures—drawing out sonic filaments\, crossing and interlacing them\, balancing tensions across the tapestry. The neural model functions as a meta-resonator: a parallel pathway that refracts the instrument’s timbral identity through an alternate causal route\, revealing aspects of its sound that remain latent in conventional electroacoustic processing. \nThe work demonstrates how neural synthesis can be embedded within a hybrid instrument ecology\, extending expression beyond pitch and amplitude to make performance descriptors direct agents of timbral transformation. By grounding latent navigation in the acoustic features of bowed-string technique\, Knitting positions machine learning not as a replacement for embodied practice but as an expansion of its expressive range. \nAbout the artist\nBrian Lindgren (1983) is a composer\, researcher\, violist\, and instrument builder whose work explores the convergence of acoustic performance and digital synthesis through the EV\, a hybrid string instrument integrating lutherie and embedded computing. 
\nHis compositions and research have been featured at the International Computer Music Conference (ICMC)\, New Interfaces for Musical Expression (NIME) conference\, Conference on Neural Information Processing Systems (NeurIPS)\, Society for Electro-Acoustic Music in the United States (SEAMUS)\, IRCAM Forum\, and International Conference on Auditory Display (ICAD)\, as well as published in Organised Sound. His work has been performed by ensembles including HYPERCUBE\, LINÜ\, Popebama\, and Tokyo Gen’on Project. \nThe EV was a finalist in the 2026 Guthman Musical Instrument Competition and used to compose ‘two tales from the shadows of the grid’ which won first place at the IEEE Big Data 2025 3rd Workshop on AI Music Generation Competition. \nLindgren holds an MFA in Sonic Arts from Brooklyn College (Subotnick\, Geers\, Gimbrone)\, a BA from the Eastman School of Music (Graham)\, and is pursuing a PhD at the University of Virginia (Burtner). \n  \nRiccardo Mazza: Sonic Memories: A Live Coding Performance with Machine-Learned Sound Fragments \nDrawing from Henri Bergson’s concept of *durée* and Deleuze’s rhizomatic models\, “Sonic Memories” reimagines memory not as a linear chronological archive\, but as a stratified field of coexisting planes. In this live coding performance\, autobiographical sound fragments—from mechanical gears to lagoon soundscapes and fragile voices—are liberated from their timeline and reorganized by an autoencoder into a non-hierarchical\, navigable map. \nThe performance begins with the simple act of loading a personal audio file—a field recording from a journey\, a voice memo\, a musical fragment—into a computational system that immediately begins to analyze and reorganize these sonic memories according to its own logic. \nOn stage\, the audience sees everything: the code acting in real-time\, a visual map where memories become points in space\, oscilloscopes showing the transformation of sound waves. 
This transparency is essential—there is no mystification of the technological process\, but rather an invitation to witness the negotiation between human remembering and algorithmic interpretation. \nThe performer navigates this latent space using SuperCollider and FluCoMa\, triggering both the original “concrete” traces and their AI-generated “distorted echoes.” The algorithm serves not as an autonomous agent\, but as a refracting lens\, forcing the performer to negotiate between faithful recall and neural hallucination. The result is a fragile dialogue between the fixity of the past and the malleability of the present\, exploring how computational tools can actualize memory as a living\, reconstructive act. \nThe work asks: How do we perform memory in an age of machine learning? Not by having machines remember for us\, but by creating dialogues with computational systems that reorganize our experiences according to their own logic\, forcing us to rediscover our own histories through unfamiliar maps. \nAbout the artist\nRiccardo Mazza (Turin 1963). Composer\, multimedia artist\, and faculty member at the Scuola di Alto Perfezionamento Musicale di Saluzzo. He collaborates with SMET (Electronic Music School) at the Conservatorio di Torino and the Conservatorio Ghedini in Cuneo\, and is internationally recognized for his research in psychoacoustics and spatial audio.\nIn 1997 he began a collaboration with Franco Battiato\, focusing on new technologies for sound. Between 1999–2000 he created the Renaissance SFX library\, the first Dolby Surround encoded spatial effects and field recording collection for cinema and television. 
He later developed SoundBuilder\, software for object-based surround design presented at AES 2003 in San Francisco\, which anticipated Dolby Atmos.\nHe founded Interactive Sound in 2001\, a research studio dedicated to multimedia exhibitions and immersive installations\, and in 2003 patented a psychoacoustic model of “sleep waves.” With Laura Pol\, he co-founded Project-TO (2015)\, an electronic and visual project that has released four albums and appeared at major festivals including TFF\, TJF\, Robot\, and the Share Festival.\nSince 2018 he has directed Experimental Studios in Turin\, one of Europe’s leading Dolby Atmos recording facilities. His current project Sonic Earth explores environmental sonification and algorithmic composition\, and has been presented internationally at ICMC 2025 in Boston\, FARM/SPLASH 2026 in Singapore\, SBCM 2025 (Brazil)\, and IEEE 2025 (L’Aquila). \n  \nJohn C.S. Keston: Gradient Noise: Animated Scores with Corresponding Data Streams\nSince 2019 I have been composing animated graphic scores for ensembles and soloists. These generative works are projected for both the performers and the audience to experience. Custom software runs during the performance to create the computer graphics and geometric forms. Rules are established for how the forms are read\, but improvisation and the emotional response of the performer still play an integral part in each piece. Fixed-media versions of this work do not suffice because they lack the realtime\, generative\, and participatory aspects that create surprise and challenges for the performers. \nMore recently I began composing scores that not only generate animated visuals\, but also stream corresponding MIDI data that affects the timbre and signal processing of the electronic instruments used by the performers. The instruments are either hardware-based synthesizers or virtual instruments within a DAW such as Ableton Live. 
One of my recent compositions applies these streams of data to four layers of FM synthesis engines running within the Dirtywave M8\, a technically advanced modern hardware tracker. \nMy newest work in progress\, Gradient Noise\, translates values generated by the Perlin noise algorithm into independent layers of seamless loops repeating at variable intervals. These loops are visualised as geometric forms\, abstract visualisations\, and evolving structures. The generated data is innovative because\, although aleatoric\, the values can be tuned to range from slowly moving gradients to rapid\, angular forms. When the sound and visuals are synchronized\, the performer responds not only to the animation but also to the changes in the timbre of their instruments. \nThe debut of Gradient Noise will address the themes of Innovation\, Translation\, and Participation by rethinking the relationships between musicians and machines. By translating the properties of n-dimensional Perlin noise into a musical language\, the piece presents a unified ecosystem with coordinated timbres and geometric forms. The innovation lies in generating a living environment that requires active participation and improvisation in contrast to static notation. Ultimately\, the work presents a contemporary model for computer music in which the performer does not simply follow a score\, but negotiates a path through a responsive\, multi-sensory experience. \nAbout the artist\nJohn C.S. Keston is an award-winning transdisciplinary artist reimagining how music\, video art\, and computer science intersect. His work both questions and embraces his backgrounds in music technology\, software development\, and improvisation\, leading him toward unconventional compositions that convey a spirit of discovery and exploration through the use of graphic scores\, chance and generative techniques\, analog and digital synthesis\, experimental sound design\, signal processing\, and acoustic piano. 
Performers are empowered to use their phonomnesis\, or sonic imaginations\, while contributing to his collaborative work. Keston founded the sound design resource AudioCookbook.org\, where you will find articles and documentation about his projects and research. \nJohn has spoken\, performed\, or exhibited original work at SEAMUS (2025)\, Radical Futures (2024)\, New Interfaces for Musical Expression (NIME 2022)\, the International Computer Music Conference (ICMC 2022)\, the International Digital Media Arts Conference (iDMAa 2022)\, International Sound in Science Technology and the Arts (ISSTA 2017-2019)\, Northern Spark (2011-2017)\, the Weisman Art Museum\, the Montreal Jazz Festival\, the Walker Art Center\, the Minnesota Institute of Art\, the Eyeo Festival\, INST-INT\, Echofluxx (Prague)\, and Moogfest. In 2017 he was commissioned by the Walker Art Center to compose music for former Merce Cunningham. He has appeared on more than a dozen albums\, including solo albums and collaborative works. \n  \nNicola Leonard Hein and Viola Yip: Fluid Ontologies\nIn “Fluid Ontologies”\, Transsonic (Nicola Leonard Hein and Viola Yip) continues to expand their intermedial artistic practice in performance. For this project\, they developed their laser feedback instruments\, using lasers as sound sources and solar panels as microphones. With the incorporation of multichannel spatialization\, Transsonic extends the spatial dimensions\, sonically and visually\, creating a unique audiovisual experience. The project explores and defines new concepts of the instrumentality of light in audio circuits\, bringing together space\, bodies\, and instruments into a dynamic feedback system. \nAbout the artists\nDr. Nicola L. 
Hein is a sound artist\, guitarist\, composer\, researcher\, programmer\, and professor of Sound Arts and Creative Music Technology at the University of Music Lübeck.\nHe works with A.I.-assisted human-machine interaction\, postdigital lutherie\, intermedia\, sound installations\, augmented reality\, network music\, and spatial audio. His works have been realised in more than 30 countries\, at festivals such as the MaerzMusik Festival\, the Sonica Festival\, and Experimental Intermedia. \nDr. Viola Yip is an experimental performer\, sound artist\, and instrument builder.\nHer work has been presented and supported by places such as Stanford University\, UC Berkeley\, Harvard University\, Cycling ‘74 Expo\, Hong Kong Arts Center\, Academy of Media Arts Cologne\, Academy of the Arts Berlin\, KTH Royal Institute of Technology Sweden\, Elektronmusikstudion EMS Stockholm\, NOTAM Oslo\, Arter Museum Istanbul\, Serralves Museum of Contemporary Art Porto\, and the Pinakothek der Moderne in Munich. \nviolayip.com \n  \nKasey Pocius: On The Edge \nOn the Edge is an audiovisual work for video\, T-Stick\, and surround sound. It explores sounds and images of objects often at the edges of our perception\, as well as processing and results from edge cases in musical algorithms and technology. \nThe piece consists of four interlayered vignettes\, exploring the behaviour and textural qualities of various edge and peak detection algorithms to create the fixed media. These files are then used as the corpus for the granular synthesis controlled by the T-Stick. The gestural data from the T-Stick is sent from Max to Ossia\, where it is used to manipulate the treatment of the video clips in real time. \nThe technical aspects of the work consist of a fixed-media ambisonic file\, with real-time manipulation of video clips (in Ossia Score) and multichannel granular synthesis (in Max) controlled by the T-Stick. 
\nAbout the artist\nKasey Pocius is a gender-fluid intermedia artist and researcher based in Montreal\, teaching at Concordia and active with CIRMMT\, IDMIL\, LePARC\, and GRMS. They create electroacoustic and audiovisual works that explore interactive electronics\, spatial sound\, and collaborative improvisation\, with pieces programmed globally from DIY spaces to Harvard.
URL:http://icmc2026.ligeti-zentrum.de/event/club-concert-1c/
LOCATION:ligeti center\, Production Lab (10th floor)\, Veritaskai 1\, Hamburg\, 21079\, Germany
CATEGORIES:11-05,Club Concert,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260512T110000
DTEND;TZID=Europe/Amsterdam:20260512T173000
DTSTAMP:20260505T121343
CREATED:20260421T181755Z
LAST-MODIFIED:20260428T112006Z
UID:10000185-1778583600-1778607000@icmc2026.ligeti-zentrum.de
SUMMARY:Listening Room 2
DESCRIPTION:Fixed Media | Program Overview\nInner Line\nHyewon Kim \nA Portrait of Kwesi Brookins\nRodney Waschka \nBiomimicry\nChun-Han Huang \nDear Beginner\nVadim D. Genin \nEncircled\nAdam Stanovic \nmight have seen\nTakumi Harada \nSilence\nZiyu Pang \nTemporal Shards\nRay Tsai \nWhen An Android Becomes Obsolete\nGiancarlo Alfonso \n\n  \nAbout the pieces & artists\nHyewon Kim: Inner Line\nThe self is not something fixed\, but rather a moving line that is constructed through continuous interaction and emotional attunement with others. However\, in personality disorders\, this system of connection is damaged. Mirror neurons function dimly\, and the ability to share emotions by following another’s gaze or to attune feelings toward the same object becomes dulled. These individuals either fail to perceive the distance between ‘me’ and ‘you’ or become extremely conscious of it\, unable to live together in a shared world. ‘Inner Line’ explores these internal fractures and fragments the audience’s gaze and emotions. The piece is structured around an unstable relationship between live performers\, where accumulated interactions gradually alter the percussionist’s playing rather than producing immediate disruption. Spatially\, performers and sound sources are distributed across the stage and loudspeaker field\, preventing the audience from occupying a single\, optimal listening position. It does not control or design the audience’s emotional responses\, but rather leaves them to interpret and explain for themselves what emotions they experienced. And it reveals that this imperfect sensation itself is the essence of how we connect with others. Through this work\, rather than sharpening the boundaries of the inner self\, I aim to identify where our gaze has been fixed and to unravel that rigid thinking. \nAbout the artist\nHyewon Kim (b. June 1989) is a composer based in Seoul\, South Korea. 
She treats all vibrating objects as equal musical material\, working primarily with percussion and electroacoustic media. She also conceives and produces sound-based exhibitions in which audiences directly experience the inherent vibrations of materials. She earned her B.A. and M.A. in composition from Chugye University for the Arts\, studying with Sungjun Moon\, and is an active member of the Korean Electro-Acoustic Music Society (KEAMS). Her works have been presented at international and domestic festivals\, including ICMC. \n  \nRodney Waschka: A Portrait of Kwesi Brookins\nA Portrait of Kwesi Brookins is one of a series of computer music acousmatic portraits of artists\, composers\, and friends. Dr. Brookins\, a former professor of psychology and Africana Studies at North Carolina State University\, now serves as a Vice Provost at his alma mater\, Michigan State University. This sound portrait makes use of a recording of his rich\, sonorous voice saying his name and (then) position and naming his main area of work – child welfare. The piece also uses a public domain melody from Ghana\, a country Dr. Brookins has visited often as a scholar and pedagogue. \nAbout the artist\nRodney Waschka II is probably best known for his algorithmic compositions and his unusual operas. His music has been called “astonishing” and “strikingly charismatic” by Paris Transatlantic Magazine\, “a milestone in the repertoire” by Computer Music Journal\, “fluent and entertaining” by Musical Opinion of London\, and “oddly moving” by Journal Seamus. His mentors include Larry Austin\, Robert Ashley\, Paul Berg\, Clarence Barlow\, Konrad Boehmer\, Thomas Clark\, Charles Dodge\, and George Lewis. Waschka is Director and Professor of Arts Studies at North Carolina State University. \n  \nChun-Han Huang: Biomimicry\nBiomimicry is an electroacoustic composition constructed entirely from synthetic sound sources. 
Originating as a technical exploration within the Max programming environment\, the work emulates natural sonic phenomena—including weather patterns and animal vocalizations—without the use of field recordings or sampling. By reconstructing organic textures through digital synthesis\, the piece navigates the boundary between the artificial and the natural\, creating immersive soundscapes that range from ambient subtlety to chaotic intensity. \nAbout the artist\nChun-Han Huang (b. 2002) is a composer and sound artist based in Taiwan. He is currently a graduate student majoring in Computer Music at the Institute of Music\, National Yang Ming Chiao Tung University (NYCU). His creative practice focuses on electroacoustic composition and sound design\, exploring the intersection of organic sound sources and digital signal processing. \n  \nVadim D. Genin: Dear Beginner\nThe idea behind the piece is to create a relationship between the live performer and the fixed electronics\, as if the performer were relying on the signals the electronics were sending. All together\, it could resemble the process of getting to know the interface of some new\, unfamiliar equipment\, with the task becoming increasingly complex. The electronics are composed of sounds extracted from videos that have been accumulating in the smartphone’s memory for years. Moreover\, the selected samples are moments during which nothing happens in the video\, that is\, the most unnecessary garbage sounds. Finding sounds that are not interesting in their usual form and using them for musical purposes is an interesting challenge for the composer. \nAbout the artist\nVadim D. Genin. Born November 14\, 1993. Composer\, sound-artist\, PhD degree in Physics and Mathematics. Graduate of the Saratov State Conservatory and Saratov State University. Major projects are the video game opera “The World of Wondrous Rooms” and the documentary cantata “The Restorer”. 
He has participated in festivals such as impuls (Austria)\, IDEA IWYC (Bulgaria)\, ARCo (France)\, CEME (Israel)\, ilSuono (Italy)\, Meridian (Romania)\, Teden Sodobne Glasbe Bled (Slovenia)\, Encontres de Compositors (Spain)\, reMusik.org (Russia)\, and ICMC (USA). \n  \nAdam Stanović: Encircled\nIn early 2025\, staff and students from the Sound and Music Programme\, LCC\, travelled to the London Wetlands Centre (part of the Wildfowl and Wetlands Trust\, WWT) to make recordings of the site and its surrounding areas. The project was inspired by the long-running SoundLapse project at the Universidad Austral de Chile\, where recordings of the wetlands around Valdivia have inspired ecological\, educational\, and creative research. During the course of our project\, we met with our counterparts in Chile and learned about their interests\, methods\, and research goals. We also met with staff at the London Wetlands Centre\, and heard about the rapid decline in global wetland environments and their plans to create 100\,000 hectares of sustainable wetlands in the UK. Arriving at the London Wetland Centre on the morning of 11 February 2025\, I was immediately struck by the relative silence. For over an hour I had battled my way through the bustle of central London\, travelling by bus\, tube\, and train. And suddenly\, I was surrounded by stark winter trees. I could hear my feet crunching on the stony paths\, and a distant crow cawing. For a moment\, it felt like an oasis of calm. But as my ears acclimatised\, I realised that London\, the great metropolis\, was ever-present. I could hear it rumbling in the distance\, interrupted only by the roar of overhead planes bound for Heathrow. The more I tried to focus on the sounds of the centre itself\, the more aware I became of the monstrous city beyond… We were\, I felt\, encircled… Initially\, this piece set out to traverse city and Centre\, transitioning from the chaos of one to the calm of the other. 
As the piece developed\, however\, I started to realise that it is not simply the London wetlands that are encircled; although their geographies are vastly different\, most of the world’s wetlands are\, in one way or another\, equally encircled… \nAbout the artist\nAdam Stanović composes music with recorded sound. In recent years\, his music has drawn from both studio and location recordings\, using both digital and analogue technologies. To date\, he has won prizes\, residencies\, and mentions in over 40 international composition competitions\, had his music performed in over 700 international concerts\, and published works on 16 different albums\, including three solo albums (on the Sargasso and Empreintes DIGITALes record labels). Adam is Dean of Screen\, University of the Arts\, London. For more information\, visit: www.adamstanovic.com \n  \nTakumi Harada: might have seen\nThis piece is composed primarily of sounds obtained through field recording. Originally\, these sounds were recorded in a variety of locations and may appear to lack a clear contextual relationship. However\, through processes of transformation and manipulation\, their conventional sonic characteristics are dismantled. As a result\, latent commonalities inherent in the sound materials emerge\, generating connections among them and forming a unified continuity within the work as a whole. \nAbout the artist\nTakumi Harada. Born in 2000. From Tokyo\, Japan. Currently enrolled at Kunitachi College of Music. Began producing works in 2025. \n  \nZiyu Pang: Silence\nIn this age of information overload\, countless voices swirl around us. Yet rather than merely adding our own\, there are times when we simply wish to remain silent. Within that silence\, everything is expressed. This work employs minimal sound material\, dissecting the passage of time into numerous fragments of silence. 
It aims to strip away all superfluity\, reaching the most fundamental tranquillity of sound and spirit\, expressing the Eastern philosophical notion that ‘silence is golden’. \nAbout the artist\nZiyu Pang (born March 29\, 2005) is a third-year undergraduate student majoring in Music Sound Direction at the Wuhan Conservatory of Music. \n  \nRay Tsai: Temporal Shards\nTemporal Shards is a short electroacoustic work that explores fragmented experiences of time within everyday perception. Through brief\, distorted frequencies shaped by compression\, reversal\, and abrupt interruption\, the piece unfolds fleeting moments emerging from a narrow temporal fissure. Temporal flow becomes unpredictable as acceleration\, suspension\, and sudden collapse coexist\, leaving behind transient perceptual traces that resist linear progression and stable structure. \nAbout the artist\nRay Tsai (Tsai Yi-Jui)\, born in Hsinchu and currently studying at National Yang Ming Chiao Tung University\, is a DJ\, music producer\, and new media artist. His work spans sound art\, electroacoustic music\, and video installation\, using experimental sonic structures to explore the relationship between technology and perception. Under the alias †Egothy†\, he is active in the underground electronic music scene\, performing noise\, deconstructed electronics\, and other avant-garde styles that shape sensory experiences oscillating between chaos and order. \n  \nGiancarlo Alfonso: When An Android Becomes Obsolete\nThis musical composition explores the concept of technological and computational obsolescence\, relating it to a reflection on human obsolescence and human labor\, whether mechanical\, artistic\, or intellectual. In a social context increasingly oriented towards efficiency and automation\, human beings find themselves in a state of constant competitive rivalry with artificial systems designed to be faster\, tireless\, and free from the weaknesses that characterize human labor. 
The composer makes use of the figure of an android that has reached the end of its life cycle as a metaphor for this condition. Throughout the composition\, the machine’s final moments of activity are evoked\, during which states of confusion\, anger\, despair\, and acceptance emerge. These states do not represent real emotions\, but rather the result of calculations and assessments of its own operational status\, which contributes to humanizing the android by placing it on the same level as human beings. From a compositional perspective\, the piece is mainly based on granulation techniques\, accompanied to a lesser extent by subtractive and additive synthesis. The sound material consists mainly of concrete mechanical and metallic sounds\, which are then granulated and processed\, along with cold\, more abstract\, unstable\, and unnatural synthetic sounds. The composition develops as a continuous friction between mechanical and rhythmic rigidity and sonic and timbral neuroticism\, suggesting the progressive deterioration of the physical and mental functions of the android protagonist. \nAbout the artist\nGiancarlo Alfonso (born 14 June 2000) is a composer and electroacoustic music student at the Conservatorio \n 
URL:http://icmc2026.ligeti-zentrum.de/event/listening-room-2-2/
LOCATION:Hamburg University of Technology\, Building A (A 0.14)\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:12-05,Listening Room,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260512T110000
DTEND;TZID=Europe/Amsterdam:20260512T173000
DTSTAMP:20260505T121343
CREATED:20260421T184536Z
LAST-MODIFIED:20260428T110819Z
UID:10000180-1778583600-1778607000@icmc2026.ligeti-zentrum.de
SUMMARY:Listening Room 1
DESCRIPTION:Fixed Media | Program Overview\nSeething Field: Imprint\nSam Wells \nContours of Anxiety\nZihan Wang and Wenxin Zhou \nDreams of the Jailed Refugee\nRobert Sazdov \nflusso_sonoro_1\nSebastiano Naturali \nGalactic Railroad\nYunze Mu \nIdeale Landschaft Nr. 6\nClemens von Reusner \nIncarnations\nYoungjae Cho \nInformation Body Horror\nPrimrose Ohling \nJardín de Luz\nIván Ferrer-Orozco \nNon è un atlante di traiettorie algo-siderali\nAndrea Laudante\, Paolo Montella and Giuseppe Pisano \nNor’wester\nTeerath Majumder \nOcean Reflection\nYu Qin \nrain contained\, rain contains…\nWei Yang \nSAW\nGabriel Araújo \nWild Fruits: Epilogue\nJames Harley \nInside the metal plate\nRaul Masu and Francesco Ardan Dal Ri \n\nAbout the pieces & artists\nSam Wells: Seething Field: Imprint\nSeething Field: Imprint is a turbulent interplay of memory and resonance\, written for seventh-order ambisonic fixed media. The work unfolds through the filtering\, modulation\, distortion\, and reverberation of a time-stretched recording of Jack Kerouac. Using ambisonic impulse responses of the Chapel of the Four Chaplains\, located in the basement of the Temple Performing Arts Center\, Seething Field: Imprint harnesses the reverberant and sonic characteristics of the Chapel\, a space dedicated to four chaplains who sacrificed their lives on the USS Dorchester—a ship Kerouac once served on but was recalled from before its tragic sinking. The source material for Seething Field is a brief recording of Kerouac speaking “The Ocean\,” time-stretched 512 times from about 1.5 seconds to over 10 minutes. This slow unfolding of speech provides the formal and harmonic structure of the work. The stretched recording was then recursively recorded through the Chapel’s reverb\, a process akin to Alvin Lucier’s I Am Sitting in a Room\, revealing and reinforcing the shared resonant frequencies of Kerouac’s voice and the Chapel dedicated to the Four Chaplains. 
The title draws from the closing lines of Kerouac’s The Sea is My Brother\, where ‘the sea stretched a seething field which grew darker as it merged with the lowering sky.’ Seething Field mirrors this seascape\, embodying the darkness\, expansiveness\, and tension of turbulent times\, past and present. \nAbout the artist\nSam Wells (Philadelphia) is a musician and artist whose work explores breath\, space\, and embodiment across acoustic\, electronic\, and multimedia forms. As a trumpeter and improviser\, he has performed internationally with ensembles including Aeroidio\, Miller/Vidiksis/Wells Trio\, and SPLICE Ensemble. His compositions have been performed widely in the U.S. and abroad. Wells is a Cycling ’74 Max Certified Trainer and an Assistant Professor of Music Technology and Composition at Temple University. \n  \nZihan Wang and Wenxin Zhou: Contours of Anxiety\nThis work draws inspiration from my own experience and understanding of anxiety\, attempting to express it through an immersive sonic work. I perceive this emotional state as an ever-shifting form: evoked\, diffused\, released\, and finally calmed. It should be clarified that this is not designed according to rigorous psychological science but rather leans towards an expression of my personal emotional experience. Throughout the piece\, the characteristics of this psychological journey (suppression\, constraint\, and release) are mapped onto frequency density\, timbre\, texture\, and the spatial tension between acoustic elements. From a compositional perspective\, timbre spatialisation functions as the primary technical and expressive strategy. This involves decomposing sound according to its spectral characteristics and distributing these components across distinct spatial locations (Normandeau\, 2009). Within this framework\, ambisonics operates as the spatialisation method\, whereby its manipulable parameters generate distinctive timbral and spatial effects. 
These elements collectively construct the work’s internal emotional narrative\, integrating spatial parameters as essential compositional materials rather than superimposed effects. \nAbout the artists\nZihan Wang (07/12/2000) is an electroacoustic music composer\, film composer\, and sonic artist. He is currently a postgraduate research student at Monash University\, Melbourne\, Australia\, where his work investigates compositional strategies for ambisonics-based environments. His research engages with Robert Normandeau’s concept of timbre spatialisation and Denis Smalley’s theory of spectromorphology\, with a particular emphasis on timbre\, spatial articulation\, and electroacoustic composition. His creative practice includes fixed-media electroacoustic works\, sound installations\, animated score composition\, and film scoring. His work has been presented at venues and conferences including TENOR 2025 and the Melbourne International Film Festival (MIFF). \nWenxin Zhou is a composer specializing in electroacoustic and interactive media music. She holds a Bachelor of Composition and Music Production from the Australian Institute of Music and graduated with a Distinction in the Master of Composition for Electroacoustic and Interactive Music from the University of Manchester. Her creative practice focuses on exploring the transformation and fusion between real-world sounds and electronic soundscapes. \n  \nRobert Sazdov: Dreams of the Jailed Refugee\n‘Dreams of the Jailed Refugee’ (2023-25) is the final work of the ‘Dreams of the Jailed’ trilogy. This final instalment extends the trilogy’s engagement with statelessness and incarceration\, attending to the psycho-emotional toll borne by refugees subjected to carceral regimes — both literal and systemic — under global structures of war\, famine\, and economic precarity. 
Composed for fixed media\, the work utilises the acousmatic form to foreground disembodied sonic presences\, suggesting the persistence of memory and agency even under conditions of profound erasure. The absence of visual referents invites the listener into a mediated interiority — a sound world shaped by fragmented dreams\, longing\, and dislocation. Dreams of the Jailed Refugee proposes a mode of listening that is politically charged and ethically attuned. It seeks to destabilise hegemonic narratives of migration by offering a counter-sonic space in which refugee subjectivities are not merely represented\, but sonically enacted. In doing so\, the work aligns with broader decolonial and posthumanist currents in contemporary sonic arts practice\, where listening becomes an act of recognition and resistance. \nAbout the artist\nRobert Sazdov is a composer\, music producer\, and academic. He is currently Associate Professor at the University of Technology Sydney (UTS) in Music and Sound Design\, where he also served as Head of Music and Sound Design (2018-2024). Sazdov’s compositions and productions have received notable prizes and awards from various organizations and institutions\, including the Daegu International Computer Music Festival\, International Composition Competition Città di Udine\, ‘Pierre Schaeffer’ Competition\, Musica Nova Competition\, Sonic Arts Awards\, Bourges International Competition\, Just Plain Folks Music Awards\, and the Audio Engineering Society. Sazdov’s music has been released by Capstone Records\, Vox Novus\, Accademia Musicale Pescarese\, Society for Electroacoustic Music\, Australasian Computer Music Association\, Sonic Arts Awards\, and SoundLab Channel. He has undertaken residencies at the Erich-Thienhaus-Institut\, Detmold University (2012)\, The Sonic Lab\, Sonic Arts Research Centre\, Queen's University Belfast (2007)\, and at SPIRAL – University of Huddersfield (2023). 
He was also a Visiting Research Fellow at the Applied Psychoacoustics Laboratory – University of Huddersfield (2023)\, the Institute of Electronic Music and Acoustics – Graz (2023)\, and The Sonic Lab\, Sonic Arts Research Centre\, Queen's University Belfast (2023). \n  \nSebastiano Naturali: flusso_sonoro_1\nflusso_sonoro_1 is a fixed-media electroacoustic composition that explores sound as a fluid\, continuously transforming entity\, oscillating between density and transparency. The work invites the listener to perceive sound both as an uninterrupted flow and as a succession of interruptions\, turbulences\, and suspensions that shape its trajectory. The piece reflects on temporal perception and memory through the interaction between microsonic detail and large-scale form. Recorded and synthetic sound materials are intertwined and progressively transformed\, emphasizing the tension between natural sound qualities and electronic abstraction. Rhythmic structures derived from iterative transformations and masking processes generate evolving layers that gradually increase in complexity and density. A central section focuses on sustained textures in the mid–low frequency range\, combining stretched vocal layers and processed high-frequency elements to create a suspended sonic environment. In the final section\, earlier rhythmic materials re-emerge and are subjected to global processing techniques\, including digital silence and spectral degradation. The work was composed using Ableton Live and Max for Live tools. It is presented as a stereo fixed-media piece\, with a quadraphonic version also available. \nAbout the artist\nSebastiano Naturali (born 15 February 2006) is an Italian composer and guitarist working in the field of electroacoustic and electronic music. He is currently pursuing undergraduate studies in Electronic Music and Classical Guitar at the Conservatory of Potenza. 
His work focuses on sound transformation\, spatial audio\, and practice-based artistic research using digital music systems. \n  \nYunze Mu: Galactic Railroad\n“Merry meet\, merry part.” At the end of a 4-year relationship\, I started to think about why people meet if we’ll eventually separate and what the true happiness or the final destination of everyone is. I found no answer. However\, this piece is somehow the record of my thinking during that period. This piece was inspired by the book “Night on the Galactic Railroad” by Japanese author Kenji Miyazawa. In my imagination\, the Milky Way is full of steam trains. They meet\, run together\, and separate eventually. None of them knows where the destination is; they just keep running toward somewhere\, restlessly. Does it matter if you know where the destination is? Maybe not. Are all the experiences more valuable than the end? Maybe so. The only thing I know is\, I’ll always keep running just like those steam trains\, no matter what. \nAbout the artist\nYunze Mu is a composer\, sound artist\, and music programmer based in Louisville\, Kentucky. He is currently an Assistant Professor at the University of Louisville School of Music. He received a DMA (Doctor of Musical Arts) in Composition at the College-Conservatory of Music\, University of Cincinnati\, where he studied computer music with Mara Helmuth\, taught introductory courses in electronic music\, and worked on his web-based music application\, Web RTcmix. Mu holds a bachelor’s degree in music composition from the Central Conservatory of Music\, Beijing\, China. His music\, papers\, and VR installations have been shown and performed at numerous events and conferences\, such as NIME\, ICMC\, SEAMUS\, and the NYC Electronic Music Festival\, and at venues in China\, Poland\, France\, the United States\, and Korea. \n  \nClemens von Reusner: Ideale Landschaft Nr. 6\nReal (sound) landscapes in all their variety have been recurring themes in the arts over the course of time. 
Special approaches can be found in so-called “ideal landscapes”\, namely in European landscape painting of the 17th and 18th centuries. The 8-channel electroacoustic composition “Ideal Landscape No. 6” is inspired by these constructed\, calm but non-real landscapes of European landscape painting as well as by an etching by the German artist N.N. It is the 6th sheet of his cycle “Variations in G”\, which has no title of its own. Although the composition is not about the “setting to music” of a graphic model\, there are structural similarities between the two works. The sound material consists of abstract sounds produced with synthesisers and calculated with Csound\, a programming language for sound synthesis\, using additive and subtractive synthesis techniques. \nAbout the artist\nClemens von Reusner is a composer based in Germany. His works of electroacoustic music and radiophonic audio pieces focus equally on purely electronically generated sounds and on sounds found in special places and processed in the studio. He is a member of the “Academy of German Music Authors” and he has received national and international awards for his compositions. They are performed worldwide at renowned international festivals of contemporary music. \n  \nYoungjae Cho: Incarnations\nThis work is the first piece in a series that employs higher-order Ambisonics\, focusing on the creation of an immersive environment through 3D audio. The composition is based on field recordings captured using Ambisonic microphones of various orders\, which serve as the primary material for spatial composition. The work explores the relationship between recorded soundscapes and temporal contexts by constructing a narrative structure that connects imagined past events\, present bodily experiences\, and speculative future occurrences. 
Through this approach\, spatial sound is treated not only as an acoustic phenomenon but also as a medium for linking time\, memory\, and place within an immersive listening environment. \nAbout the artist\nYoungjae Cho (1990) is a composer based in Bremen\, Germany\, and Korea. His work includes solo and chamber music\, electroacoustic music\, and live electronics\, focusing on immersive spatial sound through multichannel audio systems. Presented at international festivals such as DEGEM\, ZKM\, ICMC\, and ORF Musikprotokoll\, his music received the Gold Award at the IEM & VDT Student 3D Audio Production Competition\, and he was an Artist in Residence at ICST Zurich. \n  \nPrimrose Ohling: Information Body Horror\nPrivileging patterns of information over its instantiation is a promise first conceived of within the field of cybernetics and popularized through science fiction. Is disembodiment truly liberatory? Information Body Horror (IBH) was written to leverage the experience of having a body that is abstracted\, debated\, and legislated. Lived experience is twisted to force rhetoric and justify legislation\, displacing and endangering individuals and communities. IBH started with a claim of self-expression and agency through an improvised recording on a modular synthesizer\, resulting in ‘lived material’. The artist finds that improvisation with any instrument is an embodying experience where their expression is in its purest form. How much can you alter recordings before they lose their original meaning? Should you alter them to begin with? The artist not only manipulates recordings but imposes a layer of digital alterations. The use of sampling obscures source material in the meso timescale\, and a form of algorithmic micro-sampling through pitch shifting results in violent granular fractalizations. The act of sound design and composition becomes a reenactment of how external discourse overwrites lived reality. 
The stems are then mixed to 8 channels in Max/MSP\, reflecting the propagation of external discourse through institutional channels. The creative decisions in mixing ensure that the abstracted material is engineered to outperform and silence the lived material. This stage is where IBH is codified and written to memory. Finally\, abstractions are instantiated through 8 speakers\, a setup largely reserved for professional environments\, hindering public engagement. However\, those who can listen will have differing experiences depending on where they are in the listening environment. The setup surrounds listeners; it sonically reaches out and presses on them\, observing them. To call disembodiment in this case liberatory dismisses the lived and emboldens the abstracted material. I have found through this process that a separate disembodiment\, of self by self\, is impossible. It has only spoken to the complexity of selfhood. In the first section\, let the dense textures sink in as they swirl and oscillate in space. In the second section\, synthesized voices will call out to you from their digital void. In the final section\, sounds reflect ocean waves and wind\, natural patterns reclaiming space through digital noise. The textures eventually ease and lighten\, but this is not peace. It is endurance. The tides cycle. The winds change. They continue regardless. \nAbout the artist\nPrimrose Ohling (b. 2002) is a musician\, multimedia artist\, and coder. She is drawn to rhythms\, textures\, and the dichotomy between improvisation and the precision of digital electronics. Her foundation is in jazz saxophone and improvisation. She continues to explore that side of her artistry\, letting it influence her work. She finds inspiration in reconstituting familiar sounds\, creating immersive and evolving soundscapes. Her music explores transformation\, inviting listeners into a fluid auditory experience. 
Recently\, her focus has been on modular synthesis\, where she utilizes digital modules and custom DSP algorithms\, adding further depth to her distinctive style. As a trans artist\, her work often engages with themes of embodiment\, bodily autonomy\, and the violence of abstraction. \n  \nIván Ferrer-Orozco: Jardín de Luz\nJardín de Luz (2021) is based exclusively on the Debris Project sound database\, comprising over 2\,000 samples. An algorithm using musical descriptors selects materials that are further developed through sampling and synthesis\, generating a new category of sounds termed hybrids. Conceived as a form of sonic gardening\, the work organises these materials within the acoustic space. Light operates as a metaphor for the listener’s disposition to listen\, and the garden as a heterotopic sonic space. \nAbout the artist\nIván Ferrer-Orozco (Mexico City\, 1976) is a composer\, electronic media performer and computer music designer. His music has been performed extensively in festivals and by ensembles from Mexico\, Spain\, Canada\, Argentina\, Ecuador\, Chile\, South Korea\, France\, Hong Kong\, Vietnam\, Japan\, USA\, Germany\, Ireland\, Portugal\, Italy\, and Cyprus. He has been artist in residence at Akademie der Künste Berlin\, Schleswig-Holsteinisches Künstlerhaus\, Residencia de Estudiantes\, Camargo Foundation\, MacDowell Colony\, Djerassi\, CMMAS\, Hooyong Performing Arts Centre\, ARTos Foundation\, Ibermusicas\, i-Portunus\, Conseil des Arts et Lettres du Québec\, among others. As an electronic media performer\, he performs as a soloist and as a sideman with artists and ensembles from Spain and abroad. In 2019 and 2024\, the Mexican government appointed him to the Sistema Nacional de Creadores de Arte\, a national programme that awards outstanding artists from all disciplines. He was a member of Neopercusion\, a Madrid-based contemporary ensemble; currently he is a member of Vertixe Sonora Ensemble and Synergein Project. 
He has been part of the Forms of Culture Research Group at the Study Programme in Critical Museology\, Artistic Research Practices and Cultural Studies of the National Museum and Arts Centre Reina Sofia. He was awarded the 2021 Best Music Award of the International Computer Music Association. \n  \nAndrea Laudante\, Paolo Montella and Giuseppe Pisano: Non è un atlante di traiettorie algo-siderali\nThis piece is neither an exploration of movement nor a calculated map. Instead\, it invites the listener to experience a sonic drift\, propelled by precise mathematical rules. Large sound masses govern the flow\, turning slowly with heavy inertia\, while sharper\, faster sounds cut through the space\, leaving vivid acoustic traces. The resulting soundscape is a complex web of intersecting paths. Musical fragments pulse rhythmically\, creating a changing geometry of sound that expands and contracts around the audience. Originally composed in Higher-Order Ambisonics\, this work was forged collectively—through shared practices\, exchanged sounds\, and the unpredictable alchemy of collaboration\, allowing the music to evolve in ways no single mind could anticipate. \nAbout the artists\ntotaleee is a trio of composers of acousmatic music and laptop performers consisting of Giuseppe Pisano-Riise (1990)\, Andrea Laudante (1993)\, and Paolo Montella (1986). In their composition work they use immersive audio technologies to create fictional environments of plausible and impossible nature. This is done through the use of multichannel synthesis techniques\, physical modeling of room acoustics\, field recordings\, and feedback loops. The trio debuted with their first piece ‘Non è un compendio di Etologia numerico-digitale’ in 2023\, and since then their works have been played in many different contexts including ICMC (2023 Shenzhen\, 2024 Seoul)\, Ircam (Paris)\, Sonosfera (Pesaro)\, ACMC (Sydney)\, Prix CIME\, WOCMAT (Taipei) and many more. 
They have also received awards such as the first prize at ISAC 2024 (International Sonosfera Ambisonics Competition)\, the Teresa Rampazzi Award at CIM XXIV and a Distinction per Category at CIME 2023. totaleee is the first stable project to emerge from Napoli Totale Elettronica [NTE]: an open and fluid composers’ society that embraces the collective electroacoustic works produced by affiliated artists from and/or based in Naples. Connected to the NTE collective are the DIY portable loudspeaker array VOLTA and the festival for multimedia arts Marginale. \n  \nTeerath Majumder: Nor’wester\nThe Bengali new year (mid-April on the Gregorian calendar) invariably brings with it violent storms near the Bay of Bengal. We call them Kal Baishakhi. They can wreak havoc on people’s daily lives\, damage crops\, cause floods\, and displace people. The suffering is considerable. Yet\, somehow\, knowing that these cataclysmic events are inevitable helps generate acceptance. We know what nature has in store for us\, the destruction it will cause; we also know it will pass. It is all part of the cycle. During a particularly difficult time of my life when I was reconciling with several grave losses\, the Kal Baishakhi and our acceptance of it were inspiring. They helped me see the bigger picture beyond the carnage\, the impermanence of everything\, and the strength in us to grieve and overcome loss. Nor’wester is a depiction of not just the dynamics of a storm but also our tumultuous experience of it. It is gradual and sudden\, momentary and eternal\, stationary and chaotic. These are some of the qualities that have been expressed in this piece through electronic timbres and spatialization. The piece was written using a range of generative processes that gave rise to complex timbres. The sounds were modeled using wavetable and granular synthesizers along with careful parameter randomization. No audio samples were used. 
The piece also explores different combinations of polyrhythmic patterns that fit within a fixed cycle length. Moving frequently between these combinations often creates a disorienting effect while maintaining a grid-like rhythmic quality. The sounds then went through first-order ambisonic encoding (mono\, stereo and granular). Ambisonic effects such as delay\, reverb and compression were applied during mixing. The ambisonic master was then decoded for octaphonic playback. \nAbout the artist\nTeerath Majumder is a Bangladeshi composer and technologist who works in interactive and immersive media\, computer music\, and sound design. He questions socio-sonic dynamics that are often taken for granted and reimagines relationships between participants through technological mediation. In 2025\, he created Do Not Feed the Robots\, a participatory concert involving a range of “interactive objects” and automatons. In the same vein\, his 2022 work Space Within fostered collaboration between audience members and featured musicians using his “interactive objects.” His collaboration with Nicole Mitchell resulted in the immersive sound installation Mothership Calling (2021) that was exhibited at the Oakland Museum of California. He composed and designed sound for Qianru Li’s immersive multimedia piece A Shot in the Dark (2023) that explored Asian-American identity in the face of anti-Black police violence with reference to the shooting of Akai Gurley in 2014. His compositions have been performed by Hub New Music\, Transient Canvas\, and London Firebird Orchestra among other ensembles. He often collaborates with dancers and filmmakers in various capacities and produces genre-bending electronic music for his studio projects. \n  \nYu Qin: Ocean Reflection\nOcean Reflection is a 22-minute sonic journey exploring the ocean as a vast system of hidden energy operating on temporal scales far beyond human perception. 
Drawing on field recordings from the North Sea—including both oceanic soundscapes and offshore drilling infrastructure—the electronics function as a structural\, time-based layer that is organically coordinated with the music throughout the piece. Sustained harmonic fields and slow-form processes in the music evoke the ocean’s apparent calm and depth\, while industrial sounds gradually surface\, revealing human intervention not as an immediate rupture but as a long-term disturbance embedded within marine systems. Ocean Reflection invites listeners to reflect on scale\, time\, and the asymmetry between human activity and ecological response. \nAbout the artist\nYu (Hayley) Qin is a composer and improviser\, currently a PhD candidate at UC Irvine\, whose work weaves music\, dance\, and digital technologies into immersive\, interdisciplinary experiences. Drawing inspiration from marine environments\, human psychology\, and neuroscience\, her creations explore hidden energies\, human-nature interplay\, and collective imagination. Her works have been performed across North America and East Asia. \n  \nWei Yang: rain contained\, rain contains…\n“rain contained\, rain contains…” is a fixed-media piece exploring the close relationship between everyday objects and nature. Made from sounds of bottles\, tubes\, and rain\, it invites listeners to discover sonic connections and containment between the profound and the ordinary. Bottles and resonant tubes act as vessels of containment\, symbolically setting boundaries. Yet\, by performing them\, their distinctive sounds—clinks\, taps\, and resonances—uncover hidden textures and melodic fragments that break through the physical. Rain\, a fundamental element\, unifies the soundscape. While shaping the acoustic environment\, the rain also consists of drops\, each of which can be contained and has its own unique sonic profile. 
Through careful transformation and juxtaposition\, the piece highlights the shared granular qualities that allow bottle and tube sounds to seamlessly transform into rain\, and vice versa\, blending the domestic and the natural. This sonic alchemy explores how these elements “find and contain each other\,” fostering an “ecological listening” that reveals the deep interconnectedness of our everyday objects and the natural world. Various signal-processing techniques were employed to achieve a wide range of sonic materials and spatial transformations\, including granular synthesis\, filtering\, spherical-angular decomposition/recomposition\, a custom spherical-cap order upmixer\, a custom reverb with feedback delay networks\, and more. \nAbout the artist\nWei Yang is a composer/sound artist from China. He works with different media\, through which he often contemplates the body’s role in sound production\, sound in space\, as well as the integration of various data from the performance environment (reverberation\, light\, etc.). Wei composes both instrumental and electronic music\, and often incorporates various sensors and physical computing to build performative systems that allow dynamic interaction among different actors within the system. His works have been performed internationally at various events\, including the Darmstadt Summer Festival\, Salzburg Music Festival\, BEAST Festival\, NUNC!\, ICMC\, ISAC Sonosfera\, Tonmeistertagung\, ORF Musikprotokoll\, the San Francisco Tape Music Festival\, SEAMUS\, Espacios Sonores\, Festival Atemporánea\, Nucleo Música Nova SiMN\, Sound Image Festival\, and Ars Electronica. \n  \nGabriel Araújo: SAW\nA hyperrealistic space of bees\, motors\, and sawtooth waves. The piece focuses on the commonalities of these sounds and explores the constant transformation of materials\, between the natural and the artificial\, the real and the impossible\, the biological\, the mechanical\, and the fantastical. 
\nAbout the artist\nGabriel Araújo is a composer\, multimedia artist\, and educator whose work bridges ecological\, technological\, and cultural models through sound\, video\, and transmedia pieces. He is Assistant Professor of Music Technology at Texas A&M University – Central Texas. Gabriel studied composition with Paulo Guicheney at the Universidade Federal de Goiás (Brazil)\, and obtained his master’s degree from the CNSMD de Lyon (France)\, where he studied with Michele Tadini and attended the classes of Martin Matalon and François Roux. He completed his DMA at the University of Texas at Austin under Januibe Tejera\, where he served as Assistant Instructor for the Experimental and Electronic Music Studio. He received the Funarte composition prize from the Brazilian Ministry of Culture at the Biennial of Contemporary Brazilian Music\, the Rainwater Innovation Grant\, and was a finalist at the Prix CIME/ICEM and MA/IN Awards. He has collaborated with performers such as PHACE Ensemble\, Vertixe Sonora\, HANATSUmiroir\, Line Upon Line Percussion\, the Orchestra of the National Opera of Lyon\, Soundmap Ensemble\, Atelier xx-21\, Olivier Stankiewicz\, and Alice Belugou\, and has been featured at festivals such as MA/IN Festival (IT)\, Ars Electronica Forum Wallis (SWI)\, MUSLAB (ECU)\, SEAMUS (US)\, Lontano (BR)\, Plurisons (BR)\, CNMAT (US)\, Empreintes (FR)\, Electric LaTex (US)\, and Festival No Conventional (Colombia). \n  \nJames Harley: Wild Fruits: Epilogue\nWild Fruits 5: Epilogue is an electroacoustic soundscape work from the Wild Fruits cycle\, begun in 2003. The piece includes spoken text taken from Wild Fruits by Henry David Thoreau\, recorded by Jim Bartruff\, and Pilgrim at Tinker Creek by Annie Dillard\, recorded by Anne-Marie Donovan. The sounds are all based on field recordings from various locations\, processed in the studio. 
Originally conceived as an 8-channel surround-sound work\, Epilogue uses material from the other works in the cycle\, treated in new ways. \nAbout the artist\nJames Harley is a Canadian composer teaching at the University of Guelph. He obtained his doctorate at McGill University in 1994\, after spending six years (1982-88) composing and studying in Europe (London\, Paris\, Warsaw). His music has been awarded prizes in Canada\, USA\, UK\, France\, Austria\, Poland\, and Japan\, and has been performed and broadcast around the world. Recordings include: Neue Bilder (Centrediscs\, 2010)\, ~spin~: Like a ragged flock (ADAPPS DVD\, 2015)\, Experimental Music for Ensembles\, Drums\, and Electronics\, with Philippe Hode-Keyser (ADAPP CD\, 2022)\, and Lithophonica\, with Gayle Young (Farpoint\, 2025). As a researcher\, Harley has written extensively on contemporary music. His books include Xenakis: His Life in Music (Routledge\, 2004) and Iannis Xenakis: Kraanerg (Ashgate\, 2015). As a performer\, Harley has a background in jazz\, and has most recently worked as an interactive computer musician. \n  \nRaul Masu and Francesco Ardan Dal Ri: Inside the metal plate\nThis 5.1 acousmatic work is entirely constructed from the resonant behaviour of a single metal plate\, activated through a set of physical and acoustic excitations. All sound material is generated via controlled feedback processes\, bowing\, mallets\, and additional excitation techniques that probe the material responses and instabilities of the plate. Feedback is not employed as an effect\, but as a generative mechanism\, where the plate\, transducers\, amplification\, and acoustic space form a dynamic system capable of producing emergent sonic behaviours. The resulting sounds do not represent the plate\, but rather make audible its internal activity\, thresholds of stability\, and variations in resonant response. 
The 5.1 spatial distribution places these sounds around the audience with the intention of situating the listener inside the resonant body itself. Through an immersive aural experience\, the work proposes a form of embodied self-perception\, in which listening is no longer external to the sound object but coincides with it: the audience does not listen to the plate\, but listens as the plate\, temporarily adopting its vibrational perspective. \nAbout the artists\nRaul Masu (1992) is Professor of Electroacoustic and Multimedia Composition at the Conservatories of Trento (Italy). He holds a PhD in Digital Media from Universidade Nova de Lisboa and is adjunct faculty in Computational Media and Arts at the Hong Kong University of Science and Technology Guangzhou (China). His compositional practice includes works presented at festivals\, conferences\, concerts\, and performances in 10 countries. He has published approximately 70 papers in international venues in the fields of electronic music (NIME\, TISMIR\, Organised Sound\, Audio Mostly\, Sound and Music Computing) and interactive technologies (CHI\, DIS\, TEI). \nFrancesco Ardan dal Ri began his musical career as an electric guitarist and thereminist and continues to collaborate with artists on both regional and international scenes\, working in live performance contexts as well as in recording studios. Over time\, his interests have progressively shifted toward contemporary and experimental music\, with a particular focus on the creative possibilities offered by software-based systems and electronic instruments\, both commercial and self-designed. He earned degrees in Electronic Music from the Conservatory of Trento with top marks. This trajectory led him to pursue a PhD at the Department of Information Engineering and Computer Science (DISI)\, University of Trento\, under the supervision of Prof. Nicola Conci\, focusing on artificial intelligence and deep learning applied to audio signals. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/listening-room-1-2/
LOCATION:Hamburg University of Technology\, Building A (A 0.18)\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:12-05,Listening Room,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260512T133000
DTEND;TZID=Europe/Amsterdam:20260512T150000
DTSTAMP:20260505T121343
CREATED:20260421T165721Z
LAST-MODIFIED:20260504T084314Z
UID:10000176-1778592600-1778598000@icmc2026.ligeti-zentrum.de
SUMMARY:Lunch Concert 2A
DESCRIPTION:The second lunch concert of ICMC HAMBURG 2026 takes listeners on a journey through different cultures and technological approaches. The focus is on transformation: how are traditional instruments\, natural sounds\, or even everyday noises reinterpreted through the lens of computer technology and artificial intelligence?\nThe international composers are once again partly supported by Hamburg’s Ensemble 404\, which bridges the gap between academic composition and vibrant performance. \nThis Lunch Concert is open to the public. Those without a conference pass can purchase a ticket here. \n  \nProgram Overview\nSprinkle\nHuixin Xue \nLate Shift \nBenjamin Broening \nFall and Rise\nWan Heo \nSqueakeasy \nJonathan Wilson \nI dreamed of Naïma \nChristopher Dobrian and Aiyun Huang \nFree-Wheelerish (a movement from the suite Things Ain’t What They Used To Be)\nMark Whitlam \nAIKYAM\nClaudia Robles Angel \npORCELAIN\nDave O Mahony \n  \nAbout the pieces & composers\nHuixin Xue: Sprinkle\nThis piece seeks to explore new timbres and performance techniques for the pipa\, aiming to integrate the language of electronic music with the instrument’s sound in order to present a novel acoustic effect.\nThe pipa uses an unusual tuning of the strings: A\, #D\, E\, #G. \nAbout the artists\nComposer: Huixin Xue\nPipa Performer: Yinghan Liu\nComputer Music Designer: Shihong Ren \nHuixin Xue is a Chinese composer\, music producer and Music AI researcher. She is a Ph.D. candidate in Music AI at the Shanghai Conservatory of Music and an exchange student at the Hamburg University of Music and Theatre. She received both her bachelor’s and master’s degrees from the Music Engineering Department of the Shanghai Conservatory of Music.\nHer pieces have won numerous awards\, including the Honorable Mention of the 2024 Sound Chain International Electronic Music Composition Competition (the only Chinese winner among the 6 winners worldwide). Her work was presented at the 2025 ICMC. 
Her pieces have been performed at major festivals. She has also participated in over twenty commercial music creation projects.\nDuring her doctoral studies\, she participated in the development of the AI Music Therapy Pod at the Shanghai Conservatory of Music\, co-developed SongEval\, the first aesthetic evaluation dataset for AI-generated songs\, and contributed to organizing the Automatic Song Aesthetic Evaluation Challenge at ICASSP 2026. \n  \nBenjamin Broening: Late Shift\nLate Shift explores the liminal light of dusk as shadows lengthen\, the bright colors of day darken\, and the familiar world is gradually transformed. A comparable transformation takes place in Late Shift: the flute and electronics slowly descend to lower registers over the course of the piece as flute sounds are gradually replaced by whispering percussion sounds in the electronics. \nAbout the artist\nBenjamin Broening’s music has been called “adventurous\, thoughtful\, eloquent\, and disarmingly direct.” His orchestral\, choral\, chamber and electroacoustic music has been performed in over twenty-five countries and across the United States by many soloists and ensembles. \nBroening is a recipient of Guggenheim\, Howard\, and Fulbright Fellowships\, and has also received recognition and awards from the American Composers Forum\, Virginia Commission for the Arts\, ACS/Andrew Mellon Foundation\, the Jerome Foundation and the Presser Music Foundation among others. \nTrembling Air\, a Bridge Records release of his chamber music recorded by Eighth Blackbird\, has been praised as “haunting” and “enchanting” (Cleveland Plain Dealer)\, “magical” (Fanfare)\, “other-worldly” (Gramophone)\, and “coruscatingly gorgeous” (CD Hotlist). Critics have called Recombinant Nocturnes\, a disk of music for piano recorded by Duo Runedako\, “breathtaking” (World Music Report) and “deep\, troubling” (François Couture). 
Nineteen other pieces have been released by Ensemble U: in Estonia and on the Centaur\, Everglade\, Equilibrium\, MIT Press\, Oberlin Music\, Open G\, Métier\, New Focus\, Ravello and SEAMUS record labels. \nBroening is founder and artistic director of Third Practice\, an annual festival of electroacoustic music at the University of Richmond\, where he is Professor of Music. He holds degrees from the University of Michigan\, Cambridge University\, Yale University\, and Wesleyan University. \n  \nWan Heo: Fall and Rise\nFall and Rise is the second episode of my previous solo cello piece\, When It Falls. Drawing from the same inspiration\, which was the fallen leaves on the ground at Jeolmul Forest on Jeju Island\, Korea\, with a variety of colors and shapes\, this version for amplified violin and electronics focuses more on the timbre of the instrument\, particularly transitions from normal playing to harmonics\, different fingerings\, and how they create different textures and sonorities. \nA recording of When It Falls and field recordings from Jeolmul Forest were processed using modular synthesis\, giving a certain atmosphere to the piece. Pitch and rhythmic materials for the violin were extracted from spectral analysis of the recordings\, giving sonic coherence to the three different sound sources. \nAbout the artist\nWan Heo is a Korean-born composer based in Chicago. Her works have been performed internationally in South Korea\, Germany\, Italy\, Singapore\, Spain\, and throughout the United States. Her percussion solo Unveiled Future is published by Alfonce Production. \nWan’s music has been commissioned and featured by Darmstädter Ferienkurse\, SEAMUS\, Yarn/Wire\, VIPA\, among others. She received an Honorable Mention for the Christine Clark/Theodore Front Prize in the IAWM New Music Search. \nHer doctoral dissertation explores the vulnerability of South Korea’s sonic environments through field recordings made at Buddhist mountain monasteries. 
Works from this project have been presented at NYCEMF\, the Composition in Asia Conference\, and NSEME. \nWan is a Visiting Assistant Professor at Wake Forest University. She holds a B.M. in Composition from Ewha Womans University and an M.M. in Composition from Florida State University. She is currently ABD in the Ph.D. program in Composition and Music Technology at Northwestern University\, where she works under the guidance of Alex Mincek\, Stephan Moore\, and Jay Alan Yim. \n  \nJonathan Wilson: Squeakeasy \nSqueakeasy was written for Maja Cerar during the COVID-19 pandemic\, from late 2020 to the early summer of 2021. The composition was conceived from the composer’s accidental discovery of a metallic chair that was loosely bolted to a metal patio set and could pivot in such a way as to create an ear-piercing\, yet irresistible screech. The timbral qualities of that chair intrigued the composer\, who set out to determine the various sonic transformations that could be realized after recording that initial sound. This quickly led to pairing the electronics with the violin because of the timbral similarities observed between them. Additional recordings of squeaky wooden surfaces\, such as a wooden chair and floorboards\, were included to enhance the timbral relationships between violin and electronics. The decision to explore these timbral relationships was partly inspired by Denis Smalley’s “Base Metals”: metal-based and wood-based sound families in the electronics are related to different violin timbres and extended techniques such as col legno\, glissando\, tremolo\, pizzicato\, ricochet\, and natural and artificial harmonics. The structure of this composition alternates between sections for performer and electronics and cadenzas for amplified violin; overall\, it could loosely be described as a concertino for amplified violin based on the virtuosic elements of the violinist’s performance. 
The sound of the violin is amplified throughout the work by the electronic performer’s patch\, programmed in Max/MSP. The performer of the electronics triggers each instance of fixed media from the laptop while the violinist follows both the score and a counter/timer displayed on a separate computer monitor. \nAbout the artist\nDr. Jonathan Wilson’s works have been performed at the Ann Arbor Film Festival\, European Media Art Festival\, ICMC\, SICMF\, SEAMUS\, NYCEMF\, MUSELAB\, NSEME\, Napoleon Electronic Music Festival\, Iowa Music Teachers Association State Conference\, and Midwest Composers Symposium. He is the winner of the 2014 Iowa Music Teachers Association Composition Competition. Jonathan has studied composition with Lawrence Fritts\, Josh Levine\, David Gompper\, James Romig\, James Caldwell\, Paul Paccione\, and John Cooper. In addition\, he has studied conducting under Richard Hughey and Mike Fansler. Jonathan is a member of the Society of Composers\, Inc.\, SEAMUS\, ICMA\, and the Iowa Composers Forum. \n  \nChristopher Dobrian and Aiyun Huang: I dreamed of Naïma\nI Dreamed of Naïma for vibraphone and interactive computer system references a composition by John Coltrane in fragmented and distorted fashion\, as if recollected in a dream. The computer program\, written in Max for Live\, senses the sound of the vibraphone and algorithmically adds its own sounds to extend and elaborate the instrumental sound. The 7-minute piece mixes composition and improvisation\, with the computer performing interactively and responsively (with no attending technician needed)\, such that each performance is unique. \nAbout the artists\nChristopher Dobrian is Professor Emeritus of Integrated Composition\, Improvisation\, and Technology in the Department of Music\, with a joint appointment in the Department of Informatics\, at the University of California\, Irvine. 
He is a composer of instrumental and electronic music\, and has taught courses in composition\, theory\, and computer music. He conducts research on the development of artificially intelligent interactive computer systems for the cognition\, composition\, and improvisation of music. He has published technical and theoretical articles on interactive computer music\, and is the author of the original reference documentation and tutorials for the Max\, MSP\, and Jitter programming environments by Cycling ’74. He holds a Ph.D. in Composition from the University of California\, San Diego\, where he studied composition with Joji Yuasa\, Robert Erickson\, Morton Feldman\, and Bernard Rands\, computer music with F. Richard Moore and George Lewis\, and classical guitar with the Spanish masters Celin and Pepe Romero. Dobrian has been an invited Fulbright specialist at the Korean National University of Arts\, the University of Paris-Sorbonne\, McGill University in Montreal\, and the Accademia Chigiana in Siena\, and has been a guest professor at Yonsei University\, National Taiwan Normal University\, University of Paris 8\, and the National University of Quilmes in Argentina. \nAcclaimed percussionist Aiyun Huang has performed with leading orchestras and at major international festivals worldwide\, premiering works by contemporary composers. Her research explores the performing body across music\, dance\, theatre\, and media technology\, and she directs the TaPIR Lab at the University of Toronto\, where she is Professor of Music. She founded the biennial Transplanted Roots percussion symposium and has served as a juror and keynote speaker at prestigious events globally. Born in Taiwan\, Aiyun was named a Fellow of the Royal Society of Canada in 2024. 
\n  \nMark Whitlam: Free-Wheelerish (a movement from the suite Things Ain’t What They Used To Be)\nThe movement from a longer suite—titled in reference to Duke Ellington’s big band jazz classic\, released over sixty years ago—offers a gentle provocation\, contrasting traditional approaches to jazz improvisation with emerging paradigms in human–AI interaction. Combining real-time machine learning and deep learning tools\, the piece stages a live collaboration between improvising human musicians and generative AI agents. Central to the work is a subversion of the established technique of the contrafact\, whereby new melodies are composed over pre-existing chord progressions. Here\, the process is inverted: AI agents are tasked with reharmonising composed melodic lines\, thereby disrupting the expected harmonic framework. This indeterminacy both encourages and challenges the performers to find new musical responses. \nLeveraging technologies including Somax2\, RAVE\, Mosaïque\, and Google MediaPipe within MaxMSP\, the system enables algorithmic agents to act as both collaborative and disruptive partners in the performance loop. These agents generate unexpected musical gestures and offer novel\, interactive visual and audible modalities that stimulate and provoke the performers. The result is an evolving musical language that emerges from the entangled dynamics of this extended network of human and machine improvisers. \nAbout the artist\nMark Whitlam has been a professional musician for 25 years\, having toured internationally with UK jazz luminaries including Andy Sheppard\, Iain Ballamy and Jason Rebello (Sting)\, and Mercury Prize nominee Eliza Carthy. Recent collaborations have included work with Adrian Utley (Portishead) and Will Gregory (Goldfrapp). His compositions and performances have received airplay on BBC Radio 2\, 3\, 6 and Jazz FM\, with TV credits including HBO’s miniseries Industry. 
Mark teaches in the UK at Bath Spa University and BIMM University\, where he is a senior lecturer. He is mid-stage in his PhD in Composition at the University of Bristol\, UK\, exploring the affordances offered by generative AI agents in the liminal space between composition and improvisation. He also has a keen interest in the links between actor network theory and 4E cognition in the space of human-AI mediated music-making. \n\nClaudia Robles Angel: AIKYAM \nAIKYAM is a real-time surround sound work for 1 performer and 5 to 6 participants (audience) inspired by Kuramoto’s mathematical model of spontaneous order and synchronisation in nature\, e.g. fireflies\, heart rates or humans clapping their hands together. The term AIKYAM comes from the Sanskrit word ऐक्यम\, meaning unity or harmony. \nAbout the artist\nBorn in Bogotá (Colombia) and living in Cologne (Germany)\, Claudia Robles Angel is a composer\, sound and new media artist whose work covers different aspects of visual and sound art\, extending from acousmatic and audio-visual compositions to interactive performances/installations using biomedical signals and AI (Artificial Intelligence).\nShe has been Artist-in-Residence at several outstanding institutions around the globe. In 2022 she received an honorary mention from the Giga-Hertz Award at the ZKM Center.\nHer work has been performed and exhibited worldwide\, e.g. at ZKM\, ISEA\, KIBLA Centre Maribor\, CAMP Festival – 55th Venice Biennale Salon Suisse\, ICMC\, New York City Electroacoustic Music Festival\, NIME\, STEIM\, Harvestworks Digital Arts Center NYC\, Heroines of Sound Berlin\, Audio Art Festival Cracow\, MADATAC Madrid\, Athens Digital Art Festival ADAF\, CMMAS Morelia\, BEAST FEaST Birmingham\, ICST ZHdK Zurich\, RE:SOUND Aalborg\, Electric Spring Festival Huddersfield\, AI Biennale Essen\, the Centre for International Light Art Unna and\, more recently\, the Acht Brücken Festival Cologne and the Philharmonie Essen. 
\nwww.claudearobles.de \n  \nDave O Mahony: pORCELAIN\nAn audio and video representation of the feeling when one’s bones rub together. \nAbout the artist\nDave O Mahony is a PhD graduate of the University of Limerick\, Ireland. His compositions have been performed at the Sines & Squares Festival (Manchester\, UK) in both 2014 and 2016\, the Hilltown New Music Festival (Ireland)\, the Daghda Gravity & Grace Festival (Limerick\, Ireland)\, the Society for Electro-Acoustic Music in the United States (SEAMUS) conferences in 2018 and 2019 (Eugene\, OR and Boston\, MA)\, the 2018 New York City Electroacoustic Music Festival\, the joint International Computer Music Conference (ICMC) / New York City Electroacoustic Music Festival event in 2019 (both in New York\, NY)\, the 2018 and 2019 Electroacoustic Barn Dance (Jacksonville\, FL)\, the 2020\, 2021 and 2022 Earth Day Art Model online festivals\, the 2021 New Music Gathering online conference\, the Radiophrenia online event (2022) and the 2020/21 ICMA conference. He is a member of the Irish Sound Science and Technology Association (ISSTA)\, SEAMUS and the ICMA\, and has an interest in manipulating modular synthesizers with brainwaves. He holds a Doctorate in Composition in Music Technology\, a BA in English and New Media (Hons) and an MA in Music Technology (Hons) from the University of Limerick\, Ireland. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/lunch-concert-2a/
LOCATION:Hamburg University of Technology\, Building I\, Audimax 2\, Denickestraße 22\, Hamburg\, 21073\, Germany
CATEGORIES:12-05,Concert,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260512T190000
DTEND;TZID=Europe/Amsterdam:20260512T210000
DTSTAMP:20260505T121343
CREATED:20260421T170217Z
LAST-MODIFIED:20260504T073629Z
UID:10000219-1778612400-1778619600@icmc2026.ligeti-zentrum.de
SUMMARY:Evening Concert 2B
DESCRIPTION:This Evening Concert promises a special experience for both eyes and ears. At the center of this session is the saxophone\, performed by one of the most distinguished artists of our time: Hamburg-based saxophonist Asya Fateyeva. Together with her talented students\, she presents five works specially conceived for her and her instruments.\nThis instrumental focus is complemented by two striking video works\, presented on the specially installed video wall in the FEH\, which dissolve the boundaries between sonic and visual space. \nThis Evening Concert is open to the public. Those without a conference pass can purchase a ticket here. \n  \nProgram Overview\nAdaptive_Study#06 – Symbolic Structures Enhanced\nRiccardo Dapelo \nExpandiere \nChing Lam Chung \nSilent “human bird language” \nYongbing Dai and Yiping Bai \nPoetic Encounter with the digital shadow\nNicolas Kummert \nJamshid Jam \nJean-Francois Charles and Ramin Roshandel \npan:individual\, “come closer\, come home”\nFranz Danksagmüller \n  \nAbout the pieces & the artists\nRiccardo Dapelo: Adaptive_Study#06 – Symbolic Structures Enhanced\nAdaptive_Study#06 – Symbolic Structures Enhanced (2025) is the sixth work in a series of compositional studies initiated in 2015\, exploring musical forms that are not temporally fixed. The piece investigates adaptive processes based on symbolic structures\, short-term memory\, and performer–system interaction. The live electronics analyse performer input and generate responses through the transformation and recombination of symbolic data. Rather than functioning as an autonomous generator\, the system acts as a responsive partner\, shaping musical form through evolving interactions over time. Control of event density at micro and macro-structural levels plays a central role\, preventing entropic saturation. 
The work is conceived as an open study\, in which form emerges through the negotiation between performer and system\, maintaining stylistic coherence while allowing variability across performances. \nAbout the artist\nRiccardo Dapelo (b. 1962) studied composition with G. Manzoni and A. Vidolin. His work focuses on acoustic and electronic composition\, live electronics\, and interactive systems\, and has been performed internationally. He has published articles and lectured on voice analysis\, spatialisation\, philosophy of art\, and musical time. He collaborates with visual artists on interactive works and sound installations for museum and exhibition spaces. He teaches Composition at the Conservatory of Piacenza. \n  \nChing Lam Chung: Expandiere\nThis piece explores the different sound qualities of the baritone saxophone—from pitched materials to mechanical sounds—and its interaction with electronics\, thereby investigating the sonic hybridity between the instrument and electronic media. Both tape and live electronics are used: the fixed electronics allow sound objects to be precisely organized within the spatial environment\, while the live electronics serve as a bridge between the instrument and the fixed electronics\, enhancing their connections. \nThrough this approach\, the piece creates a unique sonic environment in which different sound objects interact and evolve with one another\, offering the audience a varied auditory experience in which the instrument and electronics fully merge. \nAbout the artist\nCHUNG Ching Lam\, Mavis (b. 2003) was born and raised in Hong Kong. Mavis is currently pursuing a Master’s degree in music composition at the Frankfurt University of Music and Performing Arts\, under the guidance of Orm Finnendahl and Ulrich Alexander Kreppein.\nMavis’s music thoughtfully explores timbre\, transforming ordinary sounds into unexpected auditory experiences. 
Her compositions discover the beauty of melancholy as she creates a unique sonic landscape that reflects her philosophy and experiences.\nShe received third prize in the 2nd NC Wong Young Composers Award and was chosen for the electroacoustic composition fellowship at the Delian Academy 2024. She also participated in the URTIcanti contemporary music festival and the Internationales Digitalkunst Festival. Furthermore\, she attended the South China Contemporary Creative Music Institute and has been selected for the Mixed Media category at the iISUONO Contemporary Music Week 2025. Her compositions have been performed in Greece\, Germany\, and Italy.\nShe completed her Bachelor’s degree in music composition at Hong Kong Baptist University\, under the guidance of Eugene Birman\, Camilo Mendez\, Stylianos Dimou and Ka Shu TAM. \n  \nYongbing Dai and Yiping Bai: Silent “human bird language” \nThis work\, composed for saxophone and electronic music\, uses the saxophone’s unique multiphonic harmonics\, distinctive timbre\, and various techniques such as tonguing to evoke an effect of ancient human “bird language\,” akin to “abstract writing” incomprehensible to modern humans. It uses this to question the constant self-destruction that occurs on our shared planet. We can consider this: we have entered the age of artificial intelligence\, with highly advanced science and technology. Yet\, even in this civilized context\, for their own benefit\, humans can disregard and kill their fellow human beings. This is utterly absurd and tragic. How is this different from the barbaric slaughter of ancient times? What is the significance of the development of human technology and civilization? \nAbout the artists\nDai Yongbing holds a doctorate in Electronic Music Composition from the Shanghai Conservatory of Music. He currently teaches electronic music in the Art and Technology section of the Composition Department at the Wuhan Conservatory of Music. 
He was sponsored to study composition and electronic music composition at the Royal Danish Academy of Music\, where he received a master’s degree in composition. In 2023\, he studied sound art at the University of Music and Drama in Munich\, Germany. In 2024\, he was sponsored by the European Union’s Erasmus program to study electronic music composition with Professor Karlheinz Essl at the University of Music and Drama in Vienna\, Austria. The electronic music work “Two Trembling Hearts” won first prize at the Hangzhou International Electronic Music Festival. In June 2022\, he was selected for the academic class on computer music design and performance at the IRCAM ManiFeste festival at the Centre Pompidou in Paris\, France. His work “Two Worlds of Monks” won first prize in the UPI-Sketch professional group at the Centre Iannis Xenakis (CIX) competition in France in 2022. His wind band work “Non-Taoism” was premiered by the Shenzhen Symphony Orchestra. His works have been performed all over the world\, including in Munich and Düsseldorf in Germany\, Amsterdam in the Netherlands\, Vienna in Austria\, Lisbon in Portugal\, Copenhagen in Denmark\, New York in the United States\, Tokyo in Japan\, and Seoul in South Korea. \n  \nNicolas Kummert: Poetic Encounter with the digital shadow\nThis proposal invites saxophonist Asya Fateyeva into an improvisatory performance that explores the encounter between acoustic virtuosity and real-time electronic transformation. The project centres on a live-electronics setup I have developed within artistic research contexts over several years—a system deliberately designed to be simple\, flexible\, affordable\, and fast to deploy. It requires only a close microphone (ideally the Vigamusictools Intramic)\, a small audio interface\, a laptop\, and three compact controllers. 
Its purpose is not to impose effects but to extend the sonic and expressive possibilities of the acoustic instrument while remaining transparent and highly responsive.\nThe concept is straightforward: the saxophone produces the primary musical material\, and I modulate that sound live through controlled timbral\, spectral\, and temporal transformations. The electronics behave as a reactive partner—what I call the performer’s digital shadow: a sonic counterpart that follows\, shapes\, questions\, or briefly detaches from the acoustic gesture. The identity of the acoustic sound remains fully audible\, while the electronic layer opens new directions within the improvisation.\nThe artistic foundations of this work draw on several research frameworks:\n• Improvisation as assemblage (after Deleuze): the performance is approached as a self-emergent system in which performers\, instruments\, digital processes\, acoustics\, and feedback relations act together to shape the form in real time.\n• Paulo de Assis’s Logic of Experimentation: the focus lies on what the instrument–electronics constellation can do when activated through exploratory performance\, rather than on pre-defined material.\n• Georgina Born’s theory of musical mediation: the setup foregrounds the interplay between acoustic sound\, digital transformation\, performer interaction\, and audience perception.\n• Laurent Cugny’s audiotactile perspective: the electronic layer functions as an extension of touch\, gesture\, and micro-timing rather than an external effect. 
The project treats improvisation as a co-embodied process that produces a hybrid sonic entity.\nMusically\, the performance is structured as a series of improvisatory episodes that examine different modes of relationship between acoustic and transformed sound:\n– subtle extensions of timbre and resonance;\n– interactive textures and rhythmical counterpoints between acoustic phrasing and electronic responses;\n– sections where Asya’s sound is heavily transformed in real time\, while the unprocessed acoustic sound is replayed in the pauses of her playing\, blurring the audience’s visual-aural connection and questioning the musician’s immediate relationship to her own instrument.\nBecause the system is lightweight and adaptable\, the collaboration requires limited rehearsal and can be shaped around Asya’s musical language and preferred improvisational strategies. The format proposes an accessible but conceptually rigorous exploration of improvisation\, mediation\, and electronic augmentation. It offers the conference audience a concrete example of how simple\, flexible computer-music tools can generate rich musical dialogues and expand the expressive ecology of the acoustic instrument\, shedding new light on various aspects of improvisation.\nI propose to conclude the performance with a short discussion in which Asya can reflect on how the electronic shadow influenced musical decision-making\, interaction\, and perception—offering insight into the core research questions driving this work. \nAbout the artist\nNicolas Kummert (1979) is a Belgian saxophonist\, electronic artist\, composer and researcher known for his melodic sense\, openness and exploratory approach. He has recorded over 70 albums and performed worldwide with artists such as Lionel Loueke\, Jeff Ballard\, DRIFTER and many others. 
Active in hybrid acoustic–electronic projects\, film and dance music\, and interdisciplinary research\, he develops innovative modulation processes and collaborates across jazz\, poetry\, contemporary dance and African music. \n  \nJoe Wright: Cor Ddiglwed (Unhearing Chorus) \nCor Ddiglwed (unhearing chorus) takes inspiration from Daphne Oram’s ‘Bird of Parallax’\, and was developed with the one-of-a-kind Mini Oramics\, built by Tom Richards based on Oram’s designs for a revised version of her pioneering graphical synthesis machine. \nIn the piece\, the author combines phrases/samples recorded with Oramics\, field recordings taken locally to his home in South Wales\, and live-processed saxophone\, the last using the instrument as input to a phase vocoder designed to mimic the writing / replaying / overwriting process that Mini Oramics facilitates. \nThe piece was written in the context of a highly divisive by-election in which local communities in South Wales saw a sharp rise in populist sentiment and a rise in polarised rhetoric on and offline. While the technical inception of the piece draws heavily on Oram and the legacy of her synthesiser design\, the field recording process at this time highlighted the importance of shapes and forms in captured human and animal voices – seen through an Oramics lens. The piece explores the idea of diverse clashing narrative threads in a fight for attention – as a metaphorical mirror to the author’s recordings of local dawn choruses. Both in the piece and the context of its composition\, these voices are\, despite their differences\, interconnected by common challenges and under-explored common ground\, yet are broadly unheard by others. \nThe piece forms part of a broader body of recent work that explores Oramics in the context of Oram and Iannis Xenakis’ work\, and the ways that their thinking and legacy can apply to contemporary musical composition\, instrument design\, and accessible musical tools and resources. 
\nAbout the artist\nJoe Wright is a musician and maker based in Cardiff\, with an interest in collaborative music making\, field recording\, accessible music technology/practice\, and creative code. As a saxophonist\, Joe is currently playing across the UK and Europe with jazz/contemporary music groups led by Rob Luft and Corrie Dick\, and in FORJ. He also has a long-standing collaboration – Onin – with experimental musician James L Malone\, exploring unstable systems and atypical interactions. Recently\, Joe has been exploring field recording with a focus on his local natural spaces in South Wales. \n  \nCecilia Suhr: Resonant Thresholds\nResonant Thresholds explores the liminal space between human expression and technologically mediated sound. Structured around a fixed audio score\, the work unfolds as a slowly transforming audiovisual environment in which live violin performance interacts with real-time electronic processing. Noise\, resonance\, and breath-like textures blur distinctions between acoustic intimacy and digital vastness\, allowing the materiality of sound to become porous and unstable. Through structured live comprovisation (composed improvisation)\, the performer actively shapes the unfolding sonic landscape\, while the processed audio simultaneously generates an evolving visual score that functions as a symbolic translation of sound. The work invites listeners to inhabit a threshold between perception and imagination\, where meaning emerges through the continuous negotiation between composed structure\, live performance\, and technological extension. \nAbout the artist\nCecilia Suhr is an award-winning intermedia artist\, multimedia composer\, researcher\, author\, and multi-instrumentalist (violin\, cello\, voice\, piano\, bamboo flute). Her honors include the Pauline Oliveros Award (IAWM)\, a MacArthur Foundation DML Grant\, the American Prize (Honorable Mention)\, Global Music Awards\, and Best of Competition from BEA\, among other distinctions. 
Her work has been presented at ICMC\, SEAMUS\, NYCEMF\, EMM\, SCI\, ACMC\, Mise-En\, MoXsonic\, and many more. She is a Full Professor at Miami University Regionals. \n  \nJean-François Charles and Ramin Roshandel: Jamshid Jam \nThe sonic dust of a country that has been burned to the ground several times over the centuries and yet has formed some of the most elaborate and highly sophisticated musical structures to have ever existed. According to Persian myths\, Jamshid\, who ruled for several centuries\, was responsible for inventions ranging from the manufacturing of weapons to the mining of jewels to the making of wine. He is also credited with the discovery of music. This is what brought the Jamshid Jam duet together: the search for music at the crossroads of the Radif tradition (Persian classical music) and the development of musical instruments such as the turntable and live electronics. \nAbout the artists\nRamin Roshandel grew up in a family surrounded by artists; his luthier dad\, his painter uncle\, and his setar instructor Farshid Jam had strong influences on him as a teenager. Ramin worked with the renowned Mohammad Reza Lotfi at Maktab-Khāne-ye Mirzā Abdollāh and won second place in the 7th National Youth Music Festival in Tehran\, Iran. As a composer\, Ramin Roshandel works with improvisatory structures to contrast or converge with non-tonal forms. \nJean-François Charles is Associate Professor of Composition and Digital Media at the University of Iowa. He creates at the crossroads of music and technology. As a clarinetist\, he has performed improvised music with artists ranging from Douglas Ewart to Gozo Yoshimasu. He worked with Karlheinz Stockhausen for the world premiere of Rechter Augenbrauentanz.\nRamin Roshandel & Jean-François Charles have worked on several projects together. Roshandel was the setār soloist for the premiere performances of Charles’ opera Grant Wood in Paris in 2019. 
They performed together as part of the live soundtrack composed by Charles and Nicolas Sidoroff for the 1923 Hunchback of Notre-Dame movie\, a commission by FilmScene with premiere performances in November 2023 in Iowa. In 2025\, they composed and performed a series of 13 concerts with the Red Cedar Chamber Music ensemble. \n  \nFranz Danksagmüller: pan:individual\, “come closer\, come home”\npan:individual is a participatory work for organ\, live electronics\, mobile phones\, and audience that explores how individual perception\, agency\, and identity are transformed within digitally mediated collective systems. The piece examines how contemporary technologies shape experiences of belonging\, guidance\, and participation\, blurring the boundary between individual action and collective behavior.\nThe performance unfolds as a distributed audiovisual environment in which audience members access individualized video streams on their mobile phones. Initially fragmented and asynchronous\, these streams gradually align\, forming a shared sonic and visual field. The organ part follows algorithmic and process-based instructions that guide constrained improvisation\, functioning not as an authoritative voice but as one element within a larger collective texture shaped by live electronics.\nAs the performance progresses\, audience members are invited to participate vocally\, tuning into sustained pitches suggested by the audiovisual environment. Digital avatars address participants directly\, encouraging alignment and proximity and culminating in the formation of a collective sonic organism.\nRather than presenting a narrative or explicit critique\, pan:individual creates an experiential situation in which participants are invited to reflect on how digitally mediated systems influence collective identity\, agency\, and the desire to belong. 
\nAbout the artist\nFranz Danksagmüller (*1969) is an Austrian organist\, composer\, and media artist working at the intersection of instrumental performance\, live electronics\, rule-based improvisation\, and participatory systems. He is Professor of Organ and Improvisation at the University of Music in Lübeck and currently Visiting Professor at the Royal Academy of Music\, London. His works include music theatre\, ensemble and vocal works\, and participatory projects integrating digital technologies and audience interaction\, presented internationally across Europe\, Asia\, Africa\, and North America. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/concert-2b/
LOCATION:Friedrich-Ebert-Halle\, Alter Postweg 34\, Hamburg\, 21075\, Germany
CATEGORIES:12-05,Concert,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260512T213000
DTEND;TZID=Europe/Amsterdam:20260512T233000
DTSTAMP:20260505T121343
CREATED:20260421T150351Z
LAST-MODIFIED:20260423T123138Z
UID:10000068-1778621400-1778628600@icmc2026.ligeti-zentrum.de
SUMMARY:Club Concert 2C
DESCRIPTION:Club Concert 2C invites you to an extraordinary sonic experience in the state-of-the-art Production Lab of the ligeti center. On a specialized 20.8-channel system\, international artists unfold immersive sound worlds ranging from physical gesture to complex AI analysis.\nExperience the synergy of historical depth and futuristic technology—an evening in which the audience quite literally immerses itself in sound. \n  \nProgram Overview\n\n\nDinosaur\, Glitched! \nFernando Lopez-Lezcano \nFause\, Fause\nJules Rawlinson \nLive ‘Shō’ Coding – Algorithmic Improvisation of Aitake Harmonies\nAtsushi Tadokoro \nQuiet Catastrophe Unleashed\nNicola Casetta \nAgain\nJulian Green \nPercepts (excerpt)\nDoron Klant Sadja \nCosmologies 3\nAaron Einbond \n\n\n  \nAbout the pieces & the artists\nFernando Lopez-Lezcano: Dinosaur\, Glitched!  \nThis is another ditty to add to the Dinosaur Songbook\, a music composition and performance project that started when the COVID pandemic kick-started a round of modular synthesizer building. This was a return to my roots\, as I started my discovery of electronic sound by designing and building modular synths from scratch in the late 70’s and early 80’s. \n“Carlitos” is the small Eurorack synth filled with modular goodies that will be used in this performance. It will be helped\, as has become the norm\, by the miniature Kastle\, probably the best birthday present ever\, and the smallest dinosaur I have in my herd. Carlitos houses an eclectic mix of analog\, digital and hybrid modules that has been evolving over several years and many concerts. \nThis round of noises comes courtesy of continued experiments coding in the Droid voltage processor computer language. One addition has been an implementation of Rob Hordijk’s Rungler circuit. 
This is a “low frequency” Rungler as the Droid is not fast enough to process voltages at audio rates\, and while it will never sound like the original\, it does provide a never-ending cornucopia of chaotic behaviors. As it is software\, many additional features were added\, in part to further confuse the performer who has even more knobs and controls to handle\, with the same brain power as before. Many other sources of sound make up the piece\, from complex oscillators with multiple feedback paths to fingers scratching a built-in microphone\, to an emulation of the Radio Music module with additional sampled voices. Various granular synthesis systems play a constant role in the sound universe of the piece. \nAs always all sounds are piped through a Linux computer running SooperLoopy\, a SuperCollider program written by the composer that spatializes sounds dynamically in realtime using HOA (High Order Ambisonics)\, and includes asynchronous loopers with a granular synthesis core that can sample\, replay and process more screaming dinosaur layers than you can count. \nAbout the artist\nFernando Lopez-Lezcano was given a choice of instruments when he was a kid and liked the piano best. His dad was an engineer and philosopher and his mother loved biology\, music and the arts. He studied both music and engineering\, and in his creative artistic work he tries to keep art and science chaotically balanced. He has been working at CCRMA since 1993 and throws computers\, software algorithms\, engineering and sound into a blender\, serving the result over many speakers. He can hack Linux for a living\, and sometimes he likes to pretend he can still play the piano. \nHe built El Dinosaurio (an analog modular synth) from scratch more than 40 years ago\, and it still sings its modular songs. He also loves to distill music from pure software and uses computer languages as scoring tools to carve music from text. 
He returned to realtime performances with an ever-growing modular synthesizer herd\, including the original El Dinosaurio. He was the Edgard-Varèse Guest Professor at TU Berlin in 2008 and has been teaching the “Sound in Space” course at CCRMA for quite a while. He also likes designing and building “things”\, including Ambisonics microphones (the SpHEAR project) and 3d sound diffusion spaces (the Listening Room and Stage systems at CCRMA\, and our “portable” GRAIL concert speaker array). \nHe feels happiest when playing music and making weird noises\, even better when playing with friends\, and even better on stage. \n  \nJules Rawlinson: Fause\, Fause\nFause\, Fause (c. 7mins) is one scene from an interactive audiovisual work that brings together different strands of creative computing\, sound design and composition. The work combines elements of game audio\, computer music\, traditional Scots folk song and highly detailed virtual landscapes to create an immersive songscape where the player traces the deconstructed ghosts of a song that features heavily processed fragments of the traditional ballad Fause\, Fause sung by Scottish music specialist Lori Watson. These fragments are dispersed throughout the virtual landscape using mixed approaches of fixed and indeterminate elements to create pathways of sound\, sound pathways as desire lines (Bandt 2006)\, encouraging exploration and reflection. The result is a series of speculative sonic narratives that re-sound space and place through what Hernandez (2017) describes as “psycho-sonic cartography”. The work reconsiders electroacoustic soundscape in an interactive medium\, bringing together compositional\, cultural and environmental considerations and makes use of creative applications of game-audio technologies for non-gaming purposes. The work will be performed by the composer across a multichannel audio system to highlight the spatial character and timbral qualities of the work. 
\nAbout the artist\nJules Rawlinson (1969) is an audio-visual composer working in solo and collaborative settings\, and Programme Director for Sound Design at The University of Edinburgh. Recent outputs make innovative use of archival material and corpus-based aesthetics of transformation across interactives\, performances and fixed media works. \n  \nAtsushi Tadokoro: Live ‘Shō’ Coding – Algorithmic Improvisation of Aitake Harmonies\n“Live ‘Shō’ Coding” is an experimental performance that merges the ancient tradition of Japanese Gagaku with contemporary live coding. The title is a play on the homophone between the Japanese instrument “shō” (笙) and the English word “Show.” This pun encapsulates the work’s core intent: to reveal the internal logic of a millennium-old instrument through the transparent medium of real-time programming. \nThe shō is a mouth organ consisting of seventeen bamboo pipes. Unlike Western instruments that often prioritize melody\, the shō is primarily harmonic\, characterized by “aitake” (合竹)—six-note tone clusters that function as static blocks of timbre. Originating from the Chinese “sheng” of the Tang Dynasty\, the Japanese shō has remained structurally unchanged for over 1\,200 years. It serves as a rare instance of “frozen” historical sound\, preserved by the rigid rituals of court music. \nTechnically\, the performance is realized through TidalCycles and SuperCollider. The sound is not pre-recorded but generated via real-time synthesis. Crucially\, the system employs Pythagorean tuning rather than modern equal temperament to replicate the instrument’s pure resonance and distinct intervals. Within this digital environment\, “aitake” clusters are defined as algorithmic patterns\, enabling the performer to improvise with ancient harmonies using computational precision. \nThe musical narrative follows an evolutionary arc from the archaic to the modern. 
The piece begins with a faithful algorithmic reconstruction of traditional Gagaku aesthetics—static\, sustained\, and serene. As the code evolves\, the strict definitions of the “aitake” are deconstructed through stochastic functions\, rhythmic displacements\, and spectral shifts. Consequently\, the organic textures of bamboo dissolve into digital artifacts\, transforming sacred harmony into abstract soundscapes. \nUltimately\, “Live ‘Shō’ Coding” challenges our perception of time. It juxtaposes the cyclic\, non-linear time of Gagaku with the discrete\, clock-based time of the CPU. By subjecting ancient sounds to modern syntax\, the work fosters a dialogue where the “breath of the phoenix” is reimagined through the binary logic of the machine. \nAbout the artist\nAtsushi Tadokoro\nHe is a live coder and creative coder exploring the boundaries of sound and visual art. He serves as an associate professor at Maebashi Institute of Technology and a part-time lecturer at Tokyo University of the Arts and Keio University. \nBorn in 1972\, he creates musical works through algorithmic sound synthesis and performs live improvisations with sound and visuals using a laptop. In recent years\, he has also produced and internationally exhibited numerous audio-visual installation works. \nHis work has been selected for major international conferences\, including the International Computer Music Conference (ICMC) in 2025\, 2024\, 2015\, and 1996; the International Conference on Live Coding (ICLC) in 2025\, 2024\, 2020\, 2019\, 2016\, and 2015; and New Interfaces for Musical Expression (NIME) in 2016. \nHe teaches various courses on creative coding at the university level. His lecture materials\, publicly available on his website (https://yoppa.org/)\, serve as a valuable resource for numerous students and creators. 
\nHe is the author of several books\, including Beyond Interaction: A Practical Guide to openFrameworks for Creative Coding (BNN\, 2020)\, Performative Programming: The Art and Practice of Live Coding – Show Us Your Screens (BNN\, 2018)\, and An Introduction to Creative Coding with Processing: Creative Expression Through Code (Gijutsu-Hyohron\, 2017). \n  \nNicola Casetta: Quiet Catastrophe Unleashed\nQuiet Catastrophe Unleashed is a performance for solo live electronics based on an eight-channel dynamic feedback system. Informed by Stephen Wolfram’s notion that simple iterative rules can generate irreducible complexity\, the work investigates how minimal operations—modulated delays\, adaptive limiting\, nonlinear distortion\, and continuously evolving chaotic equations—produce sonic forms that cannot be predicted or reduced to their initial conditions. The system is activated by a single impulse and evolves through recursive transformations that amplify micro-instabilities into shifting textures and emergent structures. These processes resonate with Deleuze’s conception of becoming: sound as a field of continuous variation rather than a fixed object. The performer navigates this unstable environment in real time\, engaging with a machine whose behavior unfolds at the intersection of determinism and contingency. Quiet Catastrophe Unleashed operates on the edge of chaos\, where sonic order arises through the continual negotiation of instability. \nAbout the artist\nNicola Casetta is a computer musician\, live electronics performer\, and scholar. His work explores sound as a network of relationships—a complex\, interconnected phenomenon that unfolds in an immersive and inclusive way. Through live electronics\, he creates music that captures the essence of the here and now\, embracing spontaneity and the vitality of the moment. 
He uses sound as a medium to investigate new ways of interacting with both the environment and society\, creating spaces for reflection and transformation. His music has been performed at To listen To in Turin (IT)\, SAG in Leicester (UK)\, CNMAT (Berkeley)\, Angelica Festival Bologna\, Festival di Nuova Consonanza Roma (IT)\, Borealis in Bergen (NO)\, Festival DME in Lisbon (PT)\, Festival Zeit für Neue Musik in Rockenhausen (DE)\, Manifeste Ircam in Paris\, Ma/In in Matera (IT)\, 8th FKL Symposium (IT)\, NYCEMF\, ICMC in Athens (GR)\, XX CIM in Rome (IT)\, SoundKitchen (UK)\, Sweet Thunder Festival of Electro-Acoustic Music in San Francisco (US)\, UCSD Music – CPMC Theatre in San Diego (US) and Premio Phonologia in Milan among others. \n  \nJulian Green: Again\nAgain is a live electroacoustic performance structured as a stream of consciousness\, in which repeated physical gestures function as both material and form. The performer cycles through a limited set of recurring actions intended to “cradle” a fleeting\, beautiful moment; over time\, this repetition shifts from preservation toward compulsion\, foregrounding the tension between holding on and letting go. These gestural loops accumulate and cross thresholds that trigger new sonic layers\, including processed vocal statements\, musical textures\, and environmental sound events. Rather than presenting discrete movements\, the work unfolds through gradual intensification and release\, emphasizing how replay can simultaneously comfort and erode\, as memory morphs with each return. \nIn the latter portion of the performance\, a recorded spoken message introduces an explicit reflective frame\, calling for interpersonal awareness of desire and a move away from reliance on possessions in recognition of life’s ephemerality. Again uses repetition as a performative engine to examine attachment\, impermanence\, and the unstable fidelity of remembrance. \nProgram Notes: \npast lives Again. 
Lost\, but love lingers lackadaisically through lumbering leaps within another. Foregone are the chains that bind our sense of reason towards another hopeful realization into an unresolved calling. Gone are the worries of the mind that haunts our humanity to bind to desires towards our sense of self\, compressed within a fragment of our lifespan. Only to one day meet the people we cherished deeply\, degrading our memories\, morphing in and out of consciousness within every trickle of sorrow that sheds our being before returning to our \nAbout the artist\nJulian Green is a U.S.-based electroacoustic composer and performer focused on data-driven instruments and live electronics. He has participated in Hypercube Ensemble’s Cubelab workshop\, with works performed and recorded in the U.S. and internationally\, including Sonic Apparitions (Duino\, Italy). Notable works include Sound Waits\, Cherish the Space\, My Festering Synapses\, An Indeterminate Schism\, and We Don’t Unknow. His piece The Inconsistent Continuities was professionally recorded for Hypercube Ensemble and commissioned for the Klingler ElectroAcoustic Residency (KEAR) at Bowling Green State University. Recent projects include Breakthroughs (Wacom tablet)\, Again (GameTrak controller)\, and If We Could Forget It Gently Together: Vestige Series (custom 3D-printed gyro controller)\, realized at the University of Oregon. Green holds a BM in composition from Arkansas State University and an MM from Bowling Green State University\, and is pursuing a doctorate at the University of Oregon. Influences include Denis Smalley\, Michel Chion\, Trevor Wishart\, Hildegard Westerkamp\, Ryuichi Sakamoto\, and Elaine Lillios. \n  \nAaron Einbond: Cosmologies 3\nCosmologies 3 situates the listener inside a virtual grand piano to experience its secret inner life. The piano interior\, recorded with a spherical microphone array\, is complemented by three-dimensional (3-D) field recordings of Paris’s Place Igor Stravinsky. 
These recordings are highlighted and underlined with computer synthesis using artificial intelligence (AI) to reproduce the spatial presence of acoustic instruments\, while the microcosm of the piano’s inner space expands larger-than-life. \nCosmologies 3 is part of a modular series of works that use AI to inform sound spatialization. The situated spatial presence of musical instruments has been well studied in the fields of acoustics and music perception research\, but so far has not been the focus of research on AI and music. Cosmologies seeks to “re-embody” recorded sound using data derived from natural acoustic phenomena in an immersive sonic environment where real and virtual sources blend seamlessly. Cosmologies 3 for Ambisonic fixed media may be performed on its own or directly following Cosmologies for piano and 3-D electronics\, with the fixed media work beginning as the live performer leaves the stage. Although the human–AI interaction in the fixed work is no longer live\, it remains as a trace of the work’s creation process\, refracting the human performer’s presence behind the spatial audio recordings (see Fig. 1). \nCosmologies is among the first works to connect audio descriptor analysis and corpus-based synthesis to 3-D spatialization using Higher-Order Ambisonics (HOA) and machine learning (ML). At the same time\, it is the first project connecting the computer programs Max\, Python\, and OM# (Bresson et al. 2017) with the associated packages Spat (Carpentier 2018) and Mubu (Schnell et al. 2009). These software tools are used to draw upon natural acoustic phenomena as source material for spatial sound derived from two sources: one is a 3-D microphone array\, the EM32 Eigenmike by mh acoustics (https://mhacoustics.com/products)\, a 32-channel array used to capture 3-D piano samples as well as ambient field recordings. 
The other source is generative spatial sound synthesis produced through ML of an existing large database of radiation measurements for acoustic instruments (Shabtai et al. 2017; Weinzierl et al. 2017). This database serves as a training set for ML models to control spatially rich 3-D patterns for electronic synthesis. These two sources of spatial sound are intentionally overlapped and fused so the listener cannot easily distinguish or segregate the sources. The aesthetic goal is to create a setting for curious and detailed listening\, where one may not discern the “sleight of hand” between the superposed 3-D spaces of the sample recordings and computer synthesis. \nAbout the artist\nAaron Einbond’s work explores the intersection of instrumental music\, field recording\, sound installation\, and interactive technology. He released portrait albums Cosmologies with the Riot Ensemble\, Without Words with Ensemble Dal Niente\, and Cities with Yarn/Wire and Matilde Meireles. His awards include a Giga-Hertz Förderpreis\, a Guggenheim Fellowship\, and artistic-research residencies at IRCAM and ZKM. He teaches music composition and technology at City St George’s\, University of London. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/club-concert-2c/
LOCATION:ligeti center\, Production Lab (10th floor)\, Veritaskai 1\, Hamburg\, 21079\, Germany
CATEGORIES:12-05,Club Concert,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260513T110000
DTEND;TZID=Europe/Amsterdam:20260513T173000
DTSTAMP:20260505T121343
CREATED:20260421T182305Z
LAST-MODIFIED:20260428T114812Z
UID:10000186-1778670000-1778693400@icmc2026.ligeti-zentrum.de
SUMMARY:Listening Room 2
DESCRIPTION:Fixed Media | Program Overview\nCrown Shyness\nJeonghun Hyun \nEntomology#2\nThanos Polymeneas-Liontiris \nJetlag – Time Difference\nRay Tsai \nLazy whirls of glow\nJuan J.G. Escudero \nPumma\nEmilio Casaburi \nQuivering Silk\nYi-Hsien Chen \nSonic Echoes of Ink\nPingting Xiao \nThe Luminosity of the Yugen Mist\nXiaoyu Su \nVocalise\nPak Hei Leung \n  \nAbout the pieces & artists\nJeonghun Hyun: Crown Shyness\nThis work is inspired by crown shyness\, a natural phenomenon in which the canopies (branches and leaves) of trees grow while maintaining a consistent distance without touching one another. Although individual trees exist separately\, they are perceived as a single forest when viewed from a distance. In this sense\, the piece musically explores the idea that individuals and others may feel divided due to conflict and discord\, yet from a broader perspective\, they form a harmonious whole. The diverse sounds used in the work reflect the characteristic behavior of tree canopies that avoid encroaching upon one another’s space. Each sound is therefore designed to occupy a distinct position within the stereo field\, maintaining its own spatial identity. Additionally\, just as a tree extends from thicker branches to progressively finer ones\, the sonic material evolves from dense\, large-scale textures into increasingly subdivided and delicate sounds. This spatial and morphological development metaphorically reveals both the independence of individual entities and their coexistence within a larger structural framework. Through these compositional strategies\, the work seeks to musically reflect on human relationships formed within a social context\, and to contemplate the sense of distance\, respect\, and attitudes of coexistence required within those relationships. \nAbout the artist\nJeonghun Hyun is a composer specializing in electronic music\, with a keen interest in the convergence of acoustic tradition and technological innovation. 
Having studied under Jinwoong Kim and Shinae Kang\, he explores the intersection of instrumental performance and digital sound processing. His works often employ custom programming and real-time sound manipulation techniques. Recently\, his creative research was recognized at ICMC 2025 in Boston\, where his work was presented. Committed to expanding the boundaries of contemporary music\, Hyun continues to refine his expertise in the evolving landscape of electroacoustic composition. \n  \nThanos Polymeneas-Liontiris: Entomology#2\nEntomology#2 (2025) is an acousmatic work dedicated to the secret life of insects. It follows on from Tettix-A’ (2022 – inspired by the song of cicadas) and Entomology#1 (2024 – an acousmatic miniature based on the voice of a single imaginary insect). Entomology#2 invites the ear to a bustling landscape: an imaginary pond\, a hyper-realistic dense forest\, a place where countless microscopic flying voices weave their own world. The material of Entomology#2 derives from recordings of a prepared grand piano (in German Flügel\, in Dutch Vleugelpiano: i.e. piano with wings). The piece is based on the navigation of a corpus made of these recordings\, processed to such an extent as to be stripped of any obvious piano connotation: metaphysically\, the notion of “wings” is the only association kept from those original prepared piano recordings. The corpus of these processed sounds unfolded into a layered and multi-dimensional field\, inspiring an exploration similar to the spatial-sonic exploration of a field recording. The result of such explorations is a soundscape filled with densities\, like countless flying beings swarming and coexisting. Entomology#2 draws the listener into a synthetic\, living-like system of microscopic organisms\, where communication\, competition\, and adaptation unfold collectively in an endless dance. 
\nAbout the artist\nThanos Polymeneas-Liontiris is a composer\, sound artist and Assistant Professor (Music & Interactive Media) at the National & Kapodistrian University of Athens\, Greece. His practice comprises computer-aided compositions\, interactive audiovisual installations\, immersive audiowalks\, generative art\, interactive music for dance\, theatre and intermedia performances. He has obtained a BA in Double Bass\, and a BA in Electronic Music Composition from Rotterdam Conservatoire\, while following courses at the Institute of Sonology (Royal Conservatoire of The Hague) and at IRCAM. He completed two MA degrees\, both with distinction: in Art and Technology (Polytechnic University of Valencia) and in Creative Education (Falmouth University). In 2019 he concluded his PhD research aided by a fully funded CHASE-AHRC scholarship at the University of Sussex. He has taught in Higher Education since 2011 (Falmouth University\, University of Sussex\, University of Brighton\, Ionian University and National & Kapodistrian University of Athens). His works have been presented\, among others\, at Tectonics Festival\, Modern Body Festival\, Athens and Epidaurus Festival\, Holland Festival\, Todays Arts\, Attenborough Centre\, Kalamata International Dance Festival\, The Athens Concert Hall\, Onassis Foundation\, and the Biennale of Young Artists from Europe and the Mediterranean. His publications encompass subjects related to Pedagogy\, Technology and Aesthetics. \n  \nRay Tsai: Jetlag – Time Difference\nJetlag – Time Difference is a fixed-media electroacoustic work that explores the relativity of time perception and relational temporality through sound. The piece juxtaposes three overlapping yet unsynchronized temporal systems: biological time represented by heartbeats and bodily rhythms\, social time shaped by daily routines and notifications\, and mechanical time articulated through clock mechanisms and pulses. 
Through processes of temporal displacement\, fragmentation\, reversal\, and spectral transformation\, these functional temporal references gradually lose their stability and dissolve into textural sonic states. Beyond individual perception\, the work also reflects intersubjective temporality—how differing rhythms and internal clocks remain subtly connected through traces of memory\, anticipation\, and interaction\, even under temporal dislocation. Rather than resolving into synchronization\, Jetlag – Time Difference presents time as a fragile\, shifting network of relations that persists in misalignment. \nAbout the artist\nRay Tsai (Tsai Yi-Jui)\, born in Hsinchu and currently studying at National Yang Ming Chiao Tung University\, is a DJ\, music producer\, and new media artist. His work spans sound art\, electroacoustic music\, and video installation\, using experimental sonic structures to explore the relationship between technology and perception. Under the alias †Egothy†\, he is active in the underground electronic music scene\, performing noise\, deconstructed electronics\, and other avant-garde styles that shape sensory experiences oscillating between chaos and order. \n  \nJuan J.G. Escudero: Lazy whirls of glow\nThe combinatorial structure of a triangulated dodecahedral three-manifold is used in the formal design of this work. This type of space is considered for modelling the spatial structure of multi-connected universes. The basic sound materials were recorded in an acoustic piano which\, due to certain circumstances\, remained in silence for a long time. \nAbout the artist\nJuan J.G. Escudero is a composer and researcher based in Madrid (Spain). He received his musical education at several centres and conservatoires and studied composition with Francisco Guerrero Marín in Madrid. He has carried out research and teaching activities in mathematics\, physics and music technology at various universities. 
The results of his research in the fields of algebra\, geometry and astronomy\, published in scholarly journals and books\, have been some of the main guides to formalization procedures. Harmonizations of aperiodic ordered temporal sequences\, which form the basis of the formal and rhythmic structures\, play a major role in several of his instrumental and acousmatic works. More recent formal approaches are related to the analysis of the topological invariants of aperiodic tiling spaces and the construction of singular hypersurfaces in algebraic geometry. Extramusical influences are connected mainly with philosophy\, poetry and visual arts. \n  \nEmilio Casaburi: Pumma\nThe past is no longer forbidden: through technology\, lost relationships and forgotten spaces can be revisited. ‘Pumma’ seeks to narrativize this experience\, drawing on old VHS recordings of my family as its sonic foundation. The piece unfolds a journey across space and time\, in search of renewed connections with lost ones. It blends acousmatic syntax\, sonic imagery and textual fragments. An attempt to harness the full potential of the acousmatic condition to project a narrative of memory\, distance\, and re-discovery. \nAbout the artist\nEmilio Casaburi (b. 1999) is a sound artist and composer from Italy. His artistic output includes acousmatic compositions\, field recordings\, audiovisual works\, and installations. He graduated in Electronic Music under the guidance of Alessandro Cipriani in Frosinone and is now studying at the Institute of Sonology in Den Haag. \n  \nYi-Hsien Chen: Quivering Silk\nQuivering Silk is a fixed-media electronic work\, currently realized in stereo\, with the possibility of diffusion in an eight-channel format. All sound materials in the piece are captured from the Chinese 21-string zither (guzheng). The guzheng is capable of producing a rich spectrum of timbres through a wide variety of plucking\, sliding\, glissando\, and sweeping techniques. 
In this work\, these instrumental sounds are subjected to electronic transformation\, layering\, and distortion\, gradually unfolding into large-scale waves of sound intended to immerse the listener. Within these sonic waves\, traces of identifiable guzheng techniques occasionally emerge; at other moments\, however\, the causal relationship between hand gesture and sound becomes ambiguous. This shifting perceptual boundary invites the listener to reimagine the instrument beyond its physical constraints and to imagine new possibilities for its vibrational behavior. The title Quivering Silk refers to the vibration of the guzheng strings\, which is not limited to the physical vibration produced by finger gestures\, but also refers to an electronic vibration shaped through digital sound processing and transformation. \nAbout the artist\nYi-Hsien Chen is a Taiwanese composer. He has received degrees from Taipei National University of the Arts and National Taiwan Normal University. In 2016\, he began pursuing a Ph.D. in music theory and composition at UC San Diego\, where he studied with Katharina Rosenberger\, Chinary Ung\, and Lei Liang\, his advisor and committee chair. He was awarded a full five-year scholarship from UC San Diego. He is currently teaching at the Department of Music in National Sun Yat-sen University. Chen composes in a wide variety of musical styles and engages in multi-disciplinary collaboration. He has created music spanning various instrumentations\, including orchestra\, ensemble\, electroacoustics\, theater music\, and soundtrack. 
His works have been selected and performed by renowned ensembles at festivals\, such as Mivos Quartet in “June in Buffalo\,” National Taiwan Symphony Orchestra in the competition of “Voice of the New and Brilliant – The Sound of Formosa\,” and “Weiwuying International Music Festival.” \n  \nPingting Xiao: Sonic Echoes of Ink\nThis composition\, Sonic Echoes of Ink\, explores the theory of embodied music cognition. It focuses on the relationship between body movement\, piano performance\, and sound manipulation. All sound materials are recorded from the piano\, including traditional keyboard playing and string plucking\, constructing sound traces reminiscent of ink painting through variations in single notes\, chords\, and resonant timbres. Additionally\, the work incorporates EMG (electromyography) sensor data\, collecting changes in muscle tension in the performer’s forearm and mapping the data to sound parameters. This allows the tension\, release\, and movement continuity to directly participate in the generation of musical structure. In this way\, music is no longer merely the result of being “played”\, but a process of co-writing by the body\, movement\, and sound. \nAbout the artist\nPingting Xiao is a PhD student at the University of Manchester. She is interested in how embodied music cognition interacts with cultural heritage and creative technology to create motion-responsive performance and visual works. She is dedicated to integrating Chinese traditional culture with music interaction\, exploring how ancient cultural elements can be harmonized with modern interactive technologies. She also seeks to inspire and lead a community of like-minded composers in China\, encouraging collaboration and participation in creative endeavours. 
\n  \nXiaoyu Su: The Luminosity of the Yugen Mist\n“The Luminosity of the Yugen Mist” is a fixed media (acousmatic) work that constructs a surreal sonic architecture from the organic timbres of flute\, bamboo flute\, and piano with extended techniques. Divorced from live performance\, the piece focuses entirely on the spectral transformation and spatial reshaping of these acoustic sources. Rather than depicting a clear narrative\, the music remains suspended in an unstable perceptual state—sound is continuously perceived but never fully resolved. Informed by the Japanese aesthetic of Yugen (subtle grace and mysterious depth)\, the work approaches sound as something partially concealed rather than fully revealed. The recorded materials function as indistinct traces of the physical world\, heard through a sonic haze rather than presented as fixed representations. Through granular processing and spectral resynthesis\, these concrete sounds are gradually destabilized\, dissolving into a luminous\, synthetic texture. The piece does not seek a final resolution; instead\, it oscillates between obscurity and clarity\, leaving the boundary between the acoustic and the electronic deliberately ambiguous. \nAbout the artist\nXiaoyu Su is a composer and researcher currently based in Japan. He is a first-year Master’s student in the Composition Course at the Graduate School of Showa University of Music\, where he also works as a Teaching Assistant for Harmony. In March 2025\, he graduated with honors from the Digital Music Department of Showa University of Music (Junior College Division). His musical training began with electronic organ studies at the age of five\, followed by pop vocal training during adolescence. He holds a bachelor’s degree from the School of Media and Design at Zhejiang University Ningbo Institute of Technology. 
Prior to relocating to Japan in 2022\, he worked as a music teacher at Ninghai County Experimental Primary School while engaging in sound design and music production activities in China. His recent works focus primarily on electronic and acousmatic music and have been presented at events including the Showa Digital Music Live (2023\, 2024)\, the 28th Composition Concert (2024)\, and the Inter-College Sonic Arts Festival (ICSAF) 2024. In 2024\, he was selected as one of two presenters for the Graduation Concert at Showa University of Music. He has studied composition under Daisuke Okamoto and Masatsune Yoshio. Currently\, his practice centers on the creation and academic research of electronic music. \n  \nPak Hei Leung: Vocalise\nVocalise (2026) serves as an exploration of the meaning of the voice in the digital age. The piece utilizes SoundID VoiceAI\, an AI voice changer\, to generate audio from the software’s vocal and instrumental packages\, based on my recorded vocal input. What is heard in the work is a compilation of human vocal recordings\, as well as various snippets of audio clips generated from the tool as a response to the recordings. The recorded vocal clips\, varying between around 10 and 40 seconds\, include free improvisation that explores extended vocal techniques (e.g.\, vocal fry and mouth sounds)\, as well as some gestures or phrases. After generating them\, I selected specific snippets and clips to compile a musical work. In addition to the quality of the sounds\, I am interested in moments that sound particularly digital: either that there are artifacts or glitches in the sounds\, or that what is being “sung” or “played” is almost impossible for a human performer. Various Digital Signal Processing tools\, such as reverb and tremolos\, are added as suited. As snippets of human voice are integrated as part of the piece alongside AI-generated audio\, it is expected that the audience might not be able to distinguish between the two. 
This resonates with the artistic goal of the piece: to explore the voice – something I perceive as highly connected to one’s identity – in an increasingly digitalized world. This piece also explores possibilities of using the voice (or audio signals in general) to form musical gestures and shapes in different timbres with the aid of AI tools like this. Remarks: according to Sonarworks’ website\, voices that are used in SoundID VoiceAI are from artists who voluntarily worked with them and were compensated. \nAbout the artist\nThe compositions of Pak Hei (Alvin) Leung have been presented in various places in North America\, South America\, Europe and Asia. His music has been performed by music groups including Mivos Quartet\, Transient Canvas\, Rosetta Contemporary Ensemble\, Trio Mythos\, Duo Antwerp and Hong Kong Chinese Orchestra. His works have been featured at events including the ICMC\, International Symposium of New Music\, International Review of Composers\, Seoul International Computer Music Festival\, MUSLAB\, SEAMUS National Conference\, CMS National Conference\, SCI National Conference\, NSEME\, Electric LaTex Festival\, VIPA Festival\, June in Buffalo\, CMS Great Lakes Conference\, EMM and Hong Kong Contemporary Music Festival. Alvin is currently a PhD candidate in Music Composition at the University of North Texas. He received a Master of Music degree at Bowling Green State University\, and a Bachelor of Arts in Music from the Chinese University of Hong Kong (CUHK). His principal teachers include Joseph Klein\, Panayiotis Kokoras\, Marilyn Shrude and Wendy Wan-ki Lee. www.alvinleung.com/ \n 
URL:http://icmc2026.ligeti-zentrum.de/event/listening-room-2-3/
LOCATION:Hamburg University of Technology\, Building A (A 0.14)\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:13-05,Listening Room,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260513T110000
DTEND;TZID=Europe/Amsterdam:20260513T173000
DTSTAMP:20260505T121343
CREATED:20260421T185112Z
LAST-MODIFIED:20260428T113947Z
UID:10000181-1778670000-1778693400@icmc2026.ligeti-zentrum.de
SUMMARY:Listening Room 1
DESCRIPTION:Fixed Media | Program Overview\nPerseverance: An Artist Rendering\nMikel Kuehn \nThe Archival of Memory in Skin\nJoan Tan \n#paris\nTaito Fushimi \nAsymmetric Stamina\nAndreas Weixler \nCHAOTIC ITINERANCY\nWonseok Choi \nCorrosion Chamber\nHector Bravo Benard \nDew\nTom Bañados Russell \nFully Automated Luxury Music (selected tracks)\nFelipe Tovar-Henao \nLein\nKim Hedås \nOn the transparency of seeing through\nSean Peuquet \nThe Eternalist Paradox\nJuan Carlos Vasquez \nUnwritten Glow\nWen-Chia Lien \nVox Dei\nTomás Koljatic S. \nWhale Song Stranding\nDavid Nguyen \nWhere am I in the Universe?\nHanae Azuma \nDroplet\nJong Gyun Kim \n  \nAbout the pieces & artists\nMikel Kuehn: Perseverance: An Artist Rendering\nIn late February of 2021\, I was astonished to discover that NASA made several raw recordings of the recently landed Mars 2020 Perseverance Rover available to the public. Inspired by the first ever recorded (atmospheric) sound from another planet\, I began fantasizing about what the sonic environment of Mars might be like. This piece was constructed solely from four recordings capturing the sounds of the Martian wind\, the rover driving\, the rover’s mechanical parts (dust blower and various moving components)\, and the laser shots used to examine the properties of rocks. One additional recording was used: the inflight noise of the heat rejection fluid pump (recorded through the mechanical parts since there is no actual sound that propagates through the vacuum of space). These minimal source sounds were then processed\, spatialized\, and combined/expanded into various suggestive textures. The result is my “artist rendering” of a fantastical narrative of the Rover’s journey through the sonic landscape of Mars. Its title is also a nod to the perseverance within each of us as we learn to navigate through the global pandemic. 
Perseverance: An Artist Rendering opens with an imaginary camera zooming from deep space onto the lonely flight of the spacecraft as it sets up for entry into the Martian atmosphere\, then lands. In the short sequence immediately following\, most of the source sounds that are used to build the piece are exposed in context with the work’s formal narrative. From this moment on\, the journey moves from fairly literal to fictional\, even absurd\, as the rover drives through multiple sonic terrains such as a “machine” sequence\, a “thunderstorm\,” then encounters various “creatures” as it continues on its strange journey and eventual death. \nAbout the artist\nThe music of American composer Mikel Kuehn has been described as having “sensuous phrases… producing an effect of high abstraction turning into decadence\,” by New York Times critic Paul Griffiths. He has received awards from the Barlow Endowment\, the Chicago Symphony\, Composers\, Inc.\, the Copland House\, the Destellos Competition on Electroacoustic Music\, the Alice M. Ditson Fund\, the Flute New Music Consortium\, the Fromm Music Foundation\, the Guggenheim Foundation\, the League of Composers/ISCM\, and the Ohio Arts Council. Kuehn is professor of composition at the Eastman School of Music where he directs the Electroacoustic Music Studios @ Eastman (EMuSE). mikelkuehn.com \n  \nJoan Tan: The Archival of Memory in Skin\nI am a conflux of cultures — shaped by environments and circumstances I do not fully understand. A living juxtaposition of beliefs\, a contradiction of behaviours. An oxymoron. I’m learning to hold all of these parts together. The piece is built from sound fragments that remind me of childhood: Hokkien soap operas and theatre shows my grandma watched\, the voices of children at the playground where I used to play\, the creaking of old treadle sewing machines\, the static of ageing radios\, the ticking of analogue clocks\, the English news on TV at 8pm\, and so on. 
These unlikely combinations of sounds flitter between one another\, at times dissonant\, jarring even\, but always coexisting. When the physical world inevitably fades\, I hope their voices will still remain. \nAbout the artist\nJoan Tan Jing Wen (born 21 April 2000) is a Singaporean composer currently based in Cologne\, Germany. Her recent works place attention\, perception and the fallibility of memory at the forefront of her compositions. She is fascinated by how attention constructs and distorts one’s perceptions\, and shapes one’s entire experience\, both in music and in everyday life. She believes that every sound triggers a sensory response\, engages one’s imagination and evokes emotions through associations. Recognisable sound sources are distorted in her works\, leaving behind crafted gestures and faint memories of what they once were. \n  \nTaito Fushimi: #paris\n#paris is a piece developed during a one-month stay in Paris. It uses audio data circulated on social media and associated with specific locations\, treating these recordings as the environmental sounds of those places. The collected audio is processed through AI learning and generation\, and subsequently recomposed to form the final sound composition. On social media platforms\, cities are primarily consumed as visual objects. On platforms such as Instagram\, on-site sound environments are often replaced by trending music or narration\, and are intentionally muted or edited. As a result\, these audio elements begin to function as urban soundscapes formed within media\, distinct from those of the physical city. This work applies this approach to representations of Paris on social media. By presenting Paris as an auditory experience composed of multiple\, overlapping layers mediated through digital platforms\, the work explores the relationship between sound circulating in digital space and the city\, and offers a reconsideration of how contemporary urban environments are perceived. 
\nAbout the artist\nTaito Fushimi. Born in Japan in 2003. He aims to express dimensions of the city inaccessible in everyday life through visual and sonic forms. \n  \nAndreas Weixler: Asymmetric Stamina\nThe electroacoustic multichannel composition was created during a Composer-in-Residence at the VICC – Visby International Composers Centre in Sweden in 2025 in Studio Alpha. All sounds were recorded on the island of Gotland. Studio recordings of electric guitar and voice processed in real time form a fundamental musical framework of the composition. A special source of inspiration were the weekly gatherings of automobile enthusiasts every Wednesday at the harbor of Visby. Carefully restored vintage cars\, American cruisers\, and newly modified vehicles – all equipped with powerful V8 engines\, even a motorcycle. These deep resonant sounds became a central element of the sonic world\, contrasted by the presence of young car posers noisily circling through the night. This urban soundscape stood in striking opposition to the dramatic cries of the seagulls and the creaking of the floating piers in the Baltic Sea harbor. Production tools have included Pro Tools with plugins such as GRM SpaceGrain\, Sound Particles Brightness Panner\, R360 surround reverb\, Seventh Heaven 5.1\, Acon Multiband Dynamics\, and Stratus Reverb 7.0\, as well as Max programming for multichannel live processing\, including granular synthesis\, spectral delay\, FFT filtering\, ring modulation\, and FFT freeze reverb. Credits: Field Recordings: Author2\, Author1 Voice: Author2 Composition\, electric guitar & real-time Processing (Max): Author1 The creation of this work was supported by The Swedish Arts Grants. \nAbout the artist\nAndreas Weixler\, born 1963 in Graz\, Austria\, is a composer for computer music with an emphasis in intermedia realtime processing. 
He teaches at the mdw Vienna and at InterfaceCulture in Linz\, and serves as associate university professor at the CMS – computer music studio of Anton Bruckner University in Linz\, where he initiated the intermedia concert hall the Sonic Lab. He studied contemporary composition at KUG in Graz\, Austria\, taking his diploma with Beat Furrer\, complemented by international projects and residencies. \n  \nWonseok Choi: CHAOTIC ITINERANCY\n‘CHAOTIC ITINERANCY’ is a power electronics piece realizing harsh noise and glitch textures. Simple signals pass through an effects chain aimed at heavy distortion to gain saturated textures. Here\, they lose their original forms and are rebuilt into new ones. The listener perceives the deconstructed sound and its remaining essence simultaneously. As the processing shifts\, the listener is placed right in the middle of the distorted sounds’ itinerancy. Three sections themed ‘Accumulation’\, ‘Mutation’\, and ‘Derivation’ form chaotic textures using different methods. They share a goal of presenting fragmented sensations. Yet\, because the methods differ\, the area where itinerancy is felt and the character of the textures become distinct. Through this process\, I sought to find possibilities in excessively damaged materials. I also intended to sonically map this itinerancy by controlling methodologies and detailed elements. \nAbout the artist\nWonseok Choi (b. 1999) is a composer who pursues music situated at the boundaries of genres and media. In the realm of electronic music\, he constructs sounds using signal distortion and degradation as primary materials\, while in the acoustic realm\, he focuses on works that embody post-minimalism and alt-classical styles. His works have been presented by the Korea Electro-Acoustic Music Society (KEAMS)\, and he is currently pursuing a Master’s degree in Electroacoustic Music Composition at Hanyang University. 
\n  \nHector Bravo Benard: Corrosion Chamber\nThis composition integrates computer-generated sounds with recordings of struck and bowed metal plates. Over time\, these materials are transformed\, recursively processed\, and spatially projected within an immersive environment surrounding the listener. As the piece unfolds\, the sonic textures grow progressively denser and more chaotic\, gradually distorting and ultimately destroying their original source. The title Corrosion Chamber evokes devices used to test the resilience of metals exposed to harsh conditions over time. It also alludes to metaphorical “chambers\,” such as those of government institutions\, where the original intent of laws and policies can be eroded and twisted to serve power at the expense of the public good. It also suggests the decay of rational thought within social-media echo chambers and through the careless use of AI tools. The piece was originally produced in 7th order Ambisonics. \nAbout the artist\nHector Bravo Benard. Originally from Mexico City\, he studied philosophy and music at the University of Victoria (Canada)\, and later at the Xenakis Centre (France)\, the Institute of Sonology and the Royal and Rotterdam Conservatories (Netherlands)\, the National Autonomous University of Mexico\, the University of Washington’s DXARTS Center (USA)\, and the University of Birmingham (UK)\, where he received his Ph.D. He composes sound-based music for acoustic instruments\, live electronics\, and fixed media\, with a focus on timbral and spatial elements\, and natural phenomena such as non-linear dynamical systems. Some of his main teachers over the years include Agostino Di Scipio\, Julio Estrada\, Scott Wilson\, Clarence Barlow\, Paul Berg\, Gilius van Bergeijk\, René Uijlenhoet\, Gerard Pape\, Carla Scaletti\, Michael Longton\, Christopher Butterfield\, Andrew Schloss\, and Alex Dunn. 
His works have been presented internationally at events such as ICMC\, BEAST FEaST\, MA/IN\, SEAMUS\, Gaudeamus\, NYCEMF\, Sonorities Belfast\, Espacios Sonoros\, ACMA\, FIMNME\, Sound/Image London\, and the Kyma International Sound Symposium. He currently lives in the Netherlands and Germany\, working as an independent artist\, researcher\, and music software developer. \n  \nTom Bañados Russell: Dew\nDew is a concept piece built around a simple but flexible process that allows for great musical expression and freedom depending on the situation. It can be set up for a large variety of speaker setups and durations. The concept is based on a haiku by Kobayashi Issa: “This world of dew is a world of dew\, and yet\, and yet.” It focuses on change through repetition\, impermanence and the complex being built around the simple. While the piece could theoretically last forever\, it must eventually end. \nAbout the artist\nTom Bañados Russell is a Chilean composer and electronic music performer. They completed a bachelor’s degree in composition at the PUC Chile\, followed by a Master’s degree at the HMTM-Hannover in 2026. Their most recent work has focused on duos between an instrumental musician and live electronics. Their music has been performed by a variety of groups at festivals such as Klangbrücken and Impuls Academy\, by performers such as the Elision Ensemble. Among other accolades\, they received the Scholarship for Musical Excellence of the PUC and the Lower Saxony Scholarship for Innovative Composition. \n  \nFelipe Tovar-Henao: Fully Automated Luxury Music (selected tracks)\nTrack selection from the upcoming album\, “Fully Automated Luxury Music”: 3. caprice\, 6. waltz\, 8. nocturne. Fully Automated Luxury Music (F.A.L.M) is an open-source\, generative music album. 
The music is written as\, and generated through\, stochastic algorithms—probabilistic\, rule-based processes designed to produce finely structured yet potentially infinite variants of a musical output\, in the form of audio files. The code is end-to-end (E2E)\, meaning it generates and assembles the entire album from scratch—in other words\, it’s a fully reproducible and parameterized work. This album serves primarily as a proof of concept for open-source music—a still recent and under-explored compositional practice (see\, for instance\, Pierre Cusa AKA Pure Code’s Ambient Garden Album)—and as a reflection on recent developments in AI automation\, what they mean for the future of artistic practice\, and how human expression can remain central to algorithmic design. The title is a wink and nod to Aaron Bastani’s popular book\, “Fully Automated Luxury Communism: A Manifesto”\, which offers a cautiously optimistic\, though increasingly unlikely\, utopian vision of technology’s impacts on society. \nAbout the artist\nFelipe Tovar-Henao is a US-based multimedia artist\, developer\, and researcher whose work explores computer algorithms as expressive tools for human creativity\, cognition\, and pedagogy. His music is often motivated by and rooted in transformative experiences with technology\, philosophy\, and cinema\, and it frequently focuses on exploring human perception\, memory\, and recognition. 
As a composer\, he has been featured at a variety of international festivals and conferences\, including TIME:SPANS\, the International Computer Music Conference\, the Mizzou International Composers Festival\, the Ravinia Festival\, the New York City Electroacoustic Music Festival\, WOCMAT (Taiwan)\, CAMPGround\, the Electroacoustic Barn Dance\, CLICK Fest\, the SCI National Conference\, the SEAMUS National Conference\, the Seoul International Computer Music Festival\, CEMICircles\, IRCAM’s CIEE Summer Contemporary Music Creation + Critique Program and ManiFeste Academy\, Electronic Music Midwest\, and the Midwest Composer Symposium. He has also been the recipient of artistic awards and distinctions\, including the SCI/ASCAP Student Commission Award and the ASCAP Foundation Morton Gould Young Composer Award. He is currently Assistant Professor of AI and Composition at the University of Florida. \n  \nKim Hedås: Lein\nLein is music that stems from the history of both organ music and electroacoustic music. Although these two fields have followed different paths through history\, they share some similarities\, not least through experiments that explore and expand both space and time. By listening backwards\, certain lines of origin can be transferred from the past to the present\, sometimes clear and recognisable\, sometimes distorted and fragmented. Microscopic units of rhythm form polyphonic lines as well as alloys of sound\, dynamically connecting what was previously unconnected. Lein is a multichannel fixed-media piece that has been performed at concerts and festivals in Sweden\, Germany and at New York City Electroacoustic Music Festival 2025. In June 2025\, Lein won two prizes at the international acousmatic composition competition at the Weimarer Frühjahrstage Festival in Germany: Second Prize and the Audience Award. 
\nAbout the artist\nKim Hedås\, born 1965\, Swedish composer and researcher\, PhD\, Professor of composition at the Royal College of Music in Stockholm (Kungliga Musikhögskolan) and a member of the Royal Swedish Academy of Music (Kungl. Musikaliska Akademien). \n  \nSean Peuquet: On the transparency of seeing through\nR. Murray Schafer pointed out in 1977 that our soundscape is increasingly lo-fi\, often the sound of traffic or\, especially at the Atlantic Center for the Arts where this piece was composed\, planes. While quiet is harder to come by\, there are wonderful new sounds too\, like the spray-paint can clicking of a hard-disk failure or powering on a belt sander. And yet\, we increasingly fetishize a return to not just natural soundscapes\, but the natural. Once we frame nature as being different (as a thing to return to)\, reality becomes an appearance of itself— obfuscating the naturalism of architecture\, pharmaceutics\, and software engineering under a guise of transparency. Are we ourselves not the nature to which we desire to return? In the “broken” appearance of this composition’s soundscape\, perhaps we can hear ourselves in relation to the natural world as\, echoing William Carlos Williams\, “touched but not held\, more often broken by the contact.” \nAbout the artist\nSean Peuquet is a composer and educator. He presents his work regularly at national and international venues for contemporary art and music such as ICMC (Limerick\, Daegu\, Shanghai\, Utrecht\, Ljubljana\, Belfast)\, SMC (Cyprus)\, NYCEMF\, TIES (Toronto)\, KEAMS (Seoul)\, Sines and Squares (Manchester\, UK)\, SEAMUS\, SCI\, EMM\, VU Symposium\, and more. In 2022\, Sean’s piece “Plane of Slight Elevation” (2021) was awarded Best Music: Americas by the ICMA. He has received numerous commissions for concert music\, installations\, and artist workshops at venues including Communikey (CMKY)\, The Ellie Caulkins Opera House\, and Museum of Contemporary Art (MCA) Denver. 
In 2020\, Meow Wolf commissioned Sean to compose an immersive and generative music and sound installation as part of their permanent Denver exhibition space\, Convergence Station\, opening to the public in 2021. Sean has been artist-in-residence at the Atlantic Center for the Arts in New Smyrna\, FL and ART 352 in Fort Collins\, CO. Sean is Dean of Art + Design and an Associate Professor at Rocky Mountain College of Art + Design in Denver\, CO. Prior to becoming Dean\, he served as Chair of the Music Production department at RMCAD for 5 years. From 2015 to 2020\, Sean was the Program Director and Lead Music Instructor for the Madelife Creative Accelerator program in Boulder\, CO. \n  \nJuan Carlos Vasquez: The Eternalist Paradox\n“The Eternalist Paradox” is an 8-channel acousmatic piece recorded with a chromatic button accordion and live electronics. It explores the paradoxical realm of eternalism\, where past\, present\, and future coexist. Through an intricate interplay\, a Max application applied processes to recordings sourced from diverse eras of creation\, intricately weaving them into a singular texture. This repurposed musical journey challenges conventional notions of time and invites the audience to contemplate the profound interconnections within the ever-flowing river of existence. \nAbout the artist\nDr. Juan Carlos Vasquez (www.jcvasquez.com) boasts a remarkable trajectory as an award-winning composer\, video game researcher\, and academic. His creations\, ranging from spatial audio works to immersive interactive experiences and game art\, have resonated across continents\, being featured in over 30 countries spanning the Americas\, Europe\, Asia\, and Australia. Dr. Vasquez is currently an Assistant Professor in Computation and Design at Duke Kunshan University. \n  \nWen-Chia Lien: Unwritten Glow\nUnwritten Glow is an acousmatic piece that illustrates how memories return in elusive and shifting ways. 
The “glow” evokes the lingering fragments that surface within us when we are remembering\, an inner brightness that is gentle\, persistent\, and never fully graspable. Memory changes each time it resurfaces: it may become blurred\, clearer\, softened\, or quietly altered. Sounds return in new shapes\, much like moments that reappear unexpectedly and never quite as they once were. This piece is not about a story\, but a space where subtle memories drift in and out\, inviting listeners to follow their own past and find their own version of the “glow” in the unfolding sonic world. This piece views memory as a living\, shifting presence rather than a fixed archive of experience. \nAbout the artist\nWen-Chia Lien is a Taiwanese composer and sound artist whose creative practice spans instrumental and electroacoustic composition\, film scoring\, and experimental theatre. Her works often engage with social issues\, historical events\, and cultural inquiry\, seeking to integrate music and technology as a medium for dialogue and reflection between sound\, space\, and audience. Wen-Chia is currently pursuing a Master of Music at the University of Toronto. She earned her Bachelor of Music in Music Theory and Composition from the University of Taipei in 2024. In recent years\, she has delved into multimedia creation and electronic music. In 2025\, she participated in ilSUONO Contemporary Music Week. Her orchestral work\, Scars\, received Third Prize in the 2024 Composition Competition of the National Taiwan Symphony Orchestra (NTSO) and was premiered by NTSO. Her electroacoustic piece In Our Stomach was selected for performance at the 2023 C-LAB Sound Festival: Diversonics\, and she was a selected visiting artist for the C-LAB × IRCAM Communication Program in the summer of 2024. In 2023\, she was the music designer for Skin Box\, a theatre and dance production that was presented at the Taipei Fringe Festival. 
Her artistic work has been recognised with several awards\, including the 2024 Taiwan Ministry of Education Study Abroad Scholarship and the 2025 University of Toronto France–Canada Experience Award. \n  \nTomás Koljatic S.: Vox Dei\nVox Dei is a multichannel acousmatic musical composition inspired by the sounds of the popular Feast of the Virgin of Guadalupe of Ayquina. This traditional Catholic celebration takes place annually on the eve of September 8th\, bringing together thousands of pilgrims in the heart of the Atacama Desert (Antofagasta Region)\, Chile. Based on field recordings I made in 2023 and 2024\, the piece explores the fervor\, devotion\, and unique soundscape of this festival\, where music\, dance\, and faith intertwine in a collective experience of celebration and sacrifice. The sound material for the work was captured at different moments of the feast: songs and prayers of the pilgrims\, the brass and percussion bands that accompany the religious dances (playing disparate\, overlapping music at full volume in close proximity)\, the voices of those arriving after long desert pilgrimages\, and the climactic moment of the celebration when thousands of devotees sing “Happy Birthday” to the Virgin Mary. This exceptionally rich sonic material is not subjected to extensive electroacoustic processing. Instead\, it is deployed to create an immersive experience that transports the listener to the heart of this festival and invites us to reflect on the power of sound as a vehicle for spirituality. \nAbout the artist\nTomás Koljatic S. is a Chilean composer. After studying music and mathematics in his country\, he continued his higher education in composition at the Paris Conservatory (CNSMDP)\, where he studied with professors Frédéric Durieux (composition)\, Claude Ledoux (analysis)\, Denis Cohen (orchestration)\, Luis Naón\, Tom Mays\, and Karim Haddad (new technologies). 
Simultaneously\, he completed advanced training in music technology at IRCAM (Cursus 1). Currently\, he works as a professor at UC | Chile Faculty of Arts\, teaching courses in music analysis and history. \n  \nDavid Nguyen: Whale Song Stranding\nInflections as sound process to sound quality\nEmanating otherness of the\nSound quality to sound process from the reflective \nResulting in an immersive rhizome-like sound world of the\nomnipresent of the dream like and the very literal \nAs different zones are successive\, simultaneous\, above\,\nbelow\, before\, and after\, to neither rise nor sink but only\nfloat \nA longing as the friction\, disputes of the literal and\ndream-like \nAnd \nA persistence of a pulse\, heavy\, through the literal as a\nconstant movement and the abstract ingenuous stillness\, a\nsound world of the discursive and the narrative \nChiastic process and quality is undermined as the\nreflections and inflections recur in rounded proportions.\nThe immersive and form is only tangible through this\ninsistence that is perceived as a dream occurring in\nreal-time \nFiguratively\nWhale Song suggests\, quite literally\, uncertainty that is \nStuck between the discursive and the narrative\,\nThe moving streams/waves and the pure tones surrounding\nwithin\,\nStranding \nAbout the artist\nDavid Quang-Minh Nguyen is an audio engineer\, sound designer/re-recording mixer\, and composer of concert music. His current interests lie in composing acousmatic works that explore multi-channel loudspeaker expansion\, various types of sound spatialization\, and immersive audio. \n  \nHanae Azuma: Where am I in the Universe?\nAn acousmatic piece\, “Where Am I in the Universe?” for 8 channels\, is remixed from the 16-channel version (original\, 2017). It was inspired by the poem “Two Billion Light-Years of Solitude” by the Japanese poet Shuntaro Tanikawa (1931 – 2024). 
Most of the harmonies in this piece are adapted from standard chords on the sho\, a Japanese free-reed musical instrument. \nAbout the artist\nHanae Azuma is a composer from Tokyo\, Japan\, who completed both her BM and MM at Tokyo University of the Arts\, Department of Musical Creativity and the Environment. During her studies in Japan\, she mainly concentrated on the relationship between music and other visual/performing arts such as dance and film\, and has been collaborating with contemporary dancers on various projects as a composer. She also completed her MM in music technology at New York University in 2014. Her works have been presented at music festivals and concerts in the United States\, Japan\, Korea\, Taiwan\, and elsewhere. She is currently an academic fellow at Acoustic Lab\, Tokyo University of the Arts. \n  \nJong Gyun Kim: Droplet\nThis work is an artificial ambient soundscape centered on the sonic textures of water droplets. By integrating actual recordings of falling water with textures reconstructed through heterogeneous materials such as PET bottles\, the piece juxtaposes natural and synthetic audio elements. It aims to capture the organically evolving rhythms and textures of liquids\, while establishing a three-dimensional sense of perspective within the soundscape through the manipulation of auditory distance and variations in textural density. \nAbout the artist\nJong Gyun Kim is a South Korean composer specializing in electronic music. He earned his undergraduate degree from Senzoku Gakuen College of Music under Takeyoshi Mori\, transitioning from a classical music background. His artistic portfolio includes presentations at CCMC in Japan\, ICMC and NYCEMF. Currently\, he is continuing his research and composition as a Master’s student under Richard Daniel Dudas at the Graduate School of Music\, Hanyang University. 
URL:http://icmc2026.ligeti-zentrum.de/event/listening-room-1-3/
LOCATION:Hamburg University of Technology\, Building A (A 0.18)\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:13-05,Listening Room,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260513T133000
DTEND;TZID=Europe/Amsterdam:20260513T153000
DTSTAMP:20260505T121343
CREATED:20260421T161440Z
LAST-MODIFIED:20260504T083328Z
UID:10000085-1778679000-1778686200@icmc2026.ligeti-zentrum.de
SUMMARY:Lunch Concert 3A
DESCRIPTION:Concert 3A offers a fascinating stage for the Steinway Spirio—the world’s most advanced self-playing piano system. In this session\, the piano is taken far beyond its traditional role: it acts as an autonomous performer\, a controller\, and even an interface for human brain activity. \nThis Lunch Concert is open to the public. Those without a conference pass can purchase a ticket here. \n  \nProgram Overview\nElevator Pitch\nJuan Vassallo \nChant\nYoonjae Choi \nMulholland Revisited \nHeloise Garry \n“Empathic Machines” for One Pianist’s Mind and Disklavier™\nMasatsune Yoshio and Atsushi Mori \nVoici que la saison décline\nMikako Mizuno \nExplode to Survive \nRichard Scott \nPlight of the Monarch\nSalvatore Siriano \n  \nAbout the pieces & artists\nJuan Vassallo: Elevator Pitch\nPhilosopher Hartmut Rosa suggests that our society is characterized by acceleration due to rapid technological advancements\, leading to constant time shortages. As we adapt to quick updates via smartphones and social media\, communication becomes faster and more fragmented\, favoring brief\, direct forms like the elevator pitch. An elevator pitch is a short summary speech meant to convey ideas or products within the duration of an elevator ride. It is aimed at being clear and persuasive to a wide audience.\nIn politics\, new communication techniques exploit these brief\, impactful messages\, often oversimplifying complex issues and lacking depth. Such strategies have been criticized for manipulating public opinion and stirring emotions\, leading to biased and divisive rhetoric that can aid authoritarian or intolerant movements.\nThe piece places an artistic focus on these contemporary methods of communication\, such as the elevator pitch\, and the potential for manipulation of sound-bite content by political figures. 
The piece thus is a sardonic analogy to a political speech\, which is portrayed here as empty of substance\, and as a construct derived from a carefully crafted algorithmic rhetoric\, and the sonification of spoken phrases. Additionally\, nonsensical political speeches synthesized through commercial text-to-speech systems are used as sound material for the electronics. \nAbout the artist\nJuan Sebastián Vassallo is an Argentinian composer and live-electronics performer based in Bergen\, Norway. He holds a Ph.D. in Artistic Research from the University of Bergen. His artistic research explores human–computer interaction in art creation\, at the intersection of computer-assisted composition\, artificial intelligence\, algorithmic poetry\, generative visuals\, and live electronics. \nHis music has been performed internationally by ensembles and soloists including Projecto RED (Argentina)\, Quasar Saxophone Quartet (Canada)\, Hinge Quartet (USA)\, Vocal Ensemble Tabula Rasa (Norway)\, Edvard Grieg Kor (Norway)\, JÓR Saxophone Quartet (Scandinavia)\, Zone Experimental Basel (Switzerland)\, and Lucas Fels (Germany)\, among others. \nHis work has received multiple awards\, including first prize at the AI-based composition contest at the IEEE Conference on Big Data (Washington\, D.C.) for Oscillations (iii). Other distinctions include selections and awards from the National Endowment for the Arts (Argentina)\, ISCM/Chengdu River Sun Prize (China)\, and several contemporary art competitions. \nHe has received international grants from UNESCO-Aschberg and the Organization of Ibero-American States (IBERMÚSICAS)\, supporting artistic residencies in the United States. His practice is strongly collaborative and interdisciplinary\, and alongside his experimental work\, he maintains an active career as a tango pianist and arranger. 
\n  \nYoonjae Choi: Chant\nChant is a live electronic work that transforms the cello through vowel-based formant processing\, creating a hybrid vocal–instrumental language reminiscent of primordial voice. As part of a broader research project on real-time live electronics formant synthesis\, the piece explores how electronic modulation can expand instrumental identity and shape emotive\, multi-voiced textures. \nAbout the artist\nYoonjae Choi is a South Korean composer whose work explores the musical potential of extended tones and spectral qualities drawn from both traditional instruments and non-instrumental materials. His compositional practice focuses on integrating acoustic sound with live electronics\, soundscapes\, and computer-based technologies. He frequently collaborates across media arts and experimental music disciplines. \nHe studied with Richard Dudas at Hanyang University and with John Gibson and Chi Wang at Indiana University. He is currently pursuing a doctoral degree in composition at the University of North Texas\, studying with Panayiotis Kokoras. His music and research have been featured at international conferences and festivals. \n  \nHeloise Garry: Mulholland Revisited\nMulholland Revisited is an interactive composition for Yamaha Disklavier / MIDI keyboard and ChucK\, integrating real-time interaction between acoustic and electronic elements. By leveraging MIDI input\, the piece enables the piano to function as both a performer and a controller\, triggering ChucK-generated sound textures in response to live performance. \nInspired by a pivotal phone conversation in Mulholland Drive (Lynch\, 2001)\, the work explores the blurred boundary between dream and reality through a dynamic interplay between piano-generated material and algorithmic sound synthesis. The electronic elements emerge as an extension of the piano’s acoustic voice\, reinforcing the psychological tension that defines the narrative arc. 
An homage to David Lynch\, the piece mirrors his fascination with fractured identities and surreal atmospheres\, immersing the listener in a sonic landscape that expands the piano’s traditional interface into new musical and narrative dimensions. \nAbout the artist\nHéloïse Garry is an artist working at the intersection of filmmaking\, theater\, and performance\, exploring the aesthetics of totality across art forms. Her compositions reflect a deep interest in cross-cultural and linguistic experimentation and sonic storytelling. Her work has been presented at ICMC\, NIME\, NYCEMF\, ICAD\, Audio Mostly\, the Audio Engineering Society\, and the Internet Archive. As a Yenching Scholar at Peking University\, she researched the politics of independent Chinese cinema and the role of music in the films of Jia Zhangke. An artist-in-residence at Gray Area and the Mozilla Foundation in San Francisco\, she has collaborated with IRCAM and the Columbia Computer Music Center\, and explored the sonification of the universe under the mentorship of physicist Brian Greene. In September 2024\, she joined Stanford’s Center for Computer Research in Music and Acoustics (CCRMA)\, where she studies with Mark Applebaum\, Paul DeMarinis\, and Ge Wang. Héloïse holds bachelor’s degrees in Filmmaking\, Economics\, and Philosophy from Columbia University\, Sciences Po\, and Sorbonne University. \n  \nMasatsune Yoshio and Atsushi Mori: Empathic Machines\nWhat lies beyond the pianist’s technical skill — music in which body and mind are fully integrated?\nIn this work\, a pianist’s brainwaves are sensed using the EMOTIV Insight device\, and the data is processed in Max 9 to generate performance information that is transmitted and played by a Disklavier™ piano.\nThrough this body-extended expression\, the resulting piano music — beyond the human hand alone — becomes a speculative answer to the question posed above. \nAbout the artists\nMasatsune Yoshio (1972- ) was born in Kobe. He is a composer and Media Master No. 
75. He specializes in composing fine art pieces with computers\, grounded in the creation of and research on algorithmic composition\, sound synthesis\, live electronics\, and expression with information technologies. His electroacoustic pieces have been performed both in and outside Japan. He is an associate professor at Showa University of Music. \nAtsushi Mori is an Associate Professor at the Junior College Division of Showa University of Music. He completed his studies in the Department of Composition and the Graduate School at Showa University of Music\, studying under Kazuhisa Akita.\nIn 1987\, he received the Silver Prize in the A1 Category of the PTNA Piano Competition\, and in 1993\, he performed with the Warsaw Philharmonic as part of the Yamaha JOC overseas concert tour. He composed Fanfare for the “Festival of Student Orchestras” in 2002.\nIn addition to his work as a composer\, Mori is active as a keyboardist\, providing live support\, arrangements\, and recordings. He also specializes in music production using DAWs such as Ableton Live and Logic\, and is dedicated to the analysis of popular music and the development of solfège teaching materials. His research focuses on the integration of digital technology and music education. \n  \nMikako Mizuno: Voici que la saison décline\, for clarinet and electronics\nThe electronic part of this piece comprises sound files containing grains of different pitches and sizes\, all of which are derived from clarinet performance. These grains are placed in the sound field by the Spat program and diffused through a cube-shaped multi-channel system. The submitted version is rendered in four channels. The solo clarinet is required to produce special tone colours using multiphonic techniques\, breath tones\, harmonic colour trills\, etc. 
The subtle timbre of the instrument connects the minute changes in visual colours and the passing of time\, which were depicted in a poem by Victor Hugo.\nThe title of this piece comes from one of Hugo’s poems. At the end of summer\, the season seamlessly transitions to autumn. The bright blue sky turns grey\, the birds shiver and the grass feels cold. I tried to create sounds that reflect these slight changes and delicate nuances.\nThe clarinet’s multiphonic sound is enhanced by harmonised breath tones. The harmonisation\, realized by special signal processing\, involves not only layered pitches\, but also the filtering of noisy long breaths. In performance\, especially in the latter half of the piece\, Max for Live is required to ensure effective interaction between the clarinet player and the electronic part\, which must realize the notated ensemble. The instrumentalist can play the piece from conventional musical notation\, since notated guides in the electronic part indicate the tempo and the nuance of each phrase\, particularly in the latter half of the piece. The instrumentalist is sometimes required to pick up the unpitched\, noisy electronic sounds during a fermata or a rest. \nAbout the artist\nMikako Mizuno is a composer and musicologist mainly active in Japan. Her music has been heard in many places\, including France\, Germany\, Austria\, Hungary\, Italy\, and the Republic of Moldova\, and at international festivals and conferences such as ISEA\, ISCM\, EMS\, Musicacoustica\, WOCMAT\, NIME\, ICMC\, and NYCEMF. Her pieces range from orchestra\, chamber music\, vocal ensemble\, and traditional Japanese instruments (sho\, koto\, shakuhachi\, no-flute\, biwa\, etc.) to networked remote performance over IPv6. 
\n  \nSalvatore Siriano: Plight of the Monarch\nMonarch butterfly populations face ongoing and compounding threats driven by habitat loss\, pesticide exposure\, invasive plant species\, and continued encroachment on open land where milkweed once thrived. Since the mid-1990s\, eastern migratory monarch numbers have fallen to a fraction of their historical peaks; although recent seasons have shown modest recovery\, populations remain far below long-term averages. \nWithin this context\, the work traces key stages of the monarch lifecycle\, including overwintering in Mexico\, migration\, mating\, and reproduction\, using scientific data from the Monarch Joint Venture and the U.S. Geological Survey translated into sonic parameters through additive and FM synthesis. Long-term population trends shape the evolving texture\, dynamics\, and rhythmic behavior of the sound\, allowing ecological data to inform the temporal and spectral structure of the audio. \nTranslation also operates across media. Original filmed footage from the Fox River Valley in Illinois\, a recurring migratory and breeding landscape for eastern monarch populations\, is transformed through point-cloud and depth-camera processes. Human presence and natural environments are rendered as shifting\, particle-based forms whose fragmentation mirrors the precarity of monarch habitats\, situating ecological data within a perceptual and embodied frame rather than a purely representational one. \nThe work concludes with documentation of a community-based public artwork that distributes milkweed seeds to local residents. While the piece does not involve direct audience interaction\, this closing gesture reframes participation as shared responsibility. 
Rather than positioning environmental change solely at the level of policy\, the work emphasizes individual and community-scale actions\, such as reducing pesticide use\, planting milkweed and other native species\, and allowing greater biodiversity within managed landscapes\, as tangible responses to ongoing habitat loss. Because eastern North American monarch butterflies lay their eggs exclusively on milkweed\, these localized decisions directly shape their capacity to survive and reproduce. \nAbout the artist\nSalvatore Siriano is a Chicago-based composer\, audiovisual artist\, and educator whose work explores the relationship between sound\, image\, and the natural environment through digital media. His recent works have been presented at Sound/Image Festival (UK)\, SICBM (Brazil)\, Seoul International Computer Music Festival\, Art Alive Festival (Portugal)\, WOCMAT (Taiwan)\, NOIS//E (Italy)\, as well as ICMC\, NYCEMF\, and SEAMUS. He is full-time music faculty at Triton College. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/lunch-concert-3a/
LOCATION:Hamburg University of Technology\, Building I\, Audimax 2\, Denickestraße 22\, Hamburg\, 21073\, Germany
CATEGORIES:13-05,Concert,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260513T193000
DTEND;TZID=Europe/Amsterdam:20260513T210000
DTSTAMP:20260505T121343
CREATED:20260415T121938Z
LAST-MODIFIED:20260421T201129Z
UID:10000121-1778700600-1778706000@icmc2026.ligeti-zentrum.de
SUMMARY:[Off-ICMC] Concert | Florentin Ginot: "Disturbance"
DESCRIPTION:Photo: Florentin Ginot\n  \n“Disturbance” is an audiovisual solo performance that blends elements of concert\, video art\, and theater. With his double bass and analog synthesizers\, Florentin Ginot invites the audience on a live nocturnal journey. Past and present collide with ghostly glitches and pulsating electronic rhythms.  \nregistration required here \n  \nThe Off-ICMC\nMusic is what brings us together\, even when everything else pulls us apart.\nMusic everywhere—it is part of our everyday lives. And yet\, we’re hearing it performed live on analog instruments less and less. Instead\, it often reaches us through speakers or headphones\, as files\, from the cloud. What does music mean to you? What does it sound like today? Where does it begin—and where does it end?\nThe ligeti center invites you to listen more closely and discover new sounds—to explore\, experiment\, and play. This year\, ICMC HAMBURG 2026 revives an old tradition: the Off-ICMC\, a free and accompanying festival curated for the general public and anyone curious about computer music. \nAll Off-ICMC events are free of charge.  \n\n  \n \n 
URL:http://icmc2026.ligeti-zentrum.de/event/off-icmc-concert-florentin-ginot-disturbance/
LOCATION:Stellwerk Hamburg\, Hannoversche Straße 85\, Hamburg\, 21079\, Germany
CATEGORIES:13-05,Concert,Music,Off-ICMC
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260513T213000
DTEND;TZID=Europe/Amsterdam:20260513T233000
DTSTAMP:20260505T121343
CREATED:20260421T162148Z
LAST-MODIFIED:20260503T185705Z
UID:10000088-1778707800-1778715000@icmc2026.ligeti-zentrum.de
SUMMARY:Club Concert 3C
DESCRIPTION:Club Concert 3C is an exploration of the boundaries of collective improvisation and creative technology. The SPIIC Ensemble of the HfMT Hamburg presents a program in which the audience has a say\, algorithms extend historical works\, and artificial intelligence reinterprets human movement as a “hallucination.”\nIn the industrial atmosphere of the Speicher am Kaufhauskanal\, acoustic instruments merge with live coding\, neural synthesis\, and interactive notation. \nThis Club Concert is open to the public. Admission is free; registration is not required. \n  \nProgram Overview\nLiquid Tensioning\nFernando Egido \nSinophony for Clarence\nJuan Arturo Parra Cancino \nChimerique\nJonathan Wilson \nNEBULA\nEnrique Tomás and Moisés Horta Valenzuela \nplastique\nSe-Lien Chuang and Andreas Weixler \nShamanic Protocol\nOscar Corpo \nA Walk in Polygon Field\nRob Canning \nDEPRECATED\nDenis Polec Vocal \n  \nAbout the pieces & artists\nFernando Egido: Liquid Tensioning\nLiquid Tensioning is a work for violin and double clarinet\, live notation\, live generative system\, live electronics\, and attendees’ participation (category: Improvised work for ensemble and electronics (SPIIC+ Ensemble)). It is a collaborative\, interactive work created in real time through its own self-evaluation: the attendees evaluate the work via a web app\, and the generative system changes according to that evaluation in real time. The musicians receive notes via a live notation system on their mobile phones. The title refers to the model of tension provided by the generative system\, a musical tension that is not related to the properties of the musical material. The work belongs to a series in which the composer creates a self-referential musical generative system based on the real-time evaluation of the work; the main musical material of this work is its evaluation. 
The work duration is about 10 minutes. \nAbout the artist\nFernando Egido studied composition with José Luis de Delás at the School of Music of the University of Alcalá de Henares and received musical training in workshops with composers\, analysts\, and performers associated with LIEM and GCAC. He studied computer music with Emiliano del Cerro.\nHe has published several papers at international conferences.\nHis works have been performed at festivals such as ICMC 2023–2025\, the Bled International Festival\, the SMC Conference in Graz\, Convergence Festival\, Ars Electronica Linz\, Atemporánea Festival\, the AIMC 2022 conference\, EVO 2021\, the OUA Electroacoustic Music Festival 2020\, ISMIR 2020 in Montreal\, the Seoul International Electroacoustic Music Festival 2019\, the ACMC 2019 conference in Melbourne\, the SID 2015 conference in New York\, Venice Vending Machine III\, the New York City Electroacoustic Music Festival\, JIEN in the Auditory 400\, La hora acúsmatica\, SMASH Festival\, the Encontres Festival in Palma de Mallorca\, and ACA. \n  \nJuan Arturo Parra Cancino: Sinophony for Clarence\nSinophony for Clarence is a work for ensemble and live electronics inspired by the formal and sonic principles of Clarence Barlow’s Sinophony I (1970)\, his first electronic composition. Rather than functioning as an arrangement or transcription\, this piece operates as an instrumental extension of Barlow’s electronic sound world\, translating and reactivating its core materials through acoustic performance and real-time electronic processes. \nThe work seeks to bring into the physical space of performance elements that\, in Sinophony I\, exist only in fixed media: continuous tones\, slow harmonic transformations\, beating frequencies\, and the perceptual tension between purity and instability. These characteristics are reimagined here as a living\, performative situation\, where instrumental sound and electronics merge into a single\, evolving spectral body. 
\nSinophony for Clarence builds on methods developed by Juan Parra Cancino to extract performative salients from early electronic works—elements that can be embodied\, negotiated\, and reshaped by performers in real time. Through this approach\, the piece revisits historical electronic material not as an object to be preserved unchanged\, but as a dynamic field for exploration\, experimentation\, and renewed artistic engagement. The aim is not reconstruction\, but continuation: to recover underlying processes and extend their implications into contemporary performance practice. \nBy situating acoustic instruments\, live electronics\, and spatialized sound within a shared listening ecology\, the work foregrounds collective tuning\, timbral fusion\, and emergent beating phenomena as central musical forces. The ensemble functions less as a group of independent voices than as a composite oscillator\, shaped by subtle interactions and shared attention. \nThis piece is conceived as a tribute to Clarence Barlow—composer\, educator\, and friend—honoring both his pioneering contributions to electronic music and his enduring influence on ways of thinking about sound\, structure\, and musical intelligence. \nAbout the artist\nJuan Parra Cancino studied Composition at the Catholic University of Chile and Sonology at the Royal Conservatoire The Hague\, where he completed a Master’s degree in electronic music. He received a PhD from Leiden University in 2014 on performance practice in computer music. A guitarist trained in Robert Fripp’s Guitar Craft\, he has worked extensively in live electronics. He is a researcher at the Orpheus Institute and Regional Director for Europe of the International Computer Music Association (2022–26). \n  \nJonathan Wilson: Chimerique\n“Chimerique” is about the interaction of music and language. Written and premiered in 2017\, this composition is for an ensemble featuring improvisation\, narration\, and electronics. 
It was realized in a collaboration with poet and translator Patricia Hartland by incorporating her English translation of “Ravines of Early Morning” by Raphael Confiant into a musical setting. The title is taken from a word in this text. It is French for “chimerical\,” and it can be defined as 1: something that takes delight in illusions\, or 2: something that is utopian\, or unreal. The narrator forms associations with this word through various phrases and passages that relate to the part of the story in which the description of “chimerique” is elaborated. Throughout this performance\, the performers listen and react to the text spoken by the narrator (and electronics). They are accompanied by electronics that consist of fixed media and live electronics from two different patches in Max/MSP using additive synthesis and granular synthesis. The musical instruments are the source material for granular synthesis. The score for this composition uses hybrid musical notation with some traditional notation for pitch and some graphic notation that leads performers subsequently to interpret not only the spoken phrases\, but also the graphic notation in their parts to determine volume\, pitch\, rhythm\, articulation\, and contour\, thereby making improvisation a necessity. The narrator and performers work together to generate a spontaneously formed through-composed work that marries text and music. The form can be described as through-composed in six sections. In the first section the performers respond only to a single phrase. In sections 2-6 the performers respond not only to phrases that delineate each section but also respond to extended narration shifting from descriptions of dreams\, the night\, madness\, illusions\, and at the end the act of dreaming itself. \nAbout the artist\nDr. 
Jonathan Wilson’s works have been performed at the Ann Arbor Film Festival\, European Media Art Festival\, ICMC\, SICMF\, SEAMUS\, NYCEMF\, MUSELAB\, NSEME\, Napoleon Electronic Music Festival\, Iowa Music Teachers Association State Conference\, and Midwest Composers Symposium. He is the winner of the 2014 Iowa Music Teachers Association Composition Competition. Jonathan has studied composition with Lawrence Fritts\, Josh Levine\, David Gompper\, James Romig\, James Caldwell\, Paul Paccione\, and John Cooper. In addition\, he has studied conducting under Richard Hughey and Mike Fansler. Jonathan is a member of the Society of Composers\, Inc.\, SEAMUS\, ICMA\, and the Iowa Composers Forum. \n  \nEnrique Tomás and Moisés Horta Valenzuela: NEBULA\nArtists working with deep-learning audio models often find that exploring their high-dimensional latent spaces requires chance-based\, combinatorial\, or technically complex machine-learning techniques. While these approaches can reveal unexpected possibilities\, they also make it more difficult to deliberately guide the models toward outcomes that are musically meaningful or aligned with specific creative intentions. \nIn this improvisation for solo instrument and two performers on live electronics\, we present an alternative approach to create a more interpretable and musically guided latent space exploration. This approach leverages Principal Component Analysis (PCA) applied to pre-encoded RAVE (Realtime Audio Variational Autoencoder) representations to reorganize the latent data into clusters that can be navigated more deliberately in performance. PCA reorganizes the encoded data into clusters based on shared timbral characteristics\, producing data clouds directly connected to the sonic properties of the source material. 
By structuring access to the latent space in this way\, our method bridges the gap between open-ended exploration and purposeful control\, offering performers a clearer and more intuitive means of shaping sound. \nTo prepare the improvisation\, and prior to the concert\, the solo instrumentalist provides an eight-minute recording that defines the sonic domain of the performance. This recording is encoded and analyzed\, restricting exploration to regions of the latent space shaped by the performer’s own material and giving the electronic musicians a more focused and musically coherent landscape to navigate. During the live performance\, the solo instrumentalist and the two electronic performers interact within this PCA-organized timbral map. Their trajectories through the latent space—along with the evolving clusters and sonic transformations—are projected in real time\, allowing the audience to see how latent-space navigation corresponds to audible change. \nThe musical materials resulting from this setup combine structured instrumental improvisation with electronically generated textures derived from latent-space navigation. While the overall form is left to real-time decisions between the soloist and the live performers\, the resulting sound world often alternates between rhythmically driven motifs—loosely recalling the interactive dynamics of small jazz ensembles—and more abstract electronic layers shaped through PCA-guided trajectories. These electronic textures\, produced by traversing clustered regions of the latent space\, serve as harmonically and timbrally evolving fields against which the soloist can articulate phrasing\, gesture\, and dynamic contour. The custom-built performance interfaces allow the electronic performers to shape these materials with precision\, enabling a responsive interplay in which acoustic action and machine-learned transformations continually inform one another. 
\nAbout the artists\nEnrique Tomás (*1981) is a sound artist\, researcher and assistant professor at the Tangible Music Lab who dedicates his time to finding new ways of expression and play with sound\, art and technology. His work explores the intersection between sound art\, computer music\, locative media and human-machine interaction.\nAs an individual artist\, Tomás’ activity is centered around ultranoise.es and focuses on performances and installations with extreme and immersive sounds and environments. He has exhibited and performed in spaces of Ars Electronica\, Sonar\, CTM\, IRCAM\, IEM\, KUMU\, SMAK\, NOVARS\, STEIM\, Steirischer Herbst\, Alte Schmiede\, etc.\, and in galleries and institutions throughout Europe and Latin America. \nMoisés Horta Valenzuela is a self-taught sound artist\, technologist\, musician\, and researcher from Tijuana\, Mexico\, based in Berlin. His work spans computer music\, neural audio synthesis\, conversational AI\, and the politics of emerging technologies\, approached through a critical lens that connects ancestral knowledge with contemporary digital culture. He has presented work internationally at Ars Electronica\, NeurIPS ML for Creativity & Design\, MUTEK México\, MUTEK AI Art Lab Montréal\, Transart Festival\, CTM Festival\, Elektron Musik Studion\, and the Sound and Music Computing Conference\, among others. \n  \nSe-Lien Chuang and Andreas Weixler: plastique\ninteractive audiovisual comprovisation for e-guitar\, green leaves & i-hands – GLISS – Green Leaves Imaginary Scenic Score\nDuration: ca. 8 min \nAbout the artists\nAndreas Weixler\, born 1963 in Graz\, Austria\, is a composer for computer music with an emphasis in intermedia realtime processing. He teaches at the mdw Vienna and at InterfaceCulture in Linz\, and serves as associate university professor at the CMS – computer music studio of Anton Bruckner University in Linz\, where he initiated the intermedia concert hall\, the Sonic Lab.\nHe studied contemporary composition at KUG in Graz\, Austria\, completing his diploma under Beat Furrer\, complemented by international projects and residencies. \nSe-Lien Chuang is a composer born in Taiwan in 1965 and based in Austria since 1991. Her work focuses on contemporary instrumental composition and improvisation\, computer music\, and audiovisual interactivity. She has presented works and lectures internationally in Europe\, Asia\, and the Americas at events such as ICMC\, ISEA\, and NIME. From 2016 to 2019\, she taught at the Computer Music Studio at Bruckner University Linz. Since 1996\, she has co-run Atelier Avant Austria\, specializing in audiovisual interactive systems\, real-time processing and computer music. \n  \nOscar Corpo: Shamanic Protocol\nShamanic Protocol is an online sound ritual performed by a partially damaged virtual entity. Its memory is an incomplete and corrupted archive\, composed of residual sonic materials related to shamanic rituals\, music therapy\, sound-based healing practices\, and data derived from musical epigenetics. Reshaped by the available data and the presence of connected users\, these fragments are reprocessed and reorganised each time the system is accessed\, generating a sonic ritual that follows a recognisable structure yet never manifests in the same way twice. The sound ritual has no declared purpose: it remains unclear whether the entity performs the rite as an attempt to repair itself\, an act of archive restoration\, a process meant to affect human listeners\, or simply because this process constitutes its way of operating. The variability of the outcome may suggest either a gradual recovery or a progressive deterioration of the system. 
The resulting sonic output exists in a space between therapeutic effect\, system malfunction\, and autonomous algorithmic process. The shifts between fragile calm\, overload\, interruption\, and recovery reveal the instability of the system that generates it. No clear boundary is drawn between healing\, malfunction\, or expression: these states coexist and remain indistinguishable within the process. The rite can be experienced as a purely electronic process\, or human performers\, in any instrumental or vocal configuration\, may take part in its enactment. Musicians are invited to participate in the ritual rather than interpret a fixed musical text. Guided by an open\, interpretative score\, performers do not execute predefined material but engage in the ritual itself\, interacting with the electronic layer by listening\, responding\, and aligning their gestures with the evolving sonic environment. The notation offers indications of behaviour\, density\, register\, and gesture rather than prescribed material; in this way\, performers take part in the rite by freely amplifying\, refracting\, and destabilising the entity’s activity. The score prescribes no precise instrumentation or techniques; in this instance\, the ritual is performed with a string ensemble alongside soprano saxophone\, bass clarinet\, piano\, and percussion. Performers do not guide the system\, nor do they follow it; instead\, they remain in a state of attentive coexistence with its unfolding behaviour. Each performance is therefore situated\, shaped by specific conditions\, configurations\, and presences.\nThe process does not call for interpretation: repair and damage are no longer separable; function and meaning no longer distinguishable. \nAbout the artist\nOscar Corpo (born 8 April 1997\, Naples\, Italy) is an Italian composer based in Hamburg. 
He studied Composition and Multimedia Composition in Naples\, and is now a PhD candidate at the HfMT Hamburg\, focusing on AI and collective improvisation with Ensemble 404. His work spans electronic\, instrumental\, vocal\, improvisation\, and music theatre. He has collaborated with Alexander Schubert\, Berliner Philharmoniker\, La Biennale di Venezia\, and Lux Nova Duo\, among others. \n  \nRob Canning: A Walk in Polygon Field\nA Walk in Polygon Field is a graphic score environment for controlled improvisation\, composed for 1–4 instrumentalists with electronics and surround diffusion. Three polygons—pentagon\, hexagon\, heptagon—rotate at different rates\, producing polymetric phase relationships (5-against-6-against-7). Performers activate objects orbiting these shapes\, interpreting compound visual motion as sonic material. An outer ring generates OSC data driving spatial processing.\nThe score defines states\, behaviours\, and constraints; performers negotiate what these structures sound like. Each polygon side represents a discrete performance state—pitch region\, articulation\, texture—but specific mappings remain open. Musicians enter and withdraw from a shared texture whose density and pacing emerge from collective decision-making.\nAuthored entirely in SVG\, the work embeds performance semantics directly into visual element identifiers\, executed by a browser-based runtime on networked tablets. This approach\, detailed in the accompanying paper “Scores That Run: Graphic Notation with Embedded Performance Semantics\,” demonstrates how open web standards support animated notation without specialised infrastructure. Each performance traces a different route—music negotiated through shared encounter with a moving score. 
\nFull Guide to Interpretation\, Programme Notes and supporting materials including the SuperCollider live electronics patch are available online: \nhttps://robcanning.github.io/oscilla/compositions/polygonfield2026/ \nAbout the artist\nRob Canning (Dublin\, 1974) is a composer\, improviser\, and creative technologist whose work explores animated notation\, improvisation\, and the dynamics of networked musical systems. He holds a PhD in composition from Goldsmiths\, University of London\, where his research examined distributed authorship in computer-assisted music. A long-time advocate of Free and Open Source Software\, he develops Oscilla\, an open-source platform for animated graphic notation and networked performance. \n  \nDenis Polec: DEPRECATED  \nDEPRECATED establishes a recursive feedback loop between a biological subject and a cluster of interpretative algorithms. The work investigates the friction between human indeterminacy and machine determinism. \nThe Setup\nA lone performer occupies the center of the stage\, stripped of traditional instrumentation. Facing them is a “panopticon” of sensors: computer vision cameras and open microphones. The human subject oscillates between legible behavior and “abnormal” states—engaging in erratic gestures\, non-semantic vocalizations\, and visceral spasms designed to evade learned pattern recognition. \nThe Process\nSimultaneously\, three isolated AI instances dissect this input in real-time. Unable to process the chaotic reality of the “Now\,” the systems hallucinate: Computer Vision misinterprets trauma as choreography; a Large Language Model forces these errors into a coherent narrative; and Neural Audio Synthesis re-synthesizes the fabrication into sterilized perfection. \nAbout the artist\nDenis Polec operates at the intersection of sound art and algorithmic criticism. 
His practice rejects the notion of human-machine collaboration\, focusing instead on the friction\, latency\, and inherent violence of predictive systems. Polec constructs adversarial performance systems that expose the limitations of neural networks when confronted with the chaotic reality of the biological body. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/club-concert-3c/
LOCATION:Speicher am Kaufhauskanal\, Blohmstraße 22\, Hamburg\, 21079\, Germany
CATEGORIES:13-05,Club Concert,Music,Special Event
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260514T110000
DTEND;TZID=Europe/Amsterdam:20260514T173000
DTSTAMP:20260505T121343
CREATED:20260421T182814Z
LAST-MODIFIED:20260504T080608Z
UID:10000187-1778756400-1778779800@icmc2026.ligeti-zentrum.de
SUMMARY:Listening Room 2
DESCRIPTION:Fixed Media | Program Overview\nMakuta\nFelipe Otondo \nMechanization\nYu-Cheng Huang \nSpazio di accumulazione\nLeo Cicala \nThe Lament of Prince Hamlet\nChen Mu Hsi \nThe Throat of the Earth\nYe Peng \nTime Crystal Structure II\nHe Jing \nTriangle\nRay Fields \nUndercurrents\nAntonio Scarcia \n蜜蜂之后\nLia Su \nSuwol\nSeongah Shin \nFMVP!\nGuanjun Qin \n  \nAbout the pieces & artists\nFelipe Otondo: Makuta\nMakuta brings together Afro-Cuban and Congolese rhythms with electronic synthesis and field recordings made in Kenya and England. Composed and produced at the Arts and Technology Lab at Universidad Austral de Chile\, the piece unfolds gradually through shifting rhythmic and textural layers\, exploring how different sound materials can coexist and evolve within the same sonic space. \nAbout the artist\nFelipe Otondo is a Chilean composer and sound artist working with spatial audio\, soundscapes and electronic music. He has held research and teaching positions at institutions including the Technical University of Denmark and Lancaster University. His works — from radio pieces to sound installations — have been presented in more than thirty countries. He is currently Professor at Universidad Austral de Chile and Director of the Arts and Technology Lab. More information: www.otondo.net and www.soundlapse.net \n  \nYu-Cheng Huang: Mechanization\n“Mechanization” is a fixed medium work. Its conception stems from my long-standing reflection on the changing nature of performative freedom throughout the history of music. Beginning in the Renaissance\, performers were able to freely add ornamentation and adjust accompaniment patterns; interpretation itself involved a high degree of improvisation and indeterminacy. With the advent of the Romantic period\, performance remained personal and expressive\, yet dependence on notated conventions gradually intensified. 
In contemporary music\, nearly every aspect of performance—gestural actions\, dynamic variation\, timbral control\, and even detailed playing techniques—is precisely prescribed through explicit notation. Performers increasingly function as exact executors rather than free re-creators\, as if they were simply carrying out a set of instructions. This leads me to ask: When performance can only be executed by following instructions\, do we still need performers at all? Are performers gradually becoming machine-like? \nAbout the artist\nYu-Cheng Huang\, born in Taipei in 2001\, is currently pursuing a master’s degree at the Graduate Institute of Music\, National Yang Ming Chiao Tung University\, with a focus on contemporary music and multimedia composition. His creative work spans piano\, string instruments\, electronic music\, and environmental sound recording\, and explores the relationship between music and everyday experience. His compositional style is diverse\, often integrating narrative elements\, poetic sensibilities\, and sensory memory to create distinctive listening experiences and evocative emotional spaces. \n  \nLeo Cicala: Spazio di accumulazione\nSpazio di accumulazione [9′ 40″] (2025-26). The piece is constructed through processes of progressive accumulation of musical material: simple\, repetitive and initially functional cells are added one after the other without anything ever really being eliminated. Each new element arises as a necessity\, but over time loses its original role\, becoming weight\, background noise\, clutter. The accumulation does not lead to development\, but rather to a saturation of the sound space\, in which quantity defines meaning. Music thus becomes a metaphor for the compulsive accumulation of contemporary society: the desire for possession generates a continuous stratification that suffocates emptiness\, but also the possibility of listening\, choice and silence. 
\nAbout the artist\nLeo Cicala (b. 1970) is a composer\, acousmatic performer\, live performer\, and teacher. He studied Instrumentation for Band at the Tito Schipa music conservatory of Lecce and graduated magna cum laude in Electronic Music at the same institution; he also holds a degree in Biology. He has performed more than two hundred works from the classic and contemporary electroacoustic repertoire on the acousmonium. In 2015 he published the handbook “Acousmatic Interpretation Manual” with Salatino musical edition\, and a series of related video tutorials can also be found on the web (www.acusma.it). In 2014 he released the CD “Rust” on the Apulian label Art & Classic; he has also released the CD “Punto di Accumulazione” on the label Creative Sources Recordings and composed the soundtracks for the short films “Io sono qui”\, directed by Pierluigi Ferrandini\, and “Storia di Valentina”\, directed by Antonio Palumbo. In Bari he founded the association ACUSMA Theater of Sound\, which promotes research in the sound arts through activities of teaching\, pedagogy\, and music production. Winner of the first prize in electroacoustic composition at the “Bangor Dylan Thomas Prize” in the UK\, he has had his compositions performed at festivals in Italy\, France\, Belgium\, Japan\, the United Kingdom\, Germany\, Argentina\, Cyprus\, and the USA. \n  \nChen Mu Hsi: The Lament of Prince Hamlet\n“The Tragic Lament of Prince Hamlet” is a mixed work for flute\, pre-recorded sound\, and live electronics\, with a duration of approximately nine minutes. The piece was composed during a period of emotional depression\, in which a pervasive sense of melancholy and pain rendered creative activity particularly difficult. During this time\, the figure of Prince Hamlet from Shakespeare’s Hamlet emerged as a central source of inspiration. 
Hamlet’s psychological condition—shaped by betrayal\, the pursuit of truth\, and the confrontation with existential dilemmas—resonated deeply with the composer’s own inner conflicts. Structured in four sections\, the work integrates flute performance\, fixed electronic sound\, and real-time audio processing to articulate a sonic narrative of emotional tension and transformation. Through the exploration of timbre\, texture\, and gesture\, the piece seeks not only to depict the psychological depth of the tragic character\, but also to reflect and project the composer’s lived experience. \nAbout the artist\nMu-Hsi Chen was born in 1998 in Taichung\, Taiwan. She began studying piano in 2003 and started composing through self-study in 2016. She is currently pursuing a degree in Electronic Music at the Institute of Music\, National Yang Ming Chiao Tung University\, under the supervision of Professor Yu-Chung Tseng. Chen is deeply interested in exploring the relationship between sound and its underlying narratives and philosophies. Her work often reflects a sensitivity to emotional states and inner experiences\, seeking to connect musical expression with personal and psychological dimensions\, and to offer a space where listeners may find resonance\, reflection\, or emotional solace. Her works have been performed at ICMC 2023 (International Computer Music Conference) and WOCMAT 2022 (International Workshop on Computer Music and Audio Technology). \n  \nYe Peng: The Throat of the Earth\nThis work recreates a complete Mongolian shamanic ritual through electronic music. Following the “slow-medium-fast” progression of the ceremony\, the music begins with throat singing and sounds of nature\, builds a trance-like dance atmosphere with ritual drum rhythms and animal calls\, and culminates in an intense rhythmic climax\, forming an auditory entreaty for divine blessing. By fusing traditional elements with modern electronic soundscapes\, it creates an immersive ritual experience. 
\nAbout the artist\nPeng Ye is a master’s candidate in Composition (Class of 2025) at the Wuhan Conservatory of Music and a student member of the Electronic Music Society of the Chinese Musicians Association. Primary research areas: electronic music composition and the integration of electronic music with visual media creation. \n  \nHe Jing: Time Crystal Structure II\nTime Crystal Structure II is a fixed-media computer music work based on the core concept of time crystals. It aims to translate the abstract physical notion of “breaking time-translation symmetry and exhibiting a periodically repeating structure” into perceptible auditory illusions through digital sound construction. Drawing on algorithmic composition technology\, the work deconstructs and reconstructs the periodic features of natural soundscapes such as tides and pendulums\, generating sound units with slightly variable parameters to simulate the ground-state motion of time crystals. Furthermore\, this piece is the second installment in the Time Crystal Structure series. \nAbout the artist\nHe Jing (b. June 1989) lives in Wuhan City\, Hubei Province\, China. He serves as a faculty member of the Art and Technology major\, Department of Composition\, Wuhan Conservatory of Music\, and also works as a master’s supervisor. A graduate of the Graduate School of Showa University of Music in Japan\, he has long been deeply engaged in the interdisciplinary field of art and technology\, and is committed to advancing the teaching and creative practice of computer music. His main research areas include AI music\, interactive music\, algorithmic composition\, and film scoring. He has presented his works at the International Computer Music Conference (ICMC) on numerous occasions. \n  \nRay Fields: Triangle\nTriangle is an electronic composition in one movement. It is a musical exploration of the frequency profile of a triangle. 
A sound file of a struck triangle was deconstructed digitally\, manipulated\, and then assembled and composed. \nAbout the artist\nRay Fields has composed music for orchestra\, chamber ensembles\, choir\, dance\, the stage\, and film. His works have premiered online and at international music festivals\, Imani Winds Chamber Music Festivals\, DC New Music Conferences\, the Clarice Smith Performing Arts Center\, MilkBoy ArtHouse\, the University of Illinois\, Prince George’s Community College\, Stevenson University\, and the Children’s Discovery Museum in Acton\, Massachusetts. His liturgical works have been included in worship services in Kensington\, Maryland and Pittsburgh\, Pennsylvania. In addition to composing\, Ray Fields writes about music for academic publications. His book-length analysis of Morton Feldman’s Piano and String Quartet was published in August 2022 by Rowman and Littlefield. He has also written a chapter for a collection of essays honoring Feldman’s centenary to be published in 2026. \n  \nAntonio Scarcia: Undercurrents\nThis fixed-media work is based on a compositional framework that foregrounds the interaction between sound materials and musical gestures. Materials are conceived as sound entities carrying latent structural relationships\, which may remain compositionally implicit while becoming perceptually manifest through their audible effects. Gestures function as temporal processes that articulate these relationships and shape the listener’s perception. The work integrates concrete and synthetic\, tonic and non-pitched sound materials\, organized through strategies of contrast and balance. Gestural processes are formalized as parametric sequences generated via a computer algebra system\, with sound synthesis realized in Csound. 
Although produced using digital technologies\, the compositional approach aligns with the tape studio tradition\, emphasizing detailed post-processing in both the time and frequency domains as an integral part of the compositional process. \nAbout the artist\nAntonio Scarcia received formal training in Electronic Engineering at the University of Padua\, Signal Processing at the University of Bari\, and Electronic Music at the Conservatory of Bari\, where he studied under Francesco Scagliola. He held various teaching positions at the Conservatory of Genoa from 2011 to 2021 and served as Lecturer in Electroacoustic Composition at the Conservatory of Salerno during the 2022–2023 academic year. His artistic production focuses primarily on acousmatic music and has been presented within the programs of major events\, including various editions of the NYCEMF in 2022\, 2021\, and 2019; ICMC in 2014\, 2013\, 2012\, 2010\, and 2007; the North Carolina Computer Music Festival in 2008; SMC in 2012\, 2010\, and 2009; the Mantis Festival in 2010; CIM in 2018\, 2016\, 2014\, 2012\, and 2010; EMuFest in 2013\, 2012\, 2011\, and 2010; SICMF in 2013; ICSC in 2013\, 2022 and 2024; and the Musica Nova Competition\, where he received honorary mentions in 2016 and 2013\, and first prize in 2011. \n  \nLia Su: 蜜蜂之后\nThis work uses bees as a conceptual model to explore the contemporary technological ecosystem. The sound materials are taken from recordings of flutes\, dizi (Chinese flute)\, and ocarinas\, preserving the acoustic characteristics of these instruments and placing them in an electronic environment inspired by swarm behavior\, cybernetic systems\, and speculative soundscapes. The innovation lies in the recontextualization of sound\, placing acoustic instruments within an artificial sound ecosystem. \nAbout the artist\nLia Su’s work frequently deals with the relationship between listening\, music and space. 
She works across sound\, music\, and installation to explore ideas surrounding cross-cultural and post-digital identity. Lia has exhibited\, performed\, and presented work across a range of artistic contexts\, including 798 Art Zone\, Beijing; National Centre for the Performing Arts\, Beijing; Tree Museum\, Beijing; West Kowloon Cultural District\, Hong Kong; New Taipei City Art Center\, Taiwan; IKLECTIK\, London; 5th Base Gallery\, London; Huddersfield Contemporary Music Festival\, Huddersfield; Cafe OTO\, London; and Bristol New Music\, among others. Lia is currently a PhD Candidate in Musical Composition at the University of Bristol\, having previously studied at the University of the Arts London. \n  \nSeongah Shin: Suwol \nI became drawn to the beauty of Jeju\, South Korea\, a volcanic island known for its strong winds and constantly shifting natural soundscape. As I spent more time there\, I became increasingly aware that the sounds of nature often echo artificial\, human-made sounds. I immersed myself in the sky\, the air\, and the movements—and sounds—of wind\, birds\, and insects. Rather than separating nature and humanity\, I developed my work toward an integrated auditory world\, focusing on new sonic environments created through the blending of field-recorded natural sounds and computer-generated sounds. \nAbout the artist\nComposer Seongah Shin works in the fields of contemporary music\, music for the performing arts\, and electronic music. She earned a Bachelor of Music in composition from Chugye University for the Arts\, a Master of Music in electronic music composition from the Peabody Institute of the Johns Hopkins University\, an MFA in sound design from the University of Missouri–Kansas City\, and a DMA in composition. She has held a sound designer residency with the Missouri Repertory Theatre and an artist residency at EMPAC at RPI. She created the MixMediaImprov. series and presented ten solo creative music concerts. 
In addition to collaborative projects such as the Thin Line Project\, she co-founded the Asia Computer Music Project (AMCP) and served as director for Asia/Oceania of the International Computer Music Association (ICMA). She is currently a professor of composition at Keimyung University\, Daegu\, South Korea. \n  \nGuanjun Qin: FMVP!\nFMVP! is an electroacoustic composition built entirely from the sampled sounds of basketball — the bounce\, the squeak of shoes\, the swish of the net\, and the roar of the crowd. Through sound transformation and spatial movement\, the piece narrates the emotional journey of an athlete: from doubt and criticism to determination\, and finally to victory. Dedicated to basketball legend Stephen Curry\, FMVP! captures the rhythm\, intensity\, and inner monologue of a player striving to redefine limits. Each percussive impact becomes a heartbeat; each layered resonance a moment of resilience. The composition explores how athletic struggle and artistic creation share the same pulse — persistence\, precision\, and belief. \nAbout the artist\nChampion (Guanjun) Qin is an award-winning composer\, producer\, and topliner\, currently pursuing a PhD in Music Composition at the University of Bristol\, fully funded by the China Scholarship Council (CSC). His works have been performed\, awarded\, or officially selected at major international music and sound art festivals\, including the Denny Awards (USA & China)\, YoungLione*ss Festival (Italy)\, Futura Festival (France)\, and the International Computer Music Conference (ICMC). Champion’s creative practice bridges electroacoustic composition and popular music production\, exploring the intersection of sound design\, cross-cultural aesthetics\, and narrative expression. He has collaborated with and composed music for renowned artists such as Jackson Wang\, a member of GOT7\, one of Asia’s most influential K-pop groups. 
His production work also extends to film and television\, including the acclaimed animated series GG BOND\, which drew over 50 million viewers in its first week of broadcast. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/listening-room-2-4/
LOCATION:Hamburg University of Technology\, Building A (A 0.14)\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:14-05,Listening Room,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260514T110000
DTEND;TZID=Europe/Amsterdam:20260514T173000
DTSTAMP:20260505T121343
CREATED:20260421T185520Z
LAST-MODIFIED:20260504T084611Z
UID:10000182-1778756400-1778779800@icmc2026.ligeti-zentrum.de
SUMMARY:Listening Room 1
DESCRIPTION:Fixed Media | Program Overview\nMotes of Time\nYuming Sun \nEaves Verse\nShunhang Huang \nFulgore \nTakeyoshi Mori \nFusion of Horizons\nChi Wang \nIntertwine\nJohn Thompson \nNeon Reverie (ver. 2)\nWoon Seung Yeo and Ji Won Yoon \nStellar Vibrato\nXingle Zhang \nVibe Higher (ver. 3)\nJi Won Yoon and Woon Seung Yeo \nWaves\nBike Öner \nOf Clouds and Clocks\nTom Williams \nA little history of mobile music Vol. 1\nGenoël von Lilienstern \n  \nAbout the pieces & artists\nYuming Sun: Motes of Time\nAn experimental electronic work created for data visualization. Motes of Time translates the abstract flow of time into a textured world of digital decay and sonic particles. A synchronized study of sight and sound. \nAbout the artist\nDr. Sun Yuming is a composer\, music producer\, associate professor and master’s supervisor at the Central Conservatory of Music (CCOM). He studied under Professor Li Xiaobing\, Director of the Department of AI Music and Music Information Technology. Sun began his studies at CCOM in 2008\, where he was admitted to the graduate program with distinction and later pursued his Ph.D. in 2020. Upon graduating in 2023\, he joined the faculty. Sun has received numerous honors\, including the National Graduate Scholarship\, the China Telecom Scholarship\, and the Soong Ching Ling Foundation-Gucci Music Fund. He was also recognized as an Outstanding Graduate of Beijing and honored as an Outstanding Contributor to the 2022 Beijing Winter Olympics and Paralympics. His compositions have won first prizes in the 7th and 11th Musicacoustica-Beijing Composition Competitions and the Oskar Kolberg Electronic Music Composition Special Competition. Active in large-scale productions\, Sun has served as music director for various CCTV programs\, including Charity Night\, the Shining Names award ceremonies\, the Safe Travels special for National Traffic Safety Day\, and Landmarks of Chinese Civilization. 
His works are widely broadcast across major media platforms; notably\, his piece “The Ship” was featured on the hit show I Am a Singer in 2019. \n  \nShunhang Huang: Eaves Verse\nEaves Verse is an interactive audio-visual performance created with MediaPipe and AI-powered real-time image generation. It is centered around the cultural symbol of the eaves in traditional Chinese architecture. The performance begins with the creator’s intent and uses hand gestures as the interactive medium. Using AI technology for real-time image generation\, the performance presents a three-tiered progression: abstract concepts\, AI-constructed concrete visuals\, and an aesthetic realm where reality and illusion intertwine. Lingering melodies of Eastern resonance intertwine naturally with the cadence of spoken phrases\, and the texture of electronic rhythms resonates synchronously with interactive movements. This allows traditional sonic rhythms to flow within a contemporary interactive context\, evoking an Eastern auditory essence where reality and illusion merge. In this work\, AI transcends its role as a technical tool to become a “symbiotic partner” attuned to the creator’s intent. AI generates visuals in real time in response to gestures\, transforming thoughts into tangible forms. Through this novel interactive format\, it reconstructs contemporary expressions of traditional aesthetics\, ultimately crafting an audiovisual experience that embodies Eastern philosophy’s interdependence of the tangible and intangible. \nAbout the artist\nShunhang Huang\, a 2023 undergraduate student in the Department of Music Engineering at Zhejiang Conservatory of Music\, specializes in Art and Technology with a focus on interactive music\, data-driven instruments\, and real-time audiovisual creation. His works have received awards at the China College Students Computer Design Competition\, Huichuang Youth Competition\, and Danny Awards International Electronic Music Competition. 
Multiple interactive audiovisual installations created by him have been exhibited at the Global Digital Trade Expo\, World Internet Conference\, National Conference on Sound and Music Technology\, and Hangzhou International Music Performance Industry Expo. \n  \nTakeyoshi Mori: Fulgore \nThe initial inspiration for this work came from a brief\, ten-second video fragment capturing the ripples formed by raindrops falling onto a glass ceiling\, shimmering and flickering like a luminous mosaic. The subtle yet complex expressions of light observed in this phenomenon became the point of departure from which the overall concept of the work gradually emerged. In this piece\, video materials drawn from everyday scenes and natural landscapes are used. By enhancing their chromatic contrasts\, the work seeks to reveal the rich diversity of “modes of light” that exist within our daily lives but often go unnoticed\, and to encourage a heightened awareness of the perceptual and emotional dimensions of light. The title\, Fulgore\, is derived from the Latin word meaning “radiant brilliance” or “that which shines brightly.” The sonic component of the work is constructed through the multilayered deployment of diverse pulse-based temporal structures and sustained\, drone-like spectral layers. Although the musical materials themselves are relatively simple\, the intention is to create an acoustic space that complements the visual concept and resonates with the shifting expressions of light. The original version of the piece was produced in a 7.1.4-channel immersive audio format\, further expanding the spatial dimension of the work and integrating spatial motion as an integral compositional parameter. \nAbout the artist\nTakeyoshi Mori. Composer\, sound designer\, and researcher in electroacoustic music. His works include multichannel sound compositions\, live electronics\, and visual music\, and have been presented at numerous international music festivals and research-oriented events. 
In recent years\, he has been actively engaged in international educational activities\, including inter-college exchange concerts in East Asia\, serving as a jury member for international competitions\, and organizing workshops and lectures. He is currently Director of the Laboratory of Advanced Music Production and Professor and Co-director in the Music Design Course at Senzoku Gakuen College of Music. \n  \nChi Wang: Fusion of Horizons\nFusion of Horizons in Hans-Georg Gadamer’s philosophical framework involves merging different frames of understanding to achieve a deeper comprehension. Frames of understanding are conceptual perspectives that shape how we interpret and make sense of information\, providing context for our experiences\, observations\, and data. In this composition\, the performer utilizes the Nintendo Ring-Con with a Joy-Con as a symbolic frame\, guiding us through various segments of visual and sonic narratives. This instrument not only serves as a literal tool but also represents a metaphorical frame through which different perspectives and interpretations are explored. Through its use\, the performance embodies the process of merging diverse frames of reference\, reflecting the idea of achieving a richer\, more integrated understanding. \nAbout the artist\nChi Wang is a composer and performer of electroacoustic music whose work explores sound design\, data-driven instrument creation\, composition\, and performance. Her music has been presented internationally at venues and conferences including the International Computer Music Conference\, New Interfaces for Musical Expression\, Musicacoustica-Beijing\, SEAMUS\, NYCEMF\, Kyma International Sound Symposia\, and Electronic Music Midwest. 
Her works have received numerous honors\, including selections for SEAMUS CDs\, Best Composition from the Americas (ICMC)\, the Pauline Oliveros New Genre Prize (IAWM)\, Prix CIME\, an Award of Distinction from MA/IN Festival\, and finalist recognition at the Guthman Musical Instrument Competition. Chi has served as a judge for major international electronic music competitions and is an active translator of electronic music texts. She holds a D.M.A. from the University of Oregon and is currently Associate Professor of Music (Electronic and Computer Music) at the Indiana University Jacobs School of Music. \n  \nJohn Thompson: Intertwine\nIntertwine is an audiovisual work that features tight coupling of the audio and visual elements. It draws inspiration from Lance Putnam’s “S Phase” from 2007\, which examines relationships of timbre to visual design. With a strong focus on pulse and rhythm\, the work gives sonic nods to Norman McLaren’s visual music works of the 1940s. In “S Phase”\, Putnam directly drives the visual material from the sonic material using 3D Lissajous curves\, where the amplitudes of audio signals are used to inform the positions of (x\,y\,z) vertices in a virtual space. In the documentation of “S Phase”\, Putnam supplies a diagram of the process. In the experimentation stage of “Intertwine”\, the process was reconstructed\, minus a few elements that treat the sound (detuning\, chorusing). As the work progressed\, the sound sources expanded to include concrete elements and a drum machine. The Strudel live coding environment was used to pattern the drum machine. Numerous performance parameters of the drum machine were exposed for real-time control\, along with parameters of the live coding environment and the audiovisual system itself. The controls were consolidated into a single controller\, allowing for interactive real-time performance. 
The system can be performed live and can yield many different results. What is presented here is an edited recording of a performance. The form of the work looks to balance free-flowing and repetitious elements\, as well as sections of varying tempos and densities. \nAbout the artist\nJohn Thompson’s music uses sound and image as a vehicle for expressing the beauty and complexity of the world. His compositions over the last decade have focused on audiovisual works and works for instrument and electronics. John is Professor of Music and Head of the Music Technology Program at the Gretsch School of Music at Georgia Southern University. He is an enthusiastic educator who has had the pleasure of sharing his passion for music and technology with students for almost two decades. He is dedicated to helping students develop their own creative voices and encouraging them to deeply consider the intersection of technology and music. \n  \nWoon Seung Yeo and Ji Won Yoon: Neon Reverie (ver. 2)\nNeon Reverie (ver. 2) is an audiovisual exploration of the composer’s old memories. More specifically\, it was conceived based on the story of “walking through the streets of the past bathed in neon light\,” aiming to musically present a short journey that captures reflections on the passage of time\, the contrast between past and present\, the fragility of existence\, the vague boundaries between reality and delusion\, and raw emotions to evoke introspection and wonder. The piece starts with human footsteps\, representing the imaginary stroll through the memory under neon lights. This intro is followed by a variety of sonic elements that symbolize fragments of memories related to nostalgic moments gone forever. In addition\, the rhythmic patterns weave an underlying emotional thread throughout the piece. Visuals of the piece act as a counterpart to the music\, carefully composed to create a dialogue between auditory and visual domains. 
Regarding the cinematography\, one of the most noticeable aspects is the revolving movement of numerous concentric\, dotted rings that correspond to the rhythmic progression. In addition\, the overall color scheme is neon-inspired to match the composer’s original idea of neon lights. Meticulously chosen visual effects are also applied to evoke the hazy\, unclear feelings aroused by the sonic texture of the music. \nAbout the artists\nMusic: Ji Won Yoon (composer) is active as a composer of both acoustic and electronic music. She is interested in artistic applications and realizations of various computer music technologies\, emphasizing multi-modality with sound at the center. She earned her B.A. and M.A. degrees in Music (Composition) from Yonsei University\, completed doctoral courses in Computer Music Composition at Dongguk University\, and studied at the Center for Computer Research in Music and Acoustics (CCRMA)\, Stanford University as a visiting researcher. Currently she is an assistant professor at the Department of Applied Music and Sound\, Keimyung University. \nVisuals: Woon Seung Yeo (visual artist) is a bassist\, media artist\, and computer music researcher. He is a professor at Ewha Womans University\, Seoul\, Korea\, and leads the Audio and Interactive Media (AIM) Lab. Dr. Yeo has received B.S. and M.S. degrees in Electrical Engineering from Seoul National University\, M.S. in Media Arts and Technology from University of California at Santa Barbara\, and M.A. and Ph.D. in Music from Stanford University. His research interests include audiovisual art\, cross-modal display\, musical interfaces\, mobile media\, and audio DSP. Results of his research are commonly shared through exhibitions and performances in the public interest. \n  \nXingle Zhang: Stellar Vibrato\nThis fixed-media work is inspired by the Hun Tian (“Celestial Sphere”) cosmology of Eastern Han astronomer Zhang Heng. 
In his vision\, the universe forms a resonant whole in which celestial motion and earthly life subtly correspond. Combining algorithmic sound synthesis with a particle-based visual system in Max/MSP\, the piece evokes a meditative encounter with ancient reflections on cosmic order. It positions audiovisual media as a bridge between eras\, allowing contemporary listeners to resonate with a millennia-old desire to understand the universe—a pursuit whose echoes transcend time. \nAbout the artist\nXingle Zhang (b. November 27\, 2000) is a graduate student in computer music composition at the Wuhan Conservatory of Music. \n  \nJi Won Yoon and Woon Seung Yeo: Vibe Higher (ver. 3)\nMagnetic resonance imaging (MRI) devices produce unique sounds during examinations. Because these sounds are usually considered unwanted noise\, almost every MRI subject wears ear protection gear to minimize discomfort. However\, despite this effort\, it is virtually impossible not to hear them\, as they are too loud for most people to endure. “Vibe Higher (ver. 3)” began with the composer’s interest in the timbre and specific rhythmic patterns of typical MRI sounds\, which later led to an understanding of MRI equipment’s working mechanism. Then\, by imagining what happens inside the device through recognizable\, step-by-step changes in the sonic (primarily rhythmic) elements\, the composer gives musical meaning to the device’s internal process. In addition\, MRI conceptually involves a sensory transfer process that obtains images by transcribing reflected magnetic waves; this itself provides an ideal metaphor for an audiovisual piece. For the music of this piece\, sounds generated during the operation of various MRI machines were collected\, processed using digital audio effects such as pitch shifting\, time stretching\, and reverb\, and then utilized as its sonic material. 
While MRI sounds generally have a similar tone and pattern at a certain level\, we can hardly say that every device’s sound is entirely the same; this provides subtle but significant variety to the piece’s sonic texture. To create the visuals\, we first used Processing to generate multiple versions of raw clips\, then combined selected segments to match their sonic counterparts. Combinations of simple figures\, such as triangles or squares\, are repeatedly arranged in a tile pattern to visually express the mechanical\, standardized feeling of sound\, presenting a kaleidoscopic layout. As the music progresses\, the shape of each tile continuously morphs\, and the overall visual composition and color scheme transform to convey the corresponding sonic mood into the optical domain. In addition\, it features a variety of visual filters that not only evoke the piece’s sonic impression in the visual domain but also create an atmosphere reminiscent of early-20th-century abstract films. \nAbout the artists\nMusic: Ji Won Yoon (composer) is active as a composer of both acoustic and electronic music. She is interested in artistic applications and realizations of various computer music technologies\, emphasizing multi-modality with sound at the center. She earned her B.A. and M.A. degrees in Music (Composition) from Yonsei University\, completed doctoral courses in Computer Music Composition at Dongguk University\, and studied at the Center for Computer Research in Music and Acoustics (CCRMA)\, Stanford University as a visiting researcher. Currently she is an assistant professor at the Department of Applied Music and Sound\, Keimyung University. \nVisuals: Woon Seung Yeo (visual artist) is a bassist\, media artist\, and computer music researcher. He is a professor at Ewha Womans University\, Seoul\, Korea\, and leads the Audio and Interactive Media (AIM) Lab. Dr. Yeo received B.S. and M.S. degrees in Electrical Engineering from Seoul National University\, an M.S. 
in Media Arts and Technology from University of California at Santa Barbara\, and M.A. and Ph.D. in Music from Stanford University. His research interests include audiovisual art\, cross-modal display\, musical interfaces\, mobile media\, and audio DSP. Results of his research are regularly shared with the public through exhibitions and performances. \n  \nBike Öner: Waves\nWaves is built on the dialectic of compression and release across electronic processing\, spatialization\, and form. Inspired by the physical behavior of waves\, the piece reflects cycles of compression and rarefaction that accumulate into the ocean itself. Granular processing\, time-stretching\, and spectral diffusion of the cello create dense zones of tension that abruptly dissolve into spatial release. The form mirrors this cycle\, as acoustic and electronic layers interact to shape a dynamic\, living sonic space. \nAbout the artist\nBike Öner is a Turkish composer and cellist from Istanbul\, working in acoustic and electroacoustic music. Her work creates mixed-reality soundscapes that combine Eastern and Western tuning systems\, diverse timbres\, and cross-cultural textures\, exploring proximity\, memory\, transformation\, and emotional co-presence in space. She studied cello at UdK Berlin and Istanbul University State Conservatory\, earned an MM in Composition from ITU MIAM in 2022\, and is pursuing a DMA at UT Austin\, where she teaches in EEMS. \n  \nTom Williams: Of Clouds and Clocks\nThis acousmatic composition explores the conceptual tension between orderly determinism and the irregular\, indeterminate realm of ‘clouds\,’ drawing on the philosopher Karl Popper’s metaphor of ‘clouds and clocks’ and its musical reimagining in Ligeti’s ‘Clocks and Clouds’ (1973). Translation operates as a guiding principle—moving between philosophical constructs and sonic experience\, and transforming the mechanical precision of a clock into diffuse\, spatially articulated sound structures. 
The work unfolds in a single movement articulated in two contrasting parts: an opening section dominated by cloud-like textures—formed through processes of granulation and spectral diffusion—followed by a second part where clock-derived materials assert greater clarity and rhythmic presence. The town clock in New Bern\, North Carolina\, provides a recognisable point of reference within a sound world otherwise shaped by obscured metallic and environmental resonances\, their identities dissolved into atmospheric ambiguity. Through this interplay\, the piece proposes an experiential continuum where temporal precision and sonic indeterminacy coexist\, foregrounding the creative potential of imperfect translation. \nAbout the artist\nTom Williams is a composer whose work spans both acoustic and electroacoustic music. His compositions have been performed internationally\, most recently at ICMC2025 Boston\, and NYCEMF2025. He has received awards including the Italian music medal “Città di Udine”\, British Composers’ nomination in Sound Art\, and his acousmatic piece “Piano Trace” received a special prize at the 4th Ise-Shima International Composition Competition. He has collaborated with renowned musicians such as New York cellist Madeleine Shapiro\, soprano Juliana Janes Yaffé\, bass clarinettist Sarah Watts\, percussionist Thierry Miroglio\, and Orchestra of the Swan. He has a doctorate in music composition from Boston University. Currently\, he is course director for the MA programme in music production at Coventry University. \n  \nGenoël von Lilienstern: A little history of mobile music Vol. 1\nThe piece engages with the complex relationship between humans\, art\, and artificial intelligence. Conceived as a transhumanist comedy\, it is created using generative video techniques and reflects on how human imagination and technological\, standardized ways of narrating intersect in contemporary artistic production. 
The work experiments with the interplay between image and sound\, questioning how meaning emerges from their shifting relations. Sound is not treated as a secondary element\, but as an equal partner to the visual\, unfolding within the cinematic 7.1 surround space. This spatial setup is understood as an electroacoustic laboratory\, where perception\, technology\, and narration are continuously reconfigured. Through humor and speculation\, the piece invites audiences to question AI-based and -related storytelling. \nAbout the artist\nGenoël von Lilienstern (b. 1979) is a composer of instrumental and electronic music. Former head of the Studio for Electroacoustic Music at the AdK Berlin and a doctoral candidate in multimedia composition at the HfMT. Performances by renowned ensembles such as Ensemble Intercontemporain and SWR Orchestra. Fellow of Villa Aurora and the Cité des Arts (Paris). Co-founder of the ktonal group for AI-related audio generation\, and co-curator of the Untwelving Festival for xenharmonic music\, Munich. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/listening-room-1-4/
LOCATION:Hamburg University of Technology\, Building A (A 0.18)\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:14-05,Listening Room,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260514T120000
DTEND;TZID=Europe/Amsterdam:20260514T140000
DTSTAMP:20260505T121343
CREATED:20260415T122311Z
LAST-MODIFIED:20260417T115252Z
UID:10000122-1778760000-1778767200@icmc2026.ligeti-zentrum.de
SUMMARY:[Off-ICMC] Rehearsal & Concert Visit for Families
DESCRIPTION:Photo: Max Henschel\n  \nWhat does contemporary music sound like? What happens during the rehearsals? And what challenges might occur? We’ll look into these questions during a rehearsal and concert visit for families.  \nFor families with children aged 7+\nRegistration required here \n  \nThe Off-ICMC\nMusic is what brings us together\, even when everything else pulls us apart.\nMusic everywhere—it is part of our everyday lives. And yet\, we’re hearing it performed live on analog instruments less and less. Instead\, it often reaches us through speakers or headphones\, as files\, from the cloud. What does music mean to you? What does it sound like today? Where does it begin—and where does it end?\nThe ligeti center invites you to listen more closely and discover new sounds—to explore\, experiment\, and play. This year\, ICMC HAMBURG 2026 revives an old tradition: the Off-ICMC\, a free and accompanying festival curated for the general public and anyone curious about computer music. \nAll Off-ICMC events are free of charge.  \n\n  \n \n 
URL:http://icmc2026.ligeti-zentrum.de/event/off-icmc-rehearsal-concert-visit-for-families/
LOCATION:Hamburg University of Technology (TUHH)\, Am Schwarzenberg-Campus 5\, Hamburg\, 21073\, Germany
CATEGORIES:14-05,Music,Off-ICMC
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260514T133000
DTEND;TZID=Europe/Amsterdam:20260514T150000
DTSTAMP:20260505T121343
CREATED:20260421T162627Z
LAST-MODIFIED:20260504T224853Z
UID:10000093-1778765400-1778770800@icmc2026.ligeti-zentrum.de
SUMMARY:Lunch Concert 4A
DESCRIPTION:Concert 4A marks a special moment of collaboration between Hamburg’s local music scene and international composers. Particular highlights are two world premieres written especially for the renowned Hamburg-based double bassist John Eckhardt. Known for his explorations at the boundaries between new music and sound art\, Eckhardt here pushes the sonic extremes of his instrument in dialogue with the computer.\nAlongside the focus on the double bass\, the audience can expect a journey ranging from “electroacoustic romanticism” to AI-driven violin improvisations. \nThis Lunch Concert is open to the public. Those without a conference pass can purchase a ticket here. \n  \nProgram Overview\nThe Water lily in the blaze\nNatsuki Kambe \nconfim\, assim\, sem fim\nRodrigo Pascale \nULYSSES II\nRoberto Cipollina and Eleonora Podestà \nThe Week\nHenrik von Coler \nEmpress Luo\nYao Hsiao \nLa Nuit Bleue\nZhixin Xu and Yunze Mu \nResonant Thresholds\nCecilia Suhr \nX6 – Hexaphonic Spatialized Guitar\nFrancesco Perissi and Giovanni Magaglio\n\nAbout the pieces & artists\nNatsuki Kambe: The Water lily in the blaze \nThis work for double bass and live computer electronics explores the wide range and rich timbral possibilities of the instrument through real-time signal processing in Max. Combining the powerful energy of the low register with the delicate beauty of flageolet harmonics in the high register\, the piece evokes the poetic image of water lilies glowing in a blazing sunset. In addition to the instrument’s inherent variety of tone colors\, the composer further expands its sonic potential through live electronics. The low register conveys a powerful\, flame-like energy\, while the high register\, produced through flageolet harmonics\, possesses a delicate beauty reminiscent of water lilies. 
These contrasting elements are brought together into a single poetic image: a burning sunset reflected on a pond\, with water lilies blooming in its shadow.\nFor the electronic component\, the composer used TRLib\, a Max object library for the realization of interactive computer music developed by Takayuki Rai. Throughout the piece\, grbFM\, which realizes granular sampling techniques in real time\, is employed extensively: in the low register\, it generates noise-based textures\, including quarter-tone inflections\, while in the high register\, it creates chordal sonorities inspired by the Japanese traditional wind instrument shō.\nAbout the artist\nNatsuki Kambe was born in 2004 in Yokohama\, Japan. They began studying piano at the age of five and started composition studies with Kazuo Mise at the age of fifteen. In 2020\, she graduated from the Music Department of Toho Girls’ High School.\nIn the same year\, they entered Toho Gakuen College of Music as a composition major and are currently a third-year student (as of January 2026). Since April 2024\, she has been studying computer music under Takayuki Rai. \n  \nRodrigo Pascale: confim\, assim\, sem fim\n“confim\, assim\, sem fim” was composed in 2024 during the Laboratorio de Composición Mixta of Resonancias Iberoamericanas. It is dedicated to the Festival Expresiones Contemporáneas and to Francisco. This composition explores the concept of infinity within limited systems.\nThe pre-compositional research involved extensive explorations of harmonies based on mathematical ratios. I established a structure featuring 15 harmonies\, beginning with two frequencies at a ratio of 16/15. Each subsequent harmony added a new frequency derived from the initial ratio\, multiplied by a series of ratios following the sequence [16/15\, 15/14\, 14/13\, 13/12\, 12/11\, 11/10\, 10/9\, 9/8\, 8/7\, 7/6\, 6/5\, 5/4\, 4/3\, 3/2\, 2/1]. Notably\, some harmonies—including the second—utilized this sequence in reverse. 
For instance\, the ratio [15/14] was employed as the foundation for the first two frequencies\, while the third harmony emerged from multiplying [15/14] by [16/15]\, yielding [8/7].\nThe forward sequence often led to more dissonant harmonies\, while the backward sequence inclined towards consonance\, and I frequently juxtaposed the two. An exception occurred between harmonies 13 and 14\, where both utilized forward sequences to create heightened tension\, concluding in a consonant 15th harmony. The sequence employs a set of regressive numbers\, each divided by its preceding integer. This approach allows for the potential to extend beyond 2/1 to 1/0\, thus engaging with a well-known mathematical problem. As the results of division increase when the denominator decreases\, division by zero is said to “tend to infinity.”\nIn this exploration\, I realized that the logical conclusion of the composition was to approach infinity musically. However\, I confronted the challenge that the double bass can only produce a finite range of sounds\, and that human hearing spans approximately 20 Hz to 20 kHz. Faced with this problem\, I sought solutions that transcended the confines of the system itself. This led me to investigate how the limitations of our auditory perception could be brought to the forefront\, creating illusions of seemingly ever-rising glissandi and of rhythm turning to pitch. The transformation of percussive sounds into frequencies and the use of Shepard tones played a crucial role in this composition.\nconfim\, assim\, sem fim delves into the boundaries of auditory perception\, aiming to investigate the concept of infinity within limited systems. This composition begins with a sequence of harmonies\, where subtle facets of infinity are explored through the techniques of the double bass. 
In its culminating section\, the work unveils the full potential of this exploration by incorporating exceptionally high frequencies and an enduring reverberation\, creating an immersive sonic landscape that invites listeners to experience the infinity within these media. \nAbout the artist\nRodrigo Pascale (b. 1996) is an internationally awarded Brazilian composer whose works have been performed worldwide by leading ensembles including JACK Quartet\, ICE\, MCME\, Splinter Reeds\, loadbang\, Hypercube\, Hinge\, and Sound Icon. A Prix CIME 2025 recipient and Gaudeamus Award 2026 Finalist\, he is pursuing a DMA at Peabody and has studied with Haas\, Kampela\, Fineberg\, Wubbels\, and Hersch. \n  \nRoberto Cipollina & Eleonora Podestà: ULYSSES II\nUlysses 2 is a project conceived by composer Roberto Cipollina. The work serves both as a performative and technological exploration of real-time performer-machine interaction\, emphasizing the role of AI not as a passive tool\, but as an active and adaptive musical agent within the creative process.\nThe work is conceived as a closed-form improvisational structure for acoustic instrument and real-time interactive electronics\, developed specifically to explore the creative potential of artificial intelligence in relation to the performer’s improvisation.\nAt the core of Ulysses 2 is the integration of Somax2\, a real-time generative system developed within the Max environment\, which enables responsive electronic behavior through the analysis and transformation of live performance data.\nWhile the project fully embraces aleatory elements and the concept of extemporaneity\, it also adheres to an organized formal structure that guides its overall development. 
In fact\, the performer engages with a series of prompts provided by the composer\, ensuring a coherent trajectory.\nThe electronic component\, built from a database of sampled sounds recorded by Eleonora Sofia Podestà\, responds and adapts to the performer’s expressive gestures in real time. Through Somax2’s processing\, the system generates musically congruent textures and transformations.\nThis piece highlights the software’s ability to translate performance parameters into musically coherent electronic responses\, fostering a dynamic and co-creative dialogue between human performer and machine intelligence. \nAbout the artists\nRoberto Maria Cipollina is a composer and researcher in immersive technologies applied to music\, whose works have been performed across Europe and America. His compositions include A Lover’s Tale (2018)\, Alchimie (2020)\, Lu Re d’Amuri (2022)\, and Al-Qantarah (2024). Author of two musicological books and lecturer on palazzi della memoria in music\, artificial intelligence\, and virtual reality\, his works are internationally performed and published by Da Vinci Records. \nEleonora Podestà \n  \nHenrik von Coler: The Week\nOne Week is an acousmatic composition that integrates a staged reading in live performance. Drawing on an introspective autobiographical text\, it reflects on emotional states and personal experiences during periods of transition and uncertainty. The work may be understood as a form of Electroacoustic Romanticism: in line with the 2026 ICMC theme\, One Week translates romantic ideas into the language of electroacoustic music. In doing so\, it explores a balance between technological investigation and personal expressivity. At the same time\, the piece seeks to reach a broader range of listeners by foregrounding emotional engagement and incorporating a contemporary text that resonates with present-day cultural contexts. 
\nThe tape part of One Week is constructed from autobiographical field recordings combined with analog signal processing and experimental sound synthesis. In addition to conventional contemporary techniques\, the production draws on echo chambers\, analog and digital tape machines\, and vintage synthesizers and effects units. This process produces dense\, noisy\, and organic timbres and textures while consciously engaging with recognizable tropes of acousmatic music. During performance\, the tape part is live-diffused by the composer. Delivered in Ambisonics (up to seventh order)\, the work can be realized on a wide range of spatial sound systems\, in both 2D and 3D configurations. \nThe staged reading is performed by musician and multimedia artist zl!ster\, who collaborated closely with the composer to refine the original text for performance. Through this revision\, the text is reshaped for the present moment while remaining anchored in the work’s autobiographical framework. \nAbout the artist\nPerformer: zl!ster is a Panamanian-American artist based out of Atlanta\, Georgia. His music embodies self-exploration through misinterpretations and exaggerations of real life. At times\, his work is a direct reflection of self; at others\, it is distorted\, shaped more by perception than reality. Rooted in curiosity and at times bravado\, his music lives in the realms of alternative rap and indie rock. \nComposer: Henrik von Coler is a musician and researcher\, working at the intersection of art\, science and technology. In 2024 he founded the Lab for Interaction and Immersion (L42i) at Georgia Tech’s School of Music. Before that he was the director of the Electronic Music Studio at TU Berlin and head of the Computer Music Team at the Audio Communication Group. In his research and creative work\, Henrik has explored various aspects of electronic music and musical instruments. 
This includes interface design\, algorithms for sound generation and experimental concepts for composition and performance. Most of his projects treat space as an integral part of music. In 2017 he founded the Electronic Orchestra Charlottenburg – an ensemble of up to 12 electronic musicians – to explore music interaction on immersive loudspeaker systems. He has since worked on ways to enhance how musicians and audiences experience spatial music and sound art. \n  \nYao Hsiao: Empress Luo\nEmpress Luo is a mixed-media electroacoustic composition inspired by the historical and literary figure Zhen Mi\, whose image is intertwined with the Luo River Goddess depicted in Cao Zhi’s poetic work Ode to the Nymph of the Luo River. Drawing from Peking opera traditions associated with this narrative\, the piece explores themes of political power\, gendered violence\, and silenced agency through an integration of live voice\, processed sound\, and gestural control.\nThe work incorporates melodic and expressive references to the Peking opera Ode to the Nymph of the Luo River\, particularly scenes associated with the guqin and the lament Eighteen Songs of a Nomad Flute\, historically attributed to Cai Wenji. These materials are recontextualized within an electroacoustic framework to highlight parallels between women whose lives were shaped by forced displacement\, marriage\, and warfare.\nA Wacom tablet is employed as a gestural controller\, functioning both as a performative interface and as a symbolic extension of the protagonist’s corporeal presence. Through touch-based interaction\, the performer shapes selected sonic parameters in real time\, evoking both the physical posture of guqin performance and the imagined divine authority of the Luo River Goddess. 
This interface mediates control between the performer and the system\, reflecting fluctuating degrees of autonomy and constraint.\nThe sonic structure combines live vocal performance with pre-recorded and live-triggered audio materials. At times\, the voice assumes a dominant role; at others\, it is fragmented\, processed\, or submerged within the playback system. This dynamic relationship mirrors the tension between personal expression and externally imposed forces\, suggesting a trajectory from presence and agency toward erasure.\nTextual fragments drawn from classical Chinese poetry appear within the work\, referencing fraternal conflict\, political rivalry\, and lamentation. Through the interaction of voice\, gesture\, and electronic sound\, Empress Luo reflects on historical narratives of loss and power while reimagining them within a contemporary performance context. \nAbout the artist\nYao Hsiao is a performer-composer and voice artist from Taiwan\, specializing in music\, theater\, and multimedia art. They are the First Prize winner of the 2025 SEAMUS Student Commission Competition for Daiyu\, and a finalist in the 2024 Sweetwater/SEAMUS Student Commission Competition for Consort Yu. Hsiao has performed at international festivals including NIME\, SEAMUS\, ICMC\, NYCEMF\, MOXsonic\, EMM\, SPLICE\, CampGround\, and Click Fest.\nThey hold a Master of Music in Composition from Indiana University and are pursuing a Ph.D. in Data-driven Music Performance and Composition at the University of Oregon under Jeffrey Stolet\, where they also serve as a Graduate Employee.\nInspired by literature—from Western poetry to Chinese verse and Japanese haikus—Hsiao creates interdisciplinary works that blend traditional vocal techniques\, Peking and Yue Opera\, and Chinese dance with live electronics\, reflecting their cross-cultural and technological vision. \n  \nZhixin Xu and Yunze Mu: La Nuit Bleue\nLa nuit bleue is a piece written for solo harpsichord and live electronics. 
After three years of harpsichord study\, I had a strong desire to write a piece for harpsichord and live electronics. After analyzing the spectrum of the harpsichord sound and looking through pieces such as Saariaho’s Jardin Secret II and Cage’s HPSCHD\, I realized that live spectral processing of this kind of idiophonic sound would be a big challenge because of its broad frequency distribution. So\, I decided to use both fixed sounds and live processed sounds in the electronic part. Jardin Secret II and HPSCHD inspired me greatly while I was looking for sounds for the electronics. Both contain noisy\, glitchy sounds in their tape parts that are homogeneous with the harpsichord sound in some respects; although somewhat radical for the time they were composed\, they work well with the harpsichord. With this idea\, I set the timbral character of this piece. \nAbout the artist\nZhixin Xu is a composer\, sound artist and computer music researcher based in Shanghai\, China. His compositions often involve electronics\, sometimes generated by software he develops himself. Much of his recent music has focused on exploring how purely computer-generated sound materials can be used along with musical instruments and purely acoustic sounds. His music and multimedia works have been heard in the U.S.\, Europe\, and Asia at many events\, including the ICMC and SEAMUS conferences.\nXu holds a Doctor of Musical Arts degree from the University of Cincinnati’s College-Conservatory of Music\, where he studied with Mara Helmuth\, and earlier degrees from CCM and the Shanghai Conservatory of Music. He is now an assistant professor at Shanghai Jiao Tong University. His compositions are available on the ABLAZE label. \n  \nCecilia Suhr: Resonant Thresholds\nResonant Thresholds explores the liminal space between human expression and technologically mediated sound. 
Structured around a fixed audio score\, the work unfolds as a slowly transforming audiovisual environment in which live violin performance interacts with real-time electronic processing. Noise\, resonance\, and breath-like textures blur distinctions between acoustic intimacy and digital vastness\, allowing the materiality of sound to become porous and unstable. Through structured live comprovisation (composed improvisation)\, the performer actively shapes the unfolding sonic landscape\, while the processed audio simultaneously generates an evolving visual score that functions as a symbolic translation of sound. The work invites listeners to inhabit a threshold between perception and imagination\, where meaning emerges through the continuous negotiation between composed structure\, live performance\, and technological extension. \nAbout the artist\nCecilia Suhr is an award-winning intermedia artist\, multimedia composer\, researcher\, author\, and multi-instrumentalist (violin\, cello\, voice\, piano\, bamboo flute). Her honors include the Pauline Oliveros Award (IAWM)\, a MacArthur Foundation DML Grant\, the American Prize (Honorable Mention)\, Global Music Awards\, Best of Competition from BEA\, among other distinctions. Her work has been presented at ICMC\, SEAMUS\, NYCEMF\, EMM\, SCI\, ACMC\, Mise-En\, MoXsonic\, and many more. She is a Full Professor at Miami University Regionals. \n  \nFrancesco Perissi and Giovanni Magaglio: X6 – Hexaphonic Spatialized Guitar\n\nThe “X6 – Hexaphonic Spatialized Guitar” project is about an augmented electric guitar designed for 6.1 channel spatialization. With the X6 setup and a special breakout cable\, it is possible to manage the sound from a hexaphonic pickup\, which separates the guitar signal into six independent channels\, one for each string. These signals are sent to a computer\, where Max/MSP processes them in real time with filters\, loops\, sound manipulation\, and spatial projection. 
The first version of the project had a fixed structure controlled by a sequencer that automated the filters. In the latest version\, a flexible multi-channel filter matrix has been added\, together with inputs for voice\, electronic instruments\, and samples. This makes the performance more open and improvisational\, allowing for control over time\, sound layering\, spatial gestures\, and vocal or electronic transformations. The idea is to build a self-made hyper-instrument where the performer and the algorithms influence each other\, creating electroacoustic music distributed in space and combining different musical practices. The newest patch\, version 20\, also uses artificial intelligence. With machine learning tools (FluCoMa)\, the system can recognize instrumental gestures and automatically change filter settings. With neural synthesis (RAVE)\, it can modify the sound of each string by acting on the latent spaces of the model\, producing deep timbral changes. The project also includes an interactive audio-visual part made with the software TouchDesigner\, where the screen is divided into six sections that represent the six guitar strings. The visuals are generated with AI using prompts inspired by Renaissance painting\, but mixed with modern themes such as social distortion\, bias\, and the perceptual effects of today’s hyper-technological world. Overall\, the concept points to a kind of “second Renaissance.” It suggests that we are living in a new era in which imagination is shaped not by traditional art forms or systems of patronage\, but by digital technology. This new technè inaugurates unprecedented creative possibilities\, while also raising ethical\, cognitive\, and epistemological questions that we are only beginning to grasp. \nAbout the artists\nFrancesco Perissi is a composer\, guitarist\, and sound engineer based in Florence. 
He teaches Computer Music at the “Maderna Lettimi” Conservatory in Cesena and is the creator of the “X6” project for hexaphonic spatialized guitar\, as well as the founder of “match”\, a meeting dedicated to electroacoustic improvisation. His research explores the expressive potential of technology in music\, with a focus on the relationship between instruments and sound spatialization. Using interactive devices\, multichannel systems\, and real-time processing\, he creates works for electronic music\, installations\, and live performance\, blending contemporary languages with avant-pop influences and emphasizing the relationship between body\, gesture\, and space.\nGiovanni Magaglio is a sound and visual artist whose work centers on concrete sound\, timbral transformation\, and the perception of acoustic space. He creates layered soundscapes that invite immersive and active listening. He teaches Multimedia at the Conservatory of Florence and works across installations\, theater\, and audiovisual productions for short and feature films. His practice investigates the interplay between image\, sound\, and perceptual space\, shaping sensory environments where reality and representation intersect.
URL:http://icmc2026.ligeti-zentrum.de/event/lunch-concert-4a/
LOCATION:Hamburg University of Technology\, Building I\, Audimax 2\, Denickestraße 22\, Hamburg\, 21073\, Germany
CATEGORIES:14-05,Concert,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260514T180000
DTEND;TZID=Europe/Amsterdam:20260514T190000
DTSTAMP:20260505T121343
CREATED:20260415T122709Z
LAST-MODIFIED:20260421T201118Z
UID:10000123-1778781600-1778785200@icmc2026.ligeti-zentrum.de
SUMMARY:[Off-ICMC] Radioballett | Körperfunkkollektiv: "Fragment"
DESCRIPTION:Photo: Felix Konerding\n  \nRadioballett is an interactive performance that draws you into another world through wireless headphones\, where you and other participants can actively shape the space together.  \nThe piece “Fragment” explores the boundaries between private and public life through human experiences in both real and virtual worlds. It invites everyone to reflect on the balance between digital and “offline” existence and to engage with the interplay between social interaction and online networks. \nregistration required here \n  \nThe Off-ICMC\nMusic is what brings us together\, even when everything else pulls us apart.\nMusic everywhere—it is part of our everyday lives. And yet\, we’re hearing it performed live on analog instruments less and less. Instead\, it often reaches us through speakers or headphones\, as files\, from the cloud. What does music mean to you? What does it sound like today? Where does it begin—and where does it end?\nThe ligeti center invites you to listen more closely and discover new sounds—to explore\, experiment\, and play. This year\, ICMC HAMBURG 2026 revives an old tradition: the Off-ICMC\, a free and accompanying festival curated for the general public and anyone curious about computer music. \nAll Off-ICMC events are free of charge.  \n\n  \n \n 
URL:http://icmc2026.ligeti-zentrum.de/event/off-icmc-radioballet-korperfunkkollektiv-fragment/
LOCATION:Town Hall Square Harburg\, Harburger Rathausplatz 1\, Hamburg\, 21073\, Germany
CATEGORIES:14-05,Music,Off-ICMC
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260514T190000
DTEND;TZID=Europe/Amsterdam:20260514T210000
DTSTAMP:20260505T121343
CREATED:20260421T163025Z
LAST-MODIFIED:20260504T074930Z
UID:10000096-1778785200-1778792400@icmc2026.ligeti-zentrum.de
SUMMARY:Evening Concert 4B
DESCRIPTION:Concert 4B presents the full range of contemporary computer music in a chamber ensemble setting. Ensemble 404—Hamburg’s specialists in new music—navigates a program that spans from highly spatialized sound worlds to audiovisual metamorphoses.\nExperience how physical instruments meet the precision of algorithms\, creating new hybrid identities in the process. \nThis Evening Concert is open to the public. Those without a conference pass can purchase a ticket here. \n  \nProgram Overview\nUnforeseen Metamorphic\nJoshua Rodenberg and Fumiaki Odajima \nKryptobioza\nLidia Zielinska \nTide\, breath\nZihan Wang \nEverybody Loves Me\nHoward Kenty \nPresent-Day Jakuchu Series: Butterfly Pictures “Inachis io”\nNaotoshi Osaka \nComing and Vanishing \nYixuan Zhao \nZusammenspiel I\nJavier Alejandro Garavaglia \nVesscape\nDanni Zhao and Congren Dai \n  \nAbout the pieces & artists\nJoshua Rodenberg and Fumiaki Odajima: Unforeseen Metamorphic\nThis seven-minute acousmatic performance explores perception as a field where sound becomes a medium of transformation. The work begins with pure sine waves tuned in just intonation\, forming a low-intensity sonic layer that permeates the space rather than occupying the foreground. Slow modulation and close interval relationships generate micro beating and phase drift\, unfolding at the threshold of audibility and drawing attention to subtle shifts in listening.\nWithin this continuous membrane\, a second live system of modular synthesis enters as an autonomous partner. Instead of accompanying the sine field\, it negotiates with it\, introducing pulses\, harmonics\, and timbral pressure that can align\, destabilize\, or dissolve. The piece is shaped by interference\, emergent resonance\, and the physical behavior of sound in the room\, producing a shared acoustic field that changes moment to moment. 
\nAbout the artists\nJoshua Rodenberg is a sound and video artist based in Doha\, Qatar\, where he is Head of the Innovative Media Studios and Assistant Professor at Virginia Commonwealth University School of the Arts in Qatar. His practice connects art\, technology\, and environmental research\, translating natural oscillations and field data into live sonic and visual performance. In 2024 he received the VCU Quest Research Grant and participated in the Arctic Circle Artist Residency in Svalbard. His work has been presented internationally\, including at the International Computer Music Conference in Boston\, Haus 1 in Berlin\, and EAI ArtsIT 2025 in Dubai. \nFumiaki Odajima is a Tokyo- and Amami-based artist working with multichannel pure sine waves\, just intonation\, and long-timescale transformations to shape perceptual listening environments. He holds a BFA from The Ohio State University and an MFA from Virginia Commonwealth University. Recent projects focus on large-scale sine wave diffusion\, exploring interference\, micro beating\, and sound as material at sensory thresholds. Selected performances include Synthesis at ART SPACE BAR BUENA in 2024 and Re:Synthesis at Safi Heimlichkeit Nikai in 2024\, and he released Icecream Daydreaming in 2020 with the improvisational unit kani kani club. \n  \nLidia Zielinska: Kryptobioza\nCryptobiosis is a reversible\, temporary state of extreme reduction in the life activities of a composer\, as a response to unfavourable environmental conditions. \nAbout the artist\nLidia Zielinska (*1953) is a Polish composer\, professor emeritus of composition\, and director of the Electroacoustic Music Studio at the Poznan Academy of Music. She has received numerous awards for orchestral\, multimedia\, and electroacoustic works; has published books and papers and given guest lectures and summer courses in Europe\, both Americas\, Asia\, and New Zealand; and is vice-president of the Polish Society for Electroacoustic Music. 
\n  \nZihan Wang: Tide\, breath\nThis work integrates spatialised fixed-media electronic music with semi-improvised acoustic instrumental performance. Animated scores and sound scores are employed to guide performers and to synchronise their actions with the electronic sections. The compositional focus is spatial counterpoint\, which extends the interplay of traditional contrapuntal voice relationships into three-dimensional space. This approach generates perceptible parallels\, interweaving\, imitation\, and conflict between instrumental and electronic elements through the parameters of position\, distance\, diffusion\, and timbre. Spatial attributes therefore function as primary compositional parameters rather than post-production effects.\nThe work is inspired by reflections on the macro- and micro-structures of two kinds of sound: human crowds and natural environments. Through extensive field recording\, I observed a shared underlying principle: both soundscapes arise from the continuous accumulation and interaction of innumerable micro-sonic events\, producing macro-level shifts in energy\, fluctuations in density\, and emergent directional tendencies. For example\, footsteps\, conversations\, breathing\, and whispers in a crowd collectively form an ever-shifting granular timbre. Similarly\, natural sounds such as rain\, wind\, rivers\, and flocks of birds can exhibit comparable behaviours. This work seeks to establish a perceptual and structural connection between these two sound worlds through electronic composition. \nAbout the artist\nZihan Wang is an electroacoustic music composer\, film composer\, and sonic artist. He is currently a postgraduate research student at Monash University\, Melbourne\, Australia\, where his work investigates compositional strategies for ambisonics-based environments. 
His research engages with Robert Normandeau’s concept of timbre spatialisation and Denis Smalley’s theory of spectromorphology\, with a particular emphasis on timbre\, spatial articulation\, and electroacoustic composition. His creative practice includes fixed-media electroacoustic works\, sound installations\, animated score composition\, and film scoring. His work has been presented at venues and conferences including TENOR 2025 and the Melbourne International Film Festival (MIFF). \n  \nHoward Kenty: Everybody Loves Me\n“Everybody Loves Me” is a piece for voice\, percussion\, and live electronics that takes the words of Donald Trump as its only source material to depict a hellish kinetic nightmare. For this incarnation\, I would be the vocal performer\, and control the electronics onstage. I would need a percussion performer and the percussion itself to be provided by the festival. \nAbout the artist\nHowie Kenty is a Brooklyn-based composer and performer\, occasionally known by his musical alter-ego\, Hwarg. His music\, called “remarkable” with “astonishing poetic power” (Intl Compendium Prix Ars Electronica)\, is stylistically diverse\, encompassing ideas from contemporary classical\, electronic\, rock\, and ambient genres\, as well as sound art\, political issues\, and visual and theatrical elements. Howie is an Assistant Professor in Studio Composition at Purchase College. Listen at http://www.hwarg.com. \n  \nNaotoshi Osaka: Present-Day Jakuchu Series: Butterfly Pictures “Inachis io”\nIto Jakuchu (1716–1800) was a mid Edo period Japanese painter renowned for his brilliantly colored depictions of plants and animals. I have long been fascinated by his works. There was a time when I myself collected butterflies\, and I was deeply captivated by the designs and patterns on their wings. This piece is inspired by those wing patterns\, transforming their visual designs into musical imagery. 
Jakuchu also painted butterflies\, and with the idea of composing as if I myself were Jakuchu painting a picture\, I titled this work as part of my “Present-Day Jakuchu” series.\nWhen visual and auditory perception are viewed at a higher level of abstraction\, they share many common qualities. In this work\, the visual impressions of the butterfly are linked to the sounds and musical structure.\nInachis io (The European Peacock Butterfly) has eye-spot patterns on a reddish-brown ground\, reminiscent of a peacock’s feathers\, which gives the species its name. Although it is not found in North America\, South America\, or Oceania\, it is widely distributed across the Eurasian continent\, including Europe and Asia. Many butterflies of the Nymphalidae family are elegant in appearance\, and this species is no exception; it can be seen in many places. In the composition\, I developed the music around two motifs: the background coloration and the eye-spot patterns. Unlike my previous work\, this piece does not depict flight or resting behavior; instead\, it focuses solely on the coloration and patterns visible when the wings are fully spread.\nThis piece was originally written in 2023 for violin and piano. For this performance\, it has been newly expanded with an added electroacoustic part\, making this the premiere of the updated version. The electroacoustic materials were created as fixed media\, primarily using granular synthesis and FM synthesis. However\, the sound files are structured as passage-level cues\, and their playback timing is performer-controlled and triggered in real time. \nAbout the artist\nNaotoshi Osaka received his Master’s degree from Waseda University and\, after working at NTT Laboratories\, has pursued research and composition in electroacoustic music. His works have been selected for the ICMC five times\, and for the New York City Electroacoustic Music Festival (NYCEMF) three times. 
He served as President of the Japan Society for Sonic Arts (JSSA) from 2009 to 2018. He is currently a research fellow at Waseda University and Tokyo Denki University\, holds a Ph.D. in engineering\, and is Professor Emeritus at Tokyo Denki University. \n  \nYixuan Zhao: Coming and Vanishing  \nComing and Vanishing is an audiovisual work for solo flute and electronics that explores a transient and unstable phenomenon.\nThe flute interacts closely with the electronic layer through air sounds\, breath tones\, and extended techniques. Pitch and noise are deliberately blurred\, allowing the instrument to function not as a melodic foreground but as a fluctuating presence. The electronic part is primarily built from processed human whispers and breaths\, materials detached from linguistic meaning. Through subtle layering and diffusion\, the voices lose semantic clarity and become abstract sonic matter. Acoustic and electronic sound exist in a continuous state of mutual negotiation\, shaping and destabilizing one another in real time.\nThe visuals draw inspiration from traditional Chinese landscape painting while incorporating a surrealist sensibility. Through gradual transformations of light and shadow\, the imagery reveals and amplifies microscopic details within the sound. Rather than illustrating the music\, the visuals function as a parallel perceptual layer\, extending the listening experience into a spatial and visual field.\nSound and image are not merely layered media but reveal a dynamic process\, existing only within the persistent tension between appearance and disappearance\, presence and loss\, immediacy and dissolution. \nAbout the artists\nComposer: ZHAO Yixuan is a composer\, a lecturer at the Dept. 
of Music AI and Music Information Technology\, Central Conservatory of Music\, China\, and a visiting researcher at the Royal Birmingham Conservatoire\, UK.\nShe has been dedicated to exploring the practice of digital audio and artificial intelligence in music composition and collaborating with performers to search for more possibilities in technological performance environments. Her composition spans interactive music\, electroacoustic music\, contemporary music\, and new media art. \nVisual Designer: WU Shuangqi (/’su:ki/) is an inter-media creator and visual-physical experimenter engaged in visual media\, contemporary theatre\, physical improvisation\, visual design\, sound\, audiovisual\, photography\, editing\, etc.\nHer creations are mainly based on physical experience\, deconstructing and visually outputting the body and external information\, intending to explore the assembly\, pattern\, motivation and form in the algorithms of flesh and behaviour\, to gain extension in perversion and mutation. \n  \nJavier Alejandro Garavaglia: Zusammenspiel I\nComposition in which viola and clarinet are combined with spectral digital effects and multi-channel spatialization. The idea of “playing together” contained in the title in German was the starting point of the artistic working process. This is clearly noticeable from bar 1\, as the chosen pitches for both instruments are intertwined so that\, together with the real-time electronics\, all 3 create timbres that portray their fusion rather than the sound of each instrument alone. In addition\, the composition presents innovative aspects in terms of real-time digital effects\, for example\, the accumulation and evaporation of spectra of both instruments captured by the electronics in real time or the combination of different techniques (among others Ambisonics) responsible for the particular spatialization of the electronics. 
Moreover\, the composition is another example of the complete automation of the electronics\, a technique developed by the author for years.\nZUSAMMENSPIEL I was made possible thanks to the support and funding provided by MUSIKFONDS Deutschland. \nAbout the artist\nAward-winning composer\, violist\, sound artist and retired university music professor with a broad and interdisciplinary approach to digital art and related technologies. His work focuses primarily on various aspects of music/sound composition and performance supported by computing\, with a constant search for new sonic experiences combining new developments in computer-aided sound synthesis\, live interaction\, extended instrumental techniques and sound spatialisation. Compositions are performed/broadcast in Europe\, America and Asia in world-renowned concert halls/broadcasters and include electroacoustic music (acousmatic\, interactive\, multimedia)\, instrumental music (e.g.\, solo instrument\, ensemble & orchestra) and sound art (e.g.\, installations). Plenty of his acousmatic music can also be found on commercial CDs by Edition DEGEM\, Cybele\, EMF\, etc. \nInfo: https://tinyurl.com/JavierGaravaglia \n  \nDanni Zhao and Congren Dai: Vesscape\nThis work repeatedly performs the same action: pouring sound into a hollow system. \nThe breath of the flute is not treated as lyrical material\, but as a continuously failing act\, namely\, blowing\, gasping\, breaking\, and losing control. Pitches emerge again and again\, yet never settle. The electric bass introduces low-frequency pressure and inertia\, an irresistible downward pull that keeps the entire sound field at the edge of overload. \nA live electronic system analyses the performed sound using AI\, distributing features such as breath\, impact\, and pitch deviation across multiple “vessel” sound sources and visual entities. 
In its touring performance version\, the original vessel installation has been translated into an 8.1 spatial audio field\, allowing the acoustic presence and directional behavior of the vessels to be simulated through multichannel diffusion. These vessels are not metaphors for containers; they function as receivers of pressure\, being filled\, stretched\, and forced into vibration. The harder the music pushes\, the more unstable the vessels become; when the performer attempts to regain control\, the system exposes even more fractures. \nThe structure begins with an almost violent injection of energy\, gradually shifting into a direct confrontation between body and object. Unstable registers and microtonal deviations are continuously amplified; rhythm is fragmented into dense\, short bursts of broken gestures\, until the system briefly collapses. In the end\, sound is exhausted\, leaving only residual breath and unfinished pitch afterimages. \nThis is not a work about “generation”. It is a sustained experiment in pressure\, control\, capacity\, and limits. The system never truly responds to the performer; it merely records how pressure fails\, again and again. \nAbout the artists\nDanni Zhao is a Chinese composer and electronic music artist. She studies Electronic Music Composition at the Central Conservatory of Music\, where she received the National Scholarship and recommendation for postgraduate study. Her works have won awards at international composition and electronic music competitions and have been presented at events such as ICMC and major music festivals. She is active in concert music\, film\, documentary\, theatre\, and game scoring. \nCongren Dai is a PhD candidate at the Central Conservatory of Music\, specialising in Music AI. He holds an MRes in AI and Machine Learning from Imperial College London and an MSc in Data Science from King’s College London. 
Having interned in computer vision at Google and engaged in music AI projects at Huawei\, he now applies Large Language Models to musical score understanding and instrument recognition in his research\, alongside contributions to continual learning. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/evening-concert-4b/
LOCATION:Friedrich-Ebert-Halle\, Alter Postweg 34\, Hamburg\, 21075\, Germany
CATEGORIES:14-05,Concert,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260514T213000
DTEND;TZID=Europe/Amsterdam:20260514T233000
DTSTAMP:20260505T121343
CREATED:20260421T163434Z
LAST-MODIFIED:20260428T081807Z
UID:10000069-1778794200-1778801400@icmc2026.ligeti-zentrum.de
SUMMARY:Club Concert 4C
DESCRIPTION:Program Overview\nMerzmania\nGintas Kraptavicius \nImprovisation for Spheres \nCalvin McCormack \nMarsia 3\nJonathan Impett \noscheat\nMoritz Wesp\, Eric Haupt and Victor Gelling \nThe Skin of the Earth: Fragments\nPaulo C. Chagas \nThe Long Now III \nCat Hope and Juan Parra Cancino \nTape Microscopy\nAndrew Loveless \n  \nAbout the pieces & artists\nGintas Kraptavicius: Merzmania\nMerzmania is an electroacoustic live-electronics performance on an instrument of my own making: a computer running Plogue Bidule\, with a MIDI controller assigned to VST plugins. All software parameters are controlled and altered live in real time during the performance\, using the knobs and sliders of the MIDI controller mapped to the plugin parameters. The performance is built entirely from synthesized sounds; no samples\, pre-recorded material\, or field recordings are used. Merzmania connects classical music skills with today’s noise music (a slight allusion to the noise icon Merzbow). Its main playing method is real-time interaction with the computer\, which I use in all my live compositions: I treat the computer as a musical instrument just like any other acoustic instrument. Like a guitar. Onstage I get the same emotional feeling playing the computer as playing any other acoustic or electric instrument. The main thing in a live performance is energy and emotion\, as at rock’n’roll concerts. Merzmania features the motif of the Lithuanian folk song “Teka\, teka šviesi saulė” (“The sun is rising\, the bright sun is rising”). \nAbout the artist\nGintas K (Gintas Kraptavičius) is a Lithuanian sound artist and composer living and working in Lithuania.\nHe currently works in the field of digital experimental and electroacoustic music\, making music for films and sound installations. His compositions are based on granular synthesis\, live electronics\, hard digital computer music\, and small melodies. 
He has collaborated with sound artists including @c\, Paulo Raposo\, Kouhei Matsunaga\, David Ellis\, and many others\, and has released numerous records on labels such as Cronica\, Baskaru\, Con-v\, Copy for Your Records\, Bolt\, Creative Sources\, Sub Rosa\, and others.\nA member of the Lithuanian Composers Union since 2011\, he has presented and performed his works at international festivals\, conferences\, and symposiums including Transmediale.05\, Transmediale.07\, ISEA2015\, ISSTA2016\, the IRCAM Forum Workshops 2017 and 2025 (Paris)\, xCoAx 2018\, ICMC 2018\, 2022\, and 2025\, ICMC-NYCEMF 2019\, NYCEMF 2020–2025\, Ars Electronica Festival 2020\, 2023\, and 2024\, Ars Electronica Forum Wallis 2025\, and FARM 2025.\nHe has been artist in residence at DAR 2011 and 2016\, MoKS 2016\, and KKKC 2023.\nHe won the II International Sound-Art Contest Broadcasting Art 2010\, Spain\, and the University of South Florida New-Music Consortium 2019 International Call for Scores in the electronic composition category. \n  \nCalvin McCormack: Improvisation for Spheres\nImprovisation for Spheres is a live electronic work for two custom spherical controllers with reactive visuals. Each sphere combines surface-embedded capacitive touch pads with an inertial measurement unit\, wirelessly transmitting sphere orientation and touch data. Each sphere sits in a chalice cradle with a ring of touch sensors embedded around the rim. The spherical form factor affords intuitive spatialization: the sphere’s rotation corresponds to the sound’s position in ambisonics\, making spatial movement as immediate and embodied as pitch selection. Touch pads support expressive melodic and harmonic performance\, with skin-touchpad contact area allowing dynamic and timbral expression. The work explores the sphere as both instrument and spatializer\, where single gestures unite melodic\, timbral\, and spatial control. 
This audiovisual improvisation demonstrates how spatialization can be performed artistically rather than mixed\, elevated from post-production to real-time expression. \nAbout the artist\nCalvin McCormack is an MST student at the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford University. His research focuses on accessible HCI and inclusive design for musical applications. He also conducts research in auditory neuroscience and plays jazz guitar. \n  \nJonathan Impett: Marsia 3\nThis is the final piece of a series written for the installation Apollo e Marsia in 2024. This work expands the moment in time represented by Tintoretto in his painting La gara tra Apollo e Marsia (c.1545). Apollo\, playing a bowed instrument with sympathetic strings\, has been challenged by the satyr Marsia\, playing a woodwind instrument\, to see who is the greater musician. Ovid’s retelling of the story describes a terrible end for Marsia\, but in the moment depicted by Tintoretto both musicians are waiting for the judgement of Midas\, both trying to remember and assess what they and their competitor have just played. \nThe piece is therefore a play on the nonlinearity of memory under stress as both try to replay the performances in their mind. Moments are recalled\, replayed or intrude\, but are always changing in their reconstruction. Memories of themselves and of the other constantly modulate each other. New constructs emerge in memory through this process\, and obsessive recall generates attractors and mirrors; we know from recent neuroscience that remembering and imagining are essentially the same reconstructive process. \nAt its root\, the material all derives from two hymns to Apollo inscribed in stone at Delphi\, arguably the earliest remaining instances of music notation\, and likewise fragmented by erasures. Across time\, musicians have attempted to reconstruct this partially-lost memory in different ways\, creating new formations in the process. 
\nHere\, the Delphic material is subject to layers of nonlinear memory process\, implemented in Open Music as forward- and backward-moving wave phenomena\, sweeping up emergent patterns as they develop. This produces a score that often requires the performer to assimilate a polyphony of musical materials and physical behaviours as layers of memory. Analogous processes are used in the recorded and live sound processing\, largely through physical modelling\, cross-resynthesis and filtering – digital and analogue. This is in turn heard through a model of the stringed instrument of Marsia’s opponent\, Apollo. An AI brings the live performance into relation with the behaviours\, memory and projection of both competitors. \nAbout the artists\nJonathan Impett (1956) is a composer\, trumpet player and writer. His work is concerned with the discourses and practices of contemporary musical creativity\, particularly the nature of the technologically-situated musical artefact. Activity in the space between composition and improvisation has led to continuous research in the areas of interactive systems\, interfaces and modes of collaborative performance. Recent works combine installation\, live electronics and computational models with notated and improvised performance\, using fluid dynamics as a unifying behavioural model. A new project Anamnesis takes a radical approach to AI\, identifying creative paths implied but unnoticed. He leads the research group “Music\, Thought and Technology” at the Orpheus Institute\, Ghent. \nRichard Craig (alto flute) was born in Glasgow. He studied at the Royal Conservatoire of Scotland and the Conservatoire de Strasbourg. He performs with groups such as Musikfabrik\, Klangforum Wien\, ELISION and in Scandinavia with CAPUT\, Kammarensemblen. He has released two solo discs of contemporary works\, Vale and Inward\, and recorded for Another Timbre\, Wergo\, FHR\, Métier\, as well as SWR\, BBC and Finnish Radio. 
Not only a celebrated advocate of contemporary music\, his recent album of the Telemann Fantasias and his improvisations was lauded as “bold\, beautiful and clever” (Gramophone). He is also an improviser\, composer and teacher\, currently Director of Performance at the University of Edinburgh. \n  \nMoritz Wesp\, Eric Haupt and Victor Gelling: oscheat\nThis contribution presents oscheat\, a work-in-progress OSC-based interface\, designed to extend ensemble communication beyond conventional musical gestures. By providing a modular and user-friendly environment\, oscheat allows performers to directly control each other’s digital instruments\, enabling novel forms of interaction\, role-sharing\, and emergent musical structures in real time.\nOur instrumental system is structured into three functional sections reflecting core musical building blocks: synthesizers for melodic and harmonic material\, sequencers for rhythmic organization\, and samplers for vocal and sound-based material.\nAdditional functionality includes real-time MIDI recording and looping\, pitch mapping with support for alternative tunings\, spatialization\, and global macro controls for large-scale structural manipulation. Each performer manages their instruments individually while making the controls accessible through oscheat.\nMoritz Wesp\, Eric Haupt and Victor Gelling are playing an eight-minute improvisation\, demonstrating oscheat’s potential for rapid musical exchange\, shared authorship\, and collective decision-making. By exposing critical control parameters to all participants\, the interface encourages social negotiation and flexible role allocation\, making it relevant for both creative research and educational contexts. \nAbout the artists\nMoritz Wesp lives in Cologne (GER) and plays trombone\, virtual trombone and other instruments that he designs\, programs and builds. 
As an improviser he works with different ensembles such as Mariá Portugal Erosao\, Matthias Muche’s Bonecrusher\, and the Simon Rummel Ensemble. He also composes music and is part of the Audio-VR project SONA. \nEric Haupt is a guitarist and composer working in experimental music and punk. He completed his Bachelor of Music at the HfMT Cologne in 2018. He is a founding member of the ensembles Now My Life Is Sweet Like Cinnamon and Lawn Chair\, as well as the initiator of the experimental game-show performance Sport1. His music has been presented at festivals throughout Europe\, and his collaborations include the internationally renowned producers Olaf O.P.A.L. and Chris Coady. His punk compositions have been broadcast on international radio stations such as BBC Radio 6 Music. \nVictor Gelling is an improviser and composer who uses stringed instruments including\, but not limited to\, upright bass\, tenor banjo\, and pedal-steel and non-pedal-steel guitars\, in addition to pedals\, synthesizers\, and barely working self-coded computer programs\, to create sounds. Their work spans genres from jazz to noise to electric cowboy songs to complex music\, culminating in their large-ensemble works with Trash & Post-Chaotic Music\, their alt-country/post-punk alias Slowklahoma\, solo works\, and their playing in the Jorik Bergman Trio. \n  \nPaulo C. Chagas: The Skin of the Earth: Fragments\nAbout the artists\nPaulo C. Chagas is a Brazilian-American composer and Professor of Composition at the University of California\, Riverside. With over 220 works across orchestral\, chamber\, electroacoustic\, audiovisual\, and multimedia formats\, his work integrates advanced technology and expressive depth. He studied in Brazil\, Belgium\, and Germany\, earning a Ph.D. from the Université de Liège\, and was composer-in-residence at the WDR Electronic Studio. 
A Fulbright Scholar (Berlin\, 2022–23) and ICMA board member\, he is widely performed and published.\nhttps://solo.to/paulocchagas \nBrazilian soprano Adriane Queiroz trained in Pará\, Missouri\, and Vienna. Since 2002/03 she has been a member of the Staatsoper Unter den Linden\, performing roles such as Pamina\, Micaëla\, Susanna\, and Liù. She has appeared at major venues including the Hamburg State Opera\, Semperoper Dresden\, and Wiener Festwochen\, and in concerts at the Musikverein and Konzerthaus Vienna. Her repertoire spans Mozart to contemporary works\, including Schönberg’s Erwartung and Nono’s La fabbrica illuminata\, with recent premieres under Sir Simon Rattle.\nwww.adrianequeiroz.com \n  \nCat Hope and Juan Parra Cancino: The Long Now III  \nThis is a scored work for live modular synthesiser performance with a backing track. It explores the potential of digital notation for modern electronic instruments\, in this case the contemporary modular synthesiser. It is named after the Long Now Foundation\, which aims to provide a counterpoint to today’s accelerating culture by encouraging long-term thinking and fostering responsibility in the framework of the next 10\,000 years. Music provides complex answers to the question of “How Long is Now?”\, and in this work\, a slow descent by the performer into very low sound\, where pitch is either uncontrollable or almost inaudible\, reflects the limits of human action in and perception of sound as it passes through time\, highlighting that there may be other ways to listen\, and other ways to experience our passing through time.\nThe fixed-media part of this piece was created at EMS in Sweden using the Buchla 200’s 4 x 259 waveform generators\, and the score is read on the Decibel ScorePlayer\, which also produces the fixed-media part. 
\nAbout the artists\nJuan Parra Cancino studied Composition at the Catholic University of Chile and Sonology at the Royal Conservatoire The Hague\, earning a Master’s degree focused on electronic music composition and performance. In 2014\, he completed his PhD at Leiden University with his thesis “Multiple Paths: Towards a Performance Practice in Computer Music.” Parra has been a research fellow at the Orpheus Institute since 2009. \nCat Hope is an award-winning Australian composer who focuses on the extremes of sound – from extreme noise to barely audible delicacy. Her works have been performed worldwide by ensembles such as Yarn Wire (US) and the BBC Scottish Symphony (UK)\, and her works are published internationally on labels such as Hat (Hut) Art\, with her monograph CD Ephemeral Rivers winning the German Critics Prize in 2017. Cat is a represented composer with the Australian Music Centre\, and her music is published by Material Press. Her first opera\, Speechless\, won the Best New Dramatic Work in the 2020 Art Music Awards. \n  \nAndrew Loveless: Tape Microscopy\nThis performance explores the musical potential of playback speed manipulation\, controlled feedback\, and layered sound material using a dual-transport digital tape instrument. The source of the sound material is the distinct\, high-pitched whine of a CRT television’s flyback transformer\, which was chosen for its nearly inaudible high-frequency energy and analog character. The sound is heard briefly at normal speed before being slowed almost to a halt to reveal its hidden textures. Inspired by the tape experiments of pioneer Éliane Radigue\, this performance utilizes two virtual tape transports that interact through carefully tuned speed relationships\, harmonizing and phasing against one another. Live overdubbing and feedback routed between the transports create new layers and delays\, shaped by the performer’s listening and interactions. 
A real-time visualization shows the speed of each transport’s spinning reels\, adding an engaging layer that helps the audience follow the unfolding sounds. \nAbout the artist\nAndrew Loveless is a graduate student in Music Technology at the Georgia Institute of Technology. Their work focuses on performance-centered instrument design and improvisation\, with an emphasis on preserving tape music techniques and making them more accessible through hands-on\, educational tools. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/club-concert-4c/
LOCATION:ligeti center\, Production Lab (10th floor)\, Veritaskai 1\, Hamburg\, 21079\, Germany
CATEGORIES:14-05,Club Concert,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260515T183000
DTEND;TZID=Europe/Amsterdam:20260515T213000
DTSTAMP:20260505T121343
CREATED:20260415T122932Z
LAST-MODIFIED:20260417T115457Z
UID:10000124-1778869800-1778880600@icmc2026.ligeti-zentrum.de
SUMMARY:[Off-ICMC] Sound Bar: "Sono\, ergo sum." – I sound\, therefore I am.
DESCRIPTION:Photo: Soundbar Kollektiv\n  \nThe Soundbar is a performative pop-up bar that brings together socializing\, drinks\, and jam sessions. It serves as a workshop and experimental space\, offering an environment for exploring sound\, finding inspiration\, and connecting with others. What does your favorite drink sound like? Join us for Soundbar’s vibrant sound journeys. Let your glasses sing and discover new levels of sensory experience at the bar. \nNo registration required. \n  \nThe Off-ICMC\nMusic is what brings us together\, even when everything else pulls us apart.\nMusic everywhere—it is part of our everyday lives. And yet\, we’re hearing it performed live on analog instruments less and less. Instead\, it often reaches us through speakers or headphones\, as files\, from the cloud. What does music mean to you? What does it sound like today? Where does it begin—and where does it end?\nThe ligeti center invites you to listen more closely and discover new sounds—to explore\, experiment\, and play. This year\, ICMC HAMBURG 2026 revives an old tradition: the Off-ICMC\, a free and accompanying festival curated for the general public and anyone curious about computer music. \nAll Off-ICMC events are free of charge.  \n\n  \n \n 
URL:http://icmc2026.ligeti-zentrum.de/event/off-icmc-sound-bar-sono-ergo-sum-i-sound-therefore-i-am/
LOCATION:ligeti center\, Production Lab (10th floor)\, Veritaskai 1\, Hamburg\, 21079\, Germany
CATEGORIES:15-05,Music,Off-ICMC,Performance
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260515T193000
DTEND;TZID=Europe/Amsterdam:20260515T203000
DTSTAMP:20260505T121343
CREATED:20260415T123232Z
LAST-MODIFIED:20260417T115504Z
UID:10000125-1778873400-1778877000@icmc2026.ligeti-zentrum.de
SUMMARY:[Off-ICMC] Experimental Reading: Harburg. Das Buch – Excursions in Voice\, Photo & Music (German)
DESCRIPTION:Credits: Junius Verlag\n  \nAuthor Bärbel (Bascha) Wegner\, photographer Steven Haberland\, and musician Clarks Planet bring together text\, images\, and sound in a multi-layered exploration of the city of Harburg. Storytelling meets improvised music\, photographs interact with sound and field recordings.  \nThe familiar takes on new shapes\, improvisation unfolds—opening up fresh perspectives on the neighborhood\, not least from the vantage point of the Production Lab on the 10th floor.  \nIn German only. \nRegistration required here. \n  \nThe Off-ICMC\nMusic is what brings us together\, even when everything else pulls us apart.\nMusic everywhere—it is part of our everyday lives. And yet\, we’re hearing it performed live on analog instruments less and less. Instead\, it often reaches us through speakers or headphones\, as files\, from the cloud. What does music mean to you? What does it sound like today? Where does it begin—and where does it end?\nThe ligeti center invites you to listen more closely and discover new sounds—to explore\, experiment\, and play. This year\, ICMC HAMBURG 2026 revives an old tradition: the Off-ICMC\, a free and accompanying festival curated for the general public and anyone curious about computer music. \nAll Off-ICMC events are free of charge.  \n\n  \n \n 
URL:http://icmc2026.ligeti-zentrum.de/event/off-icmc-experimental-reading-harburg-das-buch-excursions-in-voice-photo-music-german/
LOCATION:ligeti center\, Production Lab (10th floor)\, Veritaskai 1\, Hamburg\, 21079\, Germany
CATEGORIES:15-05,Music,Off-ICMC,Performance
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260515T200000
DTEND;TZID=Europe/Amsterdam:20260515T220000
DTSTAMP:20260505T121343
CREATED:20260421T171512Z
LAST-MODIFIED:20260430T223928Z
UID:10000177-1778875200-1778882400@icmc2026.ligeti-zentrum.de
SUMMARY:Evening Concert 5B (Lübeck)
DESCRIPTION:Program Overview\nImprovising Machine #7325: Inside My Trumpet\, Again\nJeff Kaiser \nThe Letter\nMinho Kang \nMoloch whose mind is pure machinery!\nEric Lyon \nTidal Unit for Sonic Activities\nIlia Viazov \nRhythmic Traces | Twisted Electronics\nNicola Leonard Hein \nFound Violin x Aromantic Hobby \nDong Zhou \nTokens & Strings: an improvisation between an electric guitarist and a local LLM\nOlivier Jambois \n  \nAbout the pieces & artists\nJeff Kaiser: Improvising Machine #7325: Inside My Trumpet\, Again\n“Improvising Machine #7325: Inside My Trumpet\, Again” places the audience inside a trumpet\, exploring the instrument’s interior sonic world through an immersive human–machine improvisation system. The work is built from an extensive\, purpose-built sample library captured by placing microphones deep within the instrument. These samples document the mechanical sounds and embodied actions of trumpet performance without the instrument being played traditionally—collections of the sound of valves descending\, springs releasing\, air being compressed and released by slides\, valve caps loosening\, spit-valve gurgles\, and a range of non-tonal lip\, air\, and tongue sounds produced through the mouthpiece and leadpipe. \nTwenty-eight autonomous virtual agents (“robots”)\, authored by the composer in Max/MSP and hosted in Ableton Live\, inhabit a 360-degree ambisonic field surrounding the audience. Each agent draws from its own subset of the sample library and listens to the live trumpet performance in real time. Their behaviors fluctuate between responsive and indifferent\, generating shifting environments that range from highly chaotic to unexpectedly calm. As a result\, the improvising performer becomes entangled with a machine ensemble that both reflects and subverts the human gestures\, creating a continuously changing dialogue between human and technological agents. 
\nAbout the artist\nJeff Kaiser is a trumpet player\, media technologist\, and scholar. Classically trained as a trumpet player and composer\, Kaiser now takes an integrative\, systemic view that involves his traditional instrument\, emergent technology (in the form of custom interactive/generative software and hardware interfaces)\, space\, and audience: all being critical and integral participants in his performances. He gains inspiration and ideas from the rich history of experimental improvisation and composition\, as well as cognitive science\, and the vast timbral and formal affordances provided by combining traditional instruments with new and repurposed technologies. The roots of his music are firmly in the experimental traditions within jazz\, improvisation\, and Western art music practices. Kaiser is currently Associate Professor of Music Technology and Composition at the University of Central Missouri. \nMore information at https://jeffkaiser.com/ \n  \nMinho Kang: The Letter\nThe Letter is a work of consolation created using an FFT Channel Vocoder with an Additive Synthesizer. \nHistorically\, the vocoder was developed during wartime to enable communication among allies. It reduces wideband speech to a narrower band for transmission and then reconstructs it at the receiver. In short\, a vocoder sends important words over distance and makes their faint traces audible again.\nFor a composer\, creating music is much the same. I keep listening to people and the world\, their voices. Then\, I compress\, interpret\, and reassemble those words in my own terms and offer them back as a piece.\nUnlike the vocoder’s original purpose\, in a time when war is no longer shocking news\, I wanted to use this technology to carry comfort. The lyrics come from a poem I wrote during my military service to endure a hard period (not in combat). This piece does not present a political agenda; it is a letter to anyone facing painful circumstances\, on any side\, in any degree. 
\nTechnically\, I aimed to design a vocoder with greater precision than a conventional channel vocoder. Instead of using bandpass filters\, I applied Fast Fourier Transform (FFT) analysis to collect more detailed and accurate amplitude information\, which allowed clearer rendering of vowel formants. This approach led to the creation of a Max for Live (M4L) FFT Channel Vocoder patch.\nI also developed an Additive Synthesizer M4L patch capable of producing a wide spectrum of sounds\, from pure sine waves to noise. When combined with the vocoder\, this synthesizer allows the clarity and harmonicity of speech to change according to the lyrics. Since the text relates to the transformation of light\, I used this Additive Synthesizer to achieve a tone painting that reflects those luminous changes. \nAbout the artist\nMinho Kang is a Korea-born composer and computer musician. His artistic interests\, which began in popular music and moved into contemporary music\, have expanded into electronic music at the intersection of technology and art. Drawing on introspective reflection and close observation of the world\, he brings diverse imaginings into his works.\nHis music has been presented at conferences and festivals including SEAMUS\, ICMC\, and the TurnUp Multimedia Festival. He completed his bachelor’s degree at Indiana University\, where he studied composition with Jeremy Podgursky\, Aaron Travers\, P. Q. Phan\, David Dzubay\, and Don Freund\, and electronic music with John Gibson and Chi Wang at the Center for Electronic and Computer Music. \n  \nEric Lyon: Moloch whose mind is pure machinery!\nAllen Ginsberg’s poem Howl was published in 1956\, the same year as the Dartmouth Summer Research Project on Artificial Intelligence. The two events portend seemingly incompatible futures that nonetheless are both with us now: a bursting forth of cultural chaos in an “armed madhouse” and the technocratic reduction of intelligence to code. 
The poem’s ritualistic\, repetitive rant about Moloch inspired this performance\, a tone poem that derives its sounds from two main sources – AI-generated music and the OB-Xd virtual analog synthesizer VST plugin manipulated using the Slewable Utility for Random Parameters (SLURP) designed by the composer. The performance interface consists of a Korg nanoKONTROL2 unit and the Google MediaPipe face landmarker. \nAbout the artist\nEric Lyon is a composer and audio researcher focused on high-density loudspeaker arrays\, dynamic timbres\, virtual drum machines\, and performer-computer interactions. His audio signal processing software includes “FFTease” and “LyonPotpourri.” He has authored two computer music books\, “Designing Audio Objects for Max/MSP and Pd\,” a guidebook for writing audio DSP code for live performance\, and “Automated Sound Design\,” a book that presents technical processes for implementing oracular synthesis and processing of sound across a wide domain of audio applications. He has written extensively about the possibilities of multichannel spatial audio. In 2016-17\, Lyon was guest editor for Computer Music Journal volumes 40(4) and 41(1)\, covering various aspects of High-Density Loudspeaker Arrays (HDLAs). \nIn 2015-16\, Lyon architected both the Spatial Music Workshop and Cube Fest at Virginia Tech to support the work of other artists working with HDLAs. In 2025 he co-created the Spatial Audio Tidepool to provide technical instruction for creative uses of high-density loudspeaker arrays. Lyon’s compositional work has been recognized with a ZKM Giga-Hertz prize\, MUSLAB award\, the League ISCM World Music Days competition\, and a Guggenheim Fellowship. Lyon teaches in the School of Performing Arts at Virginia Tech\, and is a Faculty Fellow at the Institute for Creativity\, Arts\, and Technology. \n  \nIlia Viazov: Tidal Unit for Sonic Activities\nPerformance-presentation of tusa (Tidal Unit for Sonic Activities). 
Tusa is a framework for the Tidal Cycles live-coding environment that binds together different parts of the application in one Bash executable. It is an attempt to complete Tidal Cycles\, expanding it into a software DMI. It seeks to fulfill essential needs during performance with the environment\, keeping the setup very minimal yet sturdy\, while remaining modular and extendable. The framework gives the user access to the interpreter\, text editor\, reference window\, and server during live-coding practices.\nThe performance focuses on live-coding improvisation with machine learning tools\, using spatialisation synthesis techniques. \nAbout the artist\nIlia Viazov (born in 1999 in Voronezh\, Russia) is a composer and sound artist working at the intersection of electronic music\, performance\, self-built instruments\, machine learning\, and software development. His personal and collaborative works have been presented at and supported by Ars Electronica Festival\, platformB Stuttgart\, and Darmstädter Ferienkurse. He is developing the framework tusa for the Tidal Cycles live-coding environment\, a terminal implementation that allows the user to run it locally\, fully interact with all parts of the environment\, and extend it. \n  \nNicola Leonard Hein: Rhythmic Traces | Twisted Electronics\nThe piece Rhythmic Traces | Twisted Electronics deals with the question of how the integration of the body and skin resistance into the circuit of an analog synthesizer (Buchla Music Easel) and the connection with a machine learning-based musical agent system (SuperCollider) can change the tonal and rhythmic fluidity of the instrument and develop it beyond its limits. For this piece\, Nicola Leonard Hein uses a unique circuit-bending controller that completely alters the musical reading of the 1970s Buchla Music Easel. Furthermore\, he uses a multi-effect unit programmed in SC and realized with a Bela Microcomputer. 
Hein’s musical agent learns to interact musically\, creating the music in real time together with Hein on the synthesizer and developing the interaction between a human and a machine musical voice. The systemic economy of movement and the interaction with the AI musical agent create polyphonic rhythmic\, tonal\, and spatial structures. The piece focuses on the emergent Dances of Agency (Pickering). \nAbout the artist\nDr. Nicola L. Hein is a sound artist\, guitarist\, composer\, researcher\, programmer\, and professor of Sound Arts and Creative Music Technology at the University of Music Lübeck.\nHe works with A.I.-assisted human-machine interaction\, postdigital lutherie\, intermedia\, sound installations\, augmented reality\, network music\, and spatial audio. His works have been realised in more than 30 countries\, at festivals such as the MaerzMusik Festival\, Sonica Festival\, and Experimental Intermedia. \n  \nDong Zhou: Found Violin x Aromantic Hobby \nFound Violin is an improvisation system that treats the violin as just one of many sound objects. Since late 2024\, Dong Zhou has been developing Aromantic Hobby\, a series of strap-on MIDI controllers. After a few prototypes\, the current controller features a bunny-shaped appearance and wirelessly transmits kinetic data from the wearer to control a chaotic synthesizer. With Found Violin played with the upper body and Aromantic Hobby on the lower body\, the musician plays a duo with themselves. \nAbout the artist\nDong Zhou is a composer-performer based in Hamburg. Zhou gained a B.A. in music engineering at the Shanghai Conservatory and an M.A. in multimedia composition at the Hamburg University of Music and Drama. Zhou has won several prizes\, including first prize in the 2018 ICMC Hacker-N-Makerthon\, a finalist placement in the 2019 Deutscher Musikwettbewerb\, the Nota-n-ear Award 2022\, and a place on the shortlist of the 2025 Giga-Hertz Pop Experimental Production Award. 
Zhou has had works included in the ‘Sound of World’ Microsoft ringtones collection and has been commissioned by festivals and institutions such as the Shanghai International Art Festival\, ZKM Karlsruhe\, and the Stimme X Festival. Zhou is currently a doctoral candidate at ICAM\, Leuphana University. \n  \nOlivier Jambois: Tokens & Strings: an improvisation between an electric guitarist and a local LLM\nThis performance explores real-time co-creation between a human performer and a machine\, specifically investigating the improvisational capabilities of Large Language Models (LLMs) within a musical context. The project originates from an inquiry into the potential of using established LLM architectures—notably the one behind ChatGPT—as responsive improvisational partners. \nA primary challenge in this research is the nature of the LLM: as these models are designed for symbolic processing rather than direct audio generation\, the system must bridge the gap between acoustic signals and semantic analysis. An architecture was developed where the electric guitar’s audio is captured and processed to extract high-level audio descriptors. These descriptors are then sent to the LLM\, which analyzes the performer’s intent and generates a symbolic rhythmic response. This response is mapped to a drum sequencer controlling kick\, snare\, and hi-hat patterns.\nTo address the inherent risks of cloud-based APIs in a live performance environment—such as latency and connectivity instability—this work utilizes a local deployment. While local models often feature a smaller parameter count\, the system has been optimized through careful prompt design and constraint-based logic. This ensures a meaningful rhythmic dialogue while minimizing inference time\, achieving a critical trade-off between algorithmic complexity and real-time musical reactivity. 
\nIn this performance\, the generative drumming output is routed through a RAVE (Real-time Audio Variational auto-Encoder) module\, developed by IRCAM. By applying neural re-synthesis via a percussion pre-trained model\, the system transforms these source samples into complex\, evolving textures\, moving beyond static playback toward a more sophisticated timbral exploration. Throughout the improvisation\, the guitar signal is processed through custom-designed Pure Data patches\, creating a personal sonic language that oscillates between raw strings and highly transformed textures\, seeking a constant state of flux between contrast and blending with the machine-generated environment. \nAbout the artist\nOlivier Jambois is a guitarist\, composer\, and researcher working at the intersection of acoustic tradition\, analog electronics\, and digital innovation. He holds a PhD in condensed matter physics and a master’s degree in jazz and modern music\, a dual background that defines his analytical yet avant-garde approach to music.\nHe won the Jazz à Vienne national competition in 2012\, received “Revelation” honors from Jazz Magazine for his album “Les composantes invisibles\,” and received a grant from the Generalitat de Catalunya to support his research into DIY magnetic tape echoes (2023). He has published several albums and performed at major European festivals. His 2025 release\, Eclosió\, featuring drummer Jim Black\, reflects his ongoing involvement in the contemporary improvisation scene.\nHe is currently a professor and researcher at ENTI\, University of Barcelona\, Spain. His research focuses on AI and generative systems. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/evening-concert-5b-lubeck/
LOCATION:Lübeck University of Music: Großer Saal\, Große Petersgrube 21\, Lübeck\, 23552\, Germany
CATEGORIES:15-05,Concert,Excursion to Lübeck,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260516T110000
DTEND;TZID=Europe/Amsterdam:20260516T173000
DTSTAMP:20260505T121343
CREATED:20260421T183226Z
LAST-MODIFIED:20260428T132551Z
UID:10000188-1778929200-1778952600@icmc2026.ligeti-zentrum.de
SUMMARY:Listening Room 2
DESCRIPTION:Fixed Media | Program Overview\nA Voice Intolerable to Heaven and Earth\nFaming Qin \nCollapse\nVarun Kishore \nPaper Wreck\nChun-Han Huang \nRedDeadRouletteReconstruction\nNattakon Lertwattanaruk \nSingularity\nSilvia Matheus \nSpawn\nPaul Oehlers \nTekstil\nNayaka Adinata and Muhammad Welderahmat \nThe Unfinished Drum\nKeming Zeng \nThe Voice of the Tree\nYufen Qiu \nThe Lake Bell\n璟 李   \n  \nAbout the pieces & artists\nFaming Qin: A Voice Intolerable to Heaven and Earth\nA Voice Intolerable to Heaven and Earth is an electroacoustic work that explores the instability and reconstruction of order through sound. The piece unfolds within a fluctuating space between reality and illusion\, where sonic materials repeatedly emerge\, fracture\, and reassemble. Composed in four sections\, it employs the Kyma system for sound synthesis and transformation\, combined with additional processing tools to shape evolving textures. Through cycles of creation and collapse\, resonance is treated not as a fixed outcome but as a temporal trace—an echo that extends beyond linear causality. The work rejects stable form and rule-based structure\, instead allowing sound to drift back into broader currents of time and space. In doing so\, it reflects on sound as both a primordial force and an enduring presence\, probing origins while opening toward indeterminate futures. \nAbout the artist\nFaming Qin (Populian) is an electronic music composer and sound designer born in 2005\, currently pursuing a Bachelor’s degree in Electronic Music Composition at the Xi’an Conservatory of Music\, China. His creative approach blends elements of realism and impressionism\, transforming everyday sounds into expressive musical narratives. \n  \nVarun Kishore: Collapse\n“There is no punctual moment of disaster; the world doesn’t end with a bang\, it winks out\, unravels\, gradually falls apart [. . .] 
and all that is left is the consumer-spectator\, trudging through the ruins and the relics.” – Mark Fisher\, Capitalist Realism \nCollapse draws on aesthetic influences from Brutalist architecture\, the large-scale concrete and metal artworks of Anselm Kiefer\, Urs Fischer’s excavated gallery floors\, and Mark Fisher’s Capitalist Realism. Sonic materials include field recordings of metal drums\, pipes\, cinderblocks\, concrete slabs\, and other detritus; these were used to generate a collection of phrases via a performable Max patch (a “drunken” Euclidean sequencer of my own design). These phrases were heavily and meticulously chipped away at\, like a sculpture emerging from a solid block of concrete\, forming gestures that transport the listener through sonic ruins falling apart around them. Additional materials include judiciously filtered electric guitar and noise textures\, modular synthesis\, and a single drum machine sample. \nAbout the artist\nVarun Kishore (b. 1990) is a guitarist and composer from Kolkata\, India. His work explores interdisciplinary approaches to music technology\, with a focus on building frameworks for composition and improvisation to investigate maximalist methodologies and what he sees as the ‘apocalyptic’ nature of creative practice. Varun’s work has been performed at SEAMUS\, NYCEMF\, Ars Electronica\, and others. Varun is a PhD candidate in Composition & Computer Technologies at the University of Virginia. \n  \nChun-Han Huang: Paper Wreck\nPaper Wreck is a fixed media electroacoustic composition constructed primarily from the sounds of paper materials. Utilizing Foley recording techniques—such as tearing\, friction\, and the destruction of cardboard and paper sheets—combined with vocal elements\, the piece explores the sonic potential of “soft” matter. Through digital signal processing techniques including granular synthesis and cepstral morphing\, these raw\, noise-like textures are transformed into a cohesive musical structure. 
While the work explores complex spectral textures\, this version is presented in stereo format. \nAbout the artist\nChun-Han Huang (b. 2002) is a composer and sound artist based in Taiwan. He is currently a graduate student majoring in Computer Music at the Institute of Music\, National Yang Ming Chiao Tung University (NYCU). His creative practice focuses on electroacoustic composition and sound design\, exploring the intersection of organic sound sources and digital signal processing. \n  \nNattakon Lertwattanaruk: RedDeadRouletteReconstruction\nEverything heard in RedDeadRouletteReconstruction is reconstructed from either of two sources: the mechanical churn of revolvers sampled from the video game Red Dead Redemption 2 (2018)\, and fragments from Korean pop outfit Red Velvet’s 2016 hit title track Russian Roulette. In their original contexts\, these sounds suggest play—stylized video game violence\, catchy bubblegum pop hooks\, glossy surfaces. Here\, they are recast: disassembled\, distorted\, and folded into a dark irony where the same little details are relentlessly repeated and saturated until they collapse into a violent mess. \nAbout the artist\nNattakon Lertwattanaruk (b. 2006) is a composer and performer originally from Bangkok\, Thailand. His works often engage with the reconstruction of cultural and musical phenomena abstracted from their original contexts\, instruments as a site of physical exploration and extension\, and the integration of multimedia in dialogue with the concert setting. He is a recipient of the Distinguished Prize at the SCG Young Thai Artist Award (2022\, 2024) and has collaborated with ensembles including Tacet(i)\, the Thai Youth Orchestra\, Orkest De Ereprijs\, OSSIA New Music\, Duo Dubois\, and more. 
His work has been presented at numerous international festivals\, such as the Thailand New Music and Arts Symposium\, IntAct Festival\, Thailand International Composition Festival\, Princess Galyani Vadhana International Music Festival\, and the China-ASEAN Music Festival. Nattakon is currently pursuing a Bachelor of Music in Composition at the Eastman School of Music in Rochester\, New York\, where he studies under Dr. Evis Sammoutis. Previous mentors include Piyawat Louilarpprasert\, Daniel Pesca\, and Mikel Kuehn. \n  \nSilvia Matheus: Singularity\nSingularity seeks a point of convergence within a field of instability. A central melodic line\, generated with a Buchla system\, carries traces of early electronic sound. Around it\, resonant metallic textures emerge and recede. Silence shapes the form\, allowing the melody to fragment and drift into distance. Sound is activated through breath\, using the Kyma system to connect physical gesture and electronic response. The piece moves not toward climax\, but toward disappearance. \nAbout the artist\nSilvia Matheus is a Brazilian composer\, sound artist\, and performer based in the United States. Her work centers on electronic and electroacoustic music\, interactive performance\, and embodied sound practices\, exploring themes of temporal trace\, time\, and physical gesture through live electronics\, sensor-based systems\, and acoustic instruments. 
She holds an MFA in Electronic Music and Recording Media from Mills College and studied interactive music and performance at the Center for New Music and Audio Technologies (UC Berkeley). Her early training in Brazil included composition studies with Hans-Joachim Koellreutter\, whose experimental approach strongly influenced her artistic development. Since the 1980s\, Matheus has worked at the intersection of score\, instrument\, and technology\, developing interactive systems where physical gesture\, breath\, and movement directly shape sound. Her work has been presented internationally at festivals\, conferences\, and art spaces\, including the International Computer Music Conference (ICMC) in Hong Kong\, New York\, Havana\, Japan\, Denmark\, and Canada. Through solo and collaborative projects\, she continues to create immersive works for improvisation. \n  \nPaul Oehlers: Spawn\nCommissioned to celebrate the fortieth anniversary of the University of Illinois Experimental Music Studios\, Spawn celebrates the legacy of the studios and explores creative ground through its completion. \nAbout the artist\nPaul A. Oehlers is most recognized for his “extraordinarily evocative” film scores (Variety). Films incorporating his music have won the Grand Jury prize at the Hamptons International Film Festival\, the Atlanta International Film Festival\, and the Indiefest Film Festival. In addition\, films with his music have screened at dozens of festivals in Europe\, Asia\, Africa\, and Australia. Paul A. 
Oehlers’ compositions have been performed in the United States and abroad\, including performances at the Society for Electro-acoustic Music in the United States national conferences\, the International Computer Music Conferences\, the Gamper New Music Festival\, the Seoul International Electro-acoustic Music Festival\, the Institut für Neue Musik und Musikerziehung in Darmstadt\, Germany\, and the VII Annual Brazilian Electronic Music Festival\, as well as a 1987 command performance for former United States President Ronald Reagan. He was the first composer ever commissioned by the Nature Conservancy to compose a concert composition about prairie conservation. Paul was named the Margaret Lee Crofts Fellow by the MacDowell Colony for the year 2006. He is currently Associate Professor of Audio Technology at American University in Washington\, DC. \n  \nNayaka Adinata and Muhammad Welderahmat: Tekstil\nTekstil is a fixed-media sound work by Nayaka Farrell Adinata and Muhammad Welderahmat\, made entirely from everyday sounds recorded in the artists’ surrounding environments. The recordings are not used to document specific places or events\, but as raw sonic material shaped by how each artist listens\, selects\, and arranges sound. Coming from different regions and backgrounds\, the two artists bring distinct approaches to everyday sound\, which are combined to form a shared yet varied sonic vocabulary. \nAbout the artists\nMuhammad Welderahmat is a composer\, live coder\, songwriter\, and improviser born in Palu\, Central Sulawesi (2002)\, and raised in Parigi Moutong. Active since 2018\, he works across experimental music\, free improvisation\, soundscape\, ambient\, and live coding. He studied with Edy Subianto\, Talis\, Irwan Kurniawan\, and Rangga Purnama Aji. Environmental sound is central to his practice\, engaging with empirical experience\, cultural contexts\, social phenomena\, and critical expression. 
He has released three albums: Phenomenon (2023)\, Senyap (2025)\, and Menuju Beranda (2025)\, and is an active member of Paguyuban Algorave Indonesia (PAI). \nNayaka Farrell Adinata (b. 2005) is a composer studying at Pelita Harapan University under Stevie Jonathan Sutanto. Trained in classical guitar from an early age\, he studied at Sekolah Menengah Musik (SMM) Yogyakarta before focusing on composition. He has participated in ARTJOG\, OMCM\, and Jogja Noise Bombing\, and received awards including 1st Prize at The Papandayan International Jazz Competition (2023) and the Best New Young Talent Award (2023). His electronic work Lost Contact premiered at the 2025 Immersive Festival in Lisbon. \n  \nKeming Zeng: The Unfinished Drum\nThe Unfinished Drum is an electroacoustic work born from a core philosophical inquiry: what is the ultimate destination of the sound of a drum that is perpetually struck yet never “finished”? The piece deconstructs traditional rhythmic pulses into continuously evolving sonic entities through extreme electronic transformation and spatial reconstruction of acoustic drum sources. These sounds constantly morph on the boundary between “formation” and “dissipation\,” between “signal” and “echo\,” creating an immersive sound field that is both oppressive and meditative. It invites the listener to confront the very existence and evaporation of sound in its absolute state\, experiencing an acoustic ritual without end. \nAbout the artist\nKeming Zeng (b. January 7\, 2002) is a first-year graduate student at the Wuhan Conservatory of Music. \n  \nYufen Qiu: The Voice of the Tree\nThe Voice of the Tree is a fixed media work composed using recorded performances of shakuhachi\, piano\, bass drum\, and timpani. Taking the tree as its central metaphor\, the piece explores the idea of an “inaudible voice”—forms of natural presence that do not produce sound directly\, yet become perceivable through relationships and attentive listening. 
The work is structured around three interconnected sections corresponding to a tree’s physical form: roots\, trunk\, and branches/leaves. Rather than following a linear narrative\, the composition develops through a mapping between physical morphology and sound strategies. Structural and tactile qualities associated with each element—such as rooting and spreading\, structural support and inner grain\, and extension with breath-like motion—are abstracted and translated into instrumental writing that shapes timbre\, density\, and temporal flow. In the compositional process\, recorded performances of shakuhachi\, piano\, and percussion are further transformed and used as the primary sound materials of the work. As the piece unfolds\, clear instrumental characteristics and compositional traces are retained\, while the sound gradually emphasizes continuity\, internal resonance\, and slow temporal change\, allowing the presence of the tree to emerge through transformation rather than representation. Presented in stereo\, the work maintains melodic contours in the shakuhachi and rhythmic momentum in the piano and percussion\, while placing them alongside sustained resonance and subtle shifts in texture. In this way\, melody and rhythm function both as structural cues and as materials that can be extended and reshaped in post-production\, creating a listening experience that evokes processes of natural growth and flow. Through The Voice of the Tree\, the composer invites the audience to reconsider the relationship between sound\, silence\, and perception\, and to attend to overlooked forms of natural existence. \nAbout the artist\nYufen Qiu is a graduate student specializing in electroacoustic music and sound design. Their work focuses on the interaction between acoustic instruments and electronic processing\, exploring immersive spatialization techniques and experimental sound textures. 
Currently\, they are developing a series of works centered on environmental themes\, using instruments to mimic natural sounds and investigate new sonic possibilities. Through this approach\, they aim to deepen the connection between music\, nature\, and ecological awareness. While continuing to refine their artistic practice\, they are actively seeking opportunities to present and further develop their work in both musical and academic contexts. \n  \n璟 李 (Li Jing): The Lake Bell\nThis fixed media acousmatic work is crafted based on audio signal processing\, drawing inspiration from the legend of “the ancient bell summoning springs” at Honey Spring Lake. The piece integrates three core sound elements: resonant bell tones that evoke the mythic narrative\, rhythmic drum beats mirroring the laborious digging pace of the people in the legend\, and authentic field recordings of the lake’s surrounding environment to construct a vivid sense of time and place. Through digital signal processing techniques—including spectral distortion\, phase modulation\, and fragmentary deconstruction and reassembly—these acoustic materials are transformed beyond their original forms. By blurring the boundaries between natural soundscapes\, traditional instrumental timbres\, and processed electronic textures\, the work attempts to let sound itself carry the weight of history and collective memory\, even as the physical traces of the legend fade over time. \nAbout the artist\nLi Jing (b. February 14\, 2006) is an undergraduate student majoring in Music Acoustics Direction (Electronic Music Production) at Wuhan Conservatory of Music (enrolled 2024). His creative focus lies in the artistic application of audio signal processing in electronic music\, exploring psychoacoustics and auditory illusion experiences. He constructs an experimental and immersive electronic music language through spectral shaping\, phase modulation and other technologies. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/listening-room-2-5/
LOCATION:Hamburg University of Technology\, Building A (A 0.14)\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:16-05,Listening Room,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260516T110000
DTEND;TZID=Europe/Amsterdam:20260516T173000
DTSTAMP:20260505T121343
CREATED:20260421T190101Z
LAST-MODIFIED:20260428T131553Z
UID:10000179-1778929200-1778952600@icmc2026.ligeti-zentrum.de
SUMMARY:Listening Room 1
DESCRIPTION:Fixed Media | Program Overview\nComfortable Distance\nGiovanni Crovetto \nDisappearing\nMatteo Tomasetti\, Francesco Casanova\, Andrea Veneri\, Vili Pääkkö and Andrea Strata \nHokkaido Snow Soundscape\nZiwei Yang \nImpulse Impromptu III\nTolga Yayalar \n4-body Interactions (7’34’’)\nLeonidas Spiliopoulos \nArchitecture éphémère\nNicola Giannini \nConcerto for Piano and Loudspeaker Orchestra\nNeal Farwell \nCorium II\nMathieu Lacroix \nGott\nRikhardur H. Fridriksson \nMatters 10\nDaniel Mayer \nOBSess\nAllison Ogden \nOscillation of Life\nJan Jacob Hofmann \nNoType\, algorithmic 3D audio processing\nVilbjørg Broch Phe \nSonic Fragmentation – a fixed media multichannel piece\nDaniel Gomes \nCalling in “Raumforderungen” 8-channel diffusion work\nAleksandar Zecevic and Kiran Bhumber \nFluidante. A quadraphonic recording from the Latent Russando framework\nMartin Heinze \nOdradek\nCristian Gabriele Argento \n  \nAbout the pieces & artists\nGiovanni Crovetto: Comfortable Distance\nComfortable Distance is an electronic music work for fixed media that investigates the relationship between dramaturgy\, timbral transformation\, and perceived spatial depth. The musical discourse is articulated through processes of sound transformation rather than linear narration\, allowing form to arise from the internal behavior of the materials. The piece unfolds through a continuous play of tension and release\, shaped by the dramatic deployment of the full acoustic spectrum and by shifts between moments of expectation and rupture. Sudden impacts alternate with suspended sonic states\, while veiled\, distant sound masses contrast with intimate\, close-up details. These oppositions generate a form grounded in instability\, oscillating between predictability and unpredictability\, and between accumulation and sudden resolution. Conceived for octophonic spatialization\, the work uses multichannel space to support its formal dramaturgy. 
Spatial depth emerges through perceptual shifts between contraction and expansion\, as well as through abrupt transitions between proximity and distance\, constraint and release. \nAbout the artist\nGiovanni Crovetto (Milan\, 1991) is a composer and educator working in electronic and electroacoustic music\, with a focus on dramaturgy\, perception\, and spatial listening. He studied Composition and Music Theory at the Kunstuniversität Graz and later earned a degree in Musicology from the University of Milan. He is currently completing a Master’s degree in Composition at the Conservatory of Milan and is enrolled in a PhD program in Composition and Musical Performance at the Conservatories of Ferrara\, Pescara\, Trieste\, and Udine. His research explores listening processes and sound transformation across fixed-media and multichannel contexts\, integrating compositional practice with educational work. \n  \nMatteo Tomasetti\, Francesco Casanova\, Andrea Veneri\, Vili Pääkkö and Andrea Strata: Disappearing\nDisappearing is a five-minds spatial composition created during an artistic residency at Laidi Palace\, a historic palace in Latvia. The architecture\, the atmosphere\, and the sense of time suspended within its walls inspired us to imagine a space where presence and absence constantly shift. The piece explores the idea of appearing and disappearing; sounds\, gestures\, and trajectories emerging briefly before dissolving again into the environment. As we worked within the palace\, the building itself became an active presence: rooms\, corridors\, and reverberant spaces shaped the work as much as our intentions did. The piece reflects this dialogue with the place\, blurring the boundaries between visibility and invisibility. Disappearing is an attempt to capture a fragile state of being\, where moments surface\, fade\, and leave only traces behind. 
\nAbout the artists\nMatteo Tomasetti is a sound artist\, live performer\, and researcher working with spatial audio\, gesture-based interfaces\, and immersive musical experiences. He holds a PhD in Music Technology from the University of Trento and works across electroacoustic music\, sound art\, and audiovisual performance. He currently teaches electronic music at the Music Conservatory of Pescara (Italy). \nFrancesco Casanova is a sound artist based in Graz (Austria)\, active in music software development. His research focuses on sound design\, human–computer interaction\, multichannel audio\, and sound installations. He is currently studying Computer Music at the IEM in Graz. \nAndrea Veneri is an electronic music composer working with audiovisual performance\, live electronics\, and real-time audio systems. His practice centers on sound design and interactive tools developed in Max/MSP. He currently teaches electronic music at the Music Conservatory of Pescara (Italy). \nVili Pääkkö (Vili Aarre) is a Finnish sound artist working in contemporary performing arts\, music\, installations\, and video. He holds degrees in sound design from the University of the Arts Helsinki and has worked internationally with theaters and art institutions. \nAndrea Strata is an Italian multimedia artist and creative coder based in Berlin. With a background in computer music\, he is currently a PhD researcher at the Conservatory of Vicenza\, focusing on human–computer interaction\, movement analysis\, and real-time sound generation. \n  \nZiwei Yang: Hokkaido Snow Soundscape\nHokkaido Snow Soundscape is a fixed-media work based on the winter soundscapes of Hokkaido\, presented through an 8.1ch spatial audio design. All sound materials were recorded on location\, including the pedestrian area in front of Sapporo Station\, a snow-covered park in late-night Otaru\, and the walking trails of Mount Hakodate. 
These sites form a multi-layered sonic map of how snow exists and transforms across different environments. During field recording\, various methods were used to explore the acoustic expressiveness of snow: rubbing and compressing snow by hand\, footsteps with different shoe materials on icy ground\, and contact sounds created with shovels\, gloves\, and other tools. These approaches extend the expressive range of natural sound material and reveal the diverse timbres and physical “life” of snow. The 8.1ch spatial sound design allows precise placement and movement of sounds\, creating an immersive auditory field where the listener can perceive the flow\, depth\, and ephemerality of snow. The movement\, reflection\, and fading of sound in space are framed as part of winter’s natural cycle—an auditory expression of transience. As both a document of Hokkaido’s winter and a response to the disappearing sounds of nature\, the work preserves these fragile sonic moments in the face of global warming. Hokkaido Snow Soundscape invites listeners to rediscover the purity\, delicacy\, and fleeting presence of winter through sound. \nAbout the artist\nYang Ziwei (b.1999\, Hunan\, China) graduated from the Music Technology Department of Xinghai Conservatory of Music in 2021 and is currently pursuing a master’s degree in Music and Sound Design at Senzoku Gakuen College of Music\, Japan. He has studied composition and electronic music with Yoshihiro Nakagawa\, Takeyoshi Mori\, and Chang Lin. His works have been selected for events such as MUSICACOUSTICA-HANGZHOU 2024\, IEMC2024\, ICMC2025\, and CCMC2025. His research focuses on urban soundscapes and soundscape composition. His creative work investigates methods of recording\, processing\, and spatially reconstructing environmental sounds\, exploring sonic memory\, cultural symbolism\, and cross-cultural perspectives within contemporary soundscape practice. 
\n  \nTolga Yayalar: Impulse Impromptu III\nImpulse Impromptu III (2025) is an electroacoustic work based entirely on the sounds of two mechanical musical boxes. The piece investigates the musical box as both instrument and object\, focusing on its dual identity as a nostalgic sound source and a fragile mechanical system. Rather than treating the musical box solely as a melodic device\, the work explores its full sonic spectrum\, including pitched material\, mechanical noise\, creaks\, clicks\, and friction sounds. The compositional process is rooted in improvisation and unfolds in two distinct stages. The first stage consists of extensive acoustic improvisations with the musical boxes\, employing both conventional and unconventional techniques such as winding\, tapping\, scraping\, and manual interference with the mechanisms. These recordings function simultaneously as documentary material and as a sonic reservoir. In the second stage\, selected recordings are transformed through sampling and electronic processing to create custom virtual instruments\, enabling a further layer of improvisation within an electroacoustic context. Through this process\, the musical box is recontextualized as an immersive sonic environment rather than a fixed sound object. The piece aims to evoke the perceptual sensation of being “inside” the instrument itself\, as if the listener were miniaturized and placed within its inner workings. This perspective highlights the intimacy\, precision\, and instability of the mechanism\, revealing an eerie and delicate sound world that oscillates between familiarity and estrangement. Beyond its material exploration\, the work engages with themes of memory\, fragility\, and nostalgia. The musical box functions as a metaphor for recollection: precise yet vulnerable\, repetitive yet prone to degradation. 
By magnifying its internal sounds and spatializing them in the electroacoustic domain\, the piece reflects on the tenuous relationship between mechanical repetition and the emotional resonance often associated with remembered sound. \nAbout the artist\nTolga Yayalar (b. 1973) is a composer whose works have been performed by ensembles such as Le Nouvel Ensemble Moderne\, Alarm Will Sound\, and the Orchestre National de Lorraine\, and presented at festivals including MaerzMusik Berlin\, Ars Electronica\, and Acht Brücken Köln. He has received numerous composition awards and collaborated with choreographer Korhan Başaran on interdisciplinary projects. Yayalar holds degrees from Berklee and Istanbul Technical University\, earned a Ph.D. from Harvard\, and teaches composition at Bilkent University. \n  \nLeonidas Spiliopoulos: 4-body Interactions (7’34’’)\nA generative composition exploring the infinite possible interactions between four bodies entangled according to Newton’s law of universal gravitation. Each movement (or possibility) presents how four bodies can interact sonically and spatially in significantly different ways. Each body is represented as a different instrument orbiting in three-dimensional space embedded in an ambisonic sound field\, with the audience placed at the center of mass of the whole system. The coordinates and velocities of the bodies simultaneously modulate key parameters of the synthesized instruments\, such as amplitude\, pitch\, filtering\, frequency modulation\, etc. The system of equations of the four bodies permits a wide range of diverse dynamics\, ranging from periodic\, cyclical behavior to aperiodic and chaotic behavior\, which is highly dependent on the initial conditions of the system. The system for four bodies cannot be solved analytically\, but must be approximated instead\, leading to uncertainty from the perspective of the observer despite the underlying deterministic structure. 
Small errors compound over time\, leading to a significant divergence between our predictions and reality\, revealing our uncertainty about the future and the inherent limits of our role as observers. The interactions of the four bodies in this composition reveal qualitatively different relationships between them\, representing the multitude and diversity of human interactions and patterns of participation generated by our attractions and proclivities to each other. In Movement/Possibility 1\, all four bodies interact loosely as a single group with evolving fidelities\, revealing the complex interactions between human relationships. In Movement/Possibility 2\, three of the four bodies form an interaction group closely orbiting each other. One body has a unique trajectory\, initially moving away from the group\, gradually reversing course and even briefly interacting with the group\, but then temporarily escaping their pull to return on a solitary path. In Movement/Possibility 3\, the four bodies interact closely in two pairs. Within each pair the bodies exhibit tight coupling\, but the two pairs are on divergent solitary paths becoming increasingly estranged and polarised. \nAbout the artist\nLeonidas Spiliopoulos is an academic researcher at the Max Planck Institute for Human Development specialising in mathematical models of individual and strategic decision making and learning. His research is grounded in the inter-disciplinary insights afforded by the fields of economics\, game theory\, cognitive psychology/neuroscience\, and artificial intelligence. He is a keen explorer of the intersection of science and art\, particularly electroacoustic and generative music. \n  \nNicola Giannini: Architecture éphémère\nArchitecture éphémère is a fixed-media work that creates an immersive experience in which listeners can lose themselves in the time and space of the music. 
The title refers to the idea that spatial music can generate an ephemeral\, constantly evolving architecture that overlays the physical environment. Drawing on philosopher Gernot Böhme\, my practice explores how the diffusion of sound shapes the atmosphere of the spaces we inhabit. The piece explores tensions between opposing spatial sensations—proximity and distance\, intimacy and immensity—and the depth of field of the sonic space. Most of the materials are generated through sound synthesis\, which I use to create sounds with different shapes\, sizes\, and densities. Architecture éphémère unfolds as a journey through distinct atmospheres\, shifting from explosive\, detailed passages\, where trajectories can be pinpointed\, to soft\, diffuse\, hypnotic spaces that verge on disorientation. The opening section presents a frontal sound mass that slowly advances\, suggesting immensity and inexorability\, before bursting into aggressive gestures that travel along multiple trajectories\, as if caught in an explosion. Wide-range glissandi\, projected into a rich ambisonic reverberation\, reinforce the sense of motion and tension\, immersing the audience in a deep and articulated sound field. A second phase is inspired by the image of a vast snowy expanse whose boundaries remain invisible. Here\, slowly evolving sine waves\, often difficult to localise\, create an ambiguous\, enveloping atmosphere. Moving on circular trajectories at slightly different speeds\, they intersect to form chords\, clusters\, and beating patterns\, some in very low registers that engage the body as much as the ear. Occasional sharp editing cuts introduce subtle spatio-temporal breaks\, further intensifying the hypnotic effect. In the final section\, perception seems to fragment gradually. Different sound materials are spectrally split and rotated in space at slightly offset speeds\, a technique inspired by Robert Normandeau\, enveloping the listener in a hypnotic conclusion. 
Architecture éphémère was initially composed during the workshop series Composing Fixed-media Multichannel Music on a Hybrid Loudspeaker Array led by Pierre Alexandre Tremblay (2022–2023) at the Multimedia Room of CIRMMT in Montreal\, and reworked in 2025. \nAbout the artist\nNicola Giannini is an artist-researcher who creates immersive sound experiences. His practice lies at the intersection of experimental music\, sound art\, collaborative practices\, and creation in public space. He designs sound spaces as ephemeral architectures where intensity and lightness\, the real and the surreal\, the natural and the synthetic intertwine. His works have been presented in North and South America\, Asia\, Australia\, and Europe. He holds a doctorate in composition from the University of Montreal and is a postdoctoral fellow funded by the FRQSC at UQAM and McGill. \n  \nNeal Farwell: Concerto for Piano and Loudspeaker Orchestra\nI had the opportunity to write a concerto for a remarkable soloist; and\, with it\, came the idea of writing for loudspeaker orchestra. A simple pivot in the word ‘orchestra’\, from the usual concerto accompaniment\, it opens up the possibility of sound spatialisation\, the challenge of melding that to a stage-bound piano\, and a fluid conception of the orchestral sound-world. In principle\, computer music can contain and mediate any kind of sound material — but in practice we seem to draw lines\, for instance between soundscape; acousmatic music; algorithmic composition; and the digital instrumentation of media composers. I wanted to move fluidly between these\, and to have fun\, while still writing a ‘serious’ piece. My soloist is an enchanter\, drawing energies between varied worlds. 
The original programme mentions that ‘the orchestra of loudspeakers gives the piano the possibility to take wing; and the orchestra in the loudspeakers lets it dance.’ The most literal sense of the piano taking wing is in the form of pealing bells in an imagined townscape\, a point of arrival after the journey from a naturalistic (but also constructed) woodland dawn. Not-quite-real instruments sing\, dance\, and morph. The programmatic element is worn on the concerto’s sleeve; but it is only one part of a musical argument involving sonic and material transformation. The piano is purely acoustic. Its pitch world is an innovative constructed tonality. It treats 19 semitones as its interval of equivalence (the interval from fundamental to third harmonic) and downplays octave chroma. It gives systematic possibilities akin to common practice harmony\, but able to sound very different while still suiting the keyboard. Computer models helped develop the material. The electronic materials are cast in 7.0.4 surround (i.e. 7.0 plus 4 height). They comprise multiple sound cues that overlap\, segue\, interrupt\, etc.; and a custom performance environment built in Max. All the material is pre-composed\, made in a studio with physical surround monitoring\, but reshaped within the performance tool. There is particular attention to performance adaptability. Some passages are ‘placed’\, with the soloist leading. In others\, the computer performer must conduct\, interpreting animated cues from the software UI to ensure sounding ensemble. There is an extended duet in which the computer performer plays ‘tablature’ patterns on MIDI keyboard\, via a layer of anticipatory score following (using Ircam antescofo)\, to interpolate multiple elements of the rhythmic dialogue in flexible tempo. These shifting strategies reflect the relationship of soloist and conductor in a work with live orchestra. The full Concerto plays in a single span:\nPart 1: — I. Invocation — II. Circles — III. 
Image — IV. Incantation: cadenza — V. Flight\nPart 2: — VI. Arc — VII. Nocturne — VIII. Envoi\nTo meet duration constraints for ICMC\, Part 1 is proposed as a stand-alone performance\, with a shortened cadenza and a short alternate ending. We gave the first concert performances in 2024. The submitted version is a demo with a digital realisation of the piano part. Binaurally rendered\, for a hint of the spatialisation. \nAbout the artist\nNeal Farwell composes acoustic\, acousmatic\, and mixed electroacoustic music. He gained his PhD in composition from the University of East Anglia\, studying with Simon Waters. In 1998 Neal moved to the USA as a Knox Fellow at Harvard University\, and continued his studies with Bernard Rands\, Mario Davidovsky and David Rakowski. Since January 2002\, Neal has taught at the University of Bristol\, UK\, where he is Professor of Composition. Neal is active also as a performer\, regularly conducting the University Symphony Orchestra\, working with outside ensembles\, and presenting the electroacoustic concert series Sonic Voyages. \n  \nMathieu Lacroix: Corium II\nCorium is a material created during a nuclear meltdown\, such as the Chernobyl accident. Its texture looks similar to molten lava and it may be heated up to 2\,500 degrees Celsius. The radiation is so intense that even decades later it can distort photographs and instantly kill. This is the second and final piece in this series. The piece attempts to combine aesthetic aspects of extreme drone metal with the harmonic subtleties of contemporary music. The sound sources are mainly from a Warr guitar (a touch-style instrument similar to the Chapman Stick). \nAbout the artist\nMathieu Lacroix is a French-Canadian composer and music producer working in Norway. He has studied and/or worked with composers such as Hans Tutschku\, Kaija Saariaho\, Jaime Reis\, Ståle Kleiberg\, Trond Engum\, Michael Obst\, Markus Reuter\, and Annette vande Gorne. 
He completed his studies at NTNU in Norway\, IRCAM in France\, and Musiques & recherches in Belgium. He has been invited to festivals such as Mixtur\, Meta.Morf and Manifeste. His music is performed in over fifteen countries. He is a member of the Electric Audio Unit with Natasha Barrett and Ernst van der Loo. In 2021 he completed a PhD thesis on synchronization strategies in mixed music. He also plays Chapman Stick\, and works as a producer and sound engineer. He is an associate professor in composition and music production at the Inland Norway University of Applied Sciences. \n  \nRikhardur H. Fridriksson: Gott\nGott (2022) is a drawn-out rendering of one spoken sentence saying that my old hometown is a good place to live in. This was a famous quote from a former mayor of the town. When that same town\, many years later\, commissioned a piece from me\, I thought of this sentence. Of course as a way of flattering my benefactors\, but also as a way of expressing my fond memories of growing up there. The drawing-out of the sentence is far from being a plain time stretch. I use the opportunity to play freely with bits of words and letters of various sizes. \nAbout the artist\nRikhardur H. Fridriksson (b. 1960) studied composition in Reykjavik\, New York\, Siena and The Hague. His music falls into two general categories: he either makes pure electro-acoustic music\, working with natural sounds and their movement in space\, or he does live improvisations\, playing electric guitar\, processed with live electronics\, either alone or with the Icelandic Sound Company. He teaches composition and electronic music at Kopavogur Music School. In his spare time he plays punk rock. \n  \nDaniel Mayer: Matters 10\nMatters\, a series of electro-acoustic multichannel pieces\, started in 2017. It reflects my practice-driven research\, where I’m artistically exploring various sound synthesis and spatialization variants. 
Matters 10 uses buffer rewriting\, the simultaneous reading from and writing to the same buffer with varying speeds. The result of this procedure is highly unpredictable. However\, algorithmic control of the various parameters is employed to contain the output and produce the formal structure and spatial distribution. Simultaneous writing to and reading from an audio buffer is a simple though widely unknown and undocumented idea\, which can lead to a wide range of surprising results. It continues the tradition of non-standard synthesis\, and there are only scattered hints at individual approaches. When writing to and reading from a buffer are performed under “ideal” conditions – equal rates\, writing before reading – the procedure results in a simple delay line. Things become interesting if rates are unequal or modulated. Then\, the delay is disturbed\, and the sounding result might include alias-like effects and glitches. Rate modulation can lead to audible sidebands\, thus mixing the concepts of buffer rewriting and buffer modulation\, which\, on its own\, is also an easy\, effective\, and underestimated processing technique. Feedback or overdubbing instead of plain rewriting are further possible extensions of the procedure; playing with the bounds of the buffer section is another one. Short impulses as input can lead to compelling resonance effects. As any input signal can be a source for buffer rewriting – and no-input variants with feedback are equally possible – it becomes clear that the variance of results is large. \nAbout the artist\nDaniel Mayer (*1967) works in the area of sound synthesis and generative computer algorithms. Performances at numerous festivals of electronic and contemporary music\, Giga-Hertz production prize 2007 at ZKM Karlsruhe. Completed studies of pure mathematics\, philosophy and composition (Gerd Kühr) in Graz. Postgraduate study at ES Basel with Hanspeter Kyburz. Visiting professor for electro-acoustic composition at IEM Graz. 
Edgard Varèse Guest Professor of the DAAD at TU Berlin in winter 2022/23. \n  \nAllison Ogden: OBSess\nOBSess Program Notes: The origins and title of this piece were entirely unintentional. While brainstorming ideas for a different composition\, I began experimenting with oboe samples\, applying various filters to them. I named the patch “OBSess” (Oboe + Session) without much thought. However\, I found myself repeatedly returning to this patch\, going down the well-trod computer music rabbit hole of “What if I…?”. It wasn’t until I had accumulated several minutes of material that I noticed the double meaning of “OBSess\,” and it felt fitting. The piece is constructed by filtering and deconstructing oboe samples\, exploring a cycle of reconstruction and re-deconstruction. My initial curiosity was to see if I could sonically rebuild a “giant oboe” from its fragmented sounds in an immersive 8-channel setup\, simply because it seemed like a fun challenge. The process then evolved into further deconstructing the sound\, leading to a playful exploration of construction\, deconstruction\, and reconstruction. Ultimately\, this piece is about the joy of creation and experimentation—it was genuinely a lot of fun to make. OBSess was composed in the spring and summer of 2024. \nAbout the artist\nDr. Allison Ogden is a composer\, teacher and author who currently works as an Assistant Professor at The University of Louisville. She received her BM from The Eastman School of Music and her PhD from The University of Chicago. She now considers herself a “re-emerging composer”\, as she took time away from composition to be a mother to her two children\, and now seeks to raise awareness of the difficulties faced by those in the creative fields who need to step away due to child care\, elder care\, health concerns or other reasons\, issues faced more commonly\, though not exclusively\, by women. 
Allison’s music has demonstrated a connection to the natural environment\, with astronomy\, nocturnal experiences and light pollution in particular being of prime focus. Working in both acoustic and electroacoustic realms\, her music focuses on subtle textural shifts\, sonic soundscapes\, meditative and immersive acoustic spaces. As a Professor\, Allison has worked to expand music course offerings at the University of Louisville and to make the music studied more inclusive and reflective of a modern\, global society. A longtime fan of Hip Hop\, she created the very popular Hip Hop: Music and Culture course at UofL and has lectured at universities and colleges in the United States and Europe on the intersection of Hip Hop and social justice movements. In August 2025 her college-level textbook\, entitled Come Correct: A Comprehensive History of Hip Hop Music\, was published. \n  \nJan Jacob Hofmann: Oscillation of Life\nThis is an electroacoustic work in 7th order Ambisonics\, fixed media. For this venue\, a 3rd order decode has been provided. The piece is about the generating forces of nature. About the idea of an underlying universal power that gives shape and energy to all living beings. What if there was a yet undiscovered oscillating energy beyond acoustic and electromagnetic oscillation\, that gave shape\, energy and interconnection to all living beings? That enabled/guided/facilitated the organisation of molecules and cells to higher organisms\, beyond genetic chemical reactions and metabolism\, opposed to the common increase of entropy? That creates shape like symmetry up to far more complex mathematical order\, beauty out of chaos by transmitting harmonic information? What would that oscillation sound like\, if we could perceive it? Would we listen? Would we be able to tune in? The piece is spatially encoded in 7th order Ambisonics. The sounds and the spatial design were created with the sound synthesis program “Csound”. 
Other programs were “Cmask” and “Blue”. \nAbout the artist\nJan Jacob Hofmann. Born 1966 in Duesseldorf\, Germany. Diploma in architecture 1995. Entered the class of Peter Cook and Enric Miralles at the Staedelschule Art School Frankfurt am Main in 1995\, a postgraduate class of conceptual design. Diploma in 1997. Works as a composer\, photographer and architect since. Since 1986 dealing with composition and electronic music. Since 1999: Work on spatialisation of sound. Several international performances since. Own research on Ambisonic and other spatialisation techniques. \n  \nVilbjørg Broch Phe: NoType\, algorithmic 3D audio processing\nAn immersive audio work for computer-processed voice. Spatialization and other audio processes take place through a gigantic audio effect waveguide mesh structured after the 8D hypercube. The text fragments are cutups from recent scientific publications on genetics and computational biology. \nAbout the artist\nVilbjørg Broch Phe. Born in 1967 in Denmark. Lived in the Netherlands for several decades but am now based in Denmark. Studies include dance and improvisation at the SNDO Amsterdam and voice with coloratura soprano Marianne Blok. Worked with multimedia and improvisational projects of all sorts and sizes over the past 30 years. Projects include interpretations of a wide variety of text sources. I have worked with computer music for a bit more than 20 years. The development of this has been parallel with a self-study of pure mathematics aimed at algorithmic composition and DSP. The work in spatial audio has developed thanks to working periods and residencies in places such as CCRMA Stanford\, IEM Graz\, ICST Zurich\, EMS Stockholm and NOTAM Oslo. \n  \nDaniel Gomes: Sonic Fragmentation – a fixed media multichannel piece\nThe piece explores the relationship between human and machine in artistic creation\, focusing on human decision-making in performance\, synthesis\, and media isomorphism. 
It suggests that technology and artistic performance are best unified through perceptual understanding\, balancing automated processes with human interaction. Glass and tile shards were chosen as sound objects. Though not naturally resonant\, these materials enabled exploration through vibration and human manipulation\, bridging physical objects with digital sound. Performance and improvisation were crucial in shaping the piece’s structure\, with tonality and gesture determined by the performer’s choices and technique. Controlled sound events combine live performance with partial automation. Two key algorithms shaped the digital soundscape: Chebyshev polynomials for filter design\, optimizing frequency selectivity and ripple control\, and the Sieve of Eratosthenes for prime sample intervals\, enhancing sound fidelity. Spatial reference was essential for distinguishing individual synthesis streams. The glass shard motif served as the primary interaction model\, with tile fragments helping define sound event morphology. The concept of linearity guided the overall form\, using sampling as an isomorphic representation. This framework allows various media to be projected through vector matrices across different spectra while preserving their essential characteristics and artistic integrity. \nAbout the artist\nDaniel Gomes is a Lisbon-based web developer\, fusing his passions for programming and digital art. His current focus lies in exploring computer music\, with a particular emphasis on real-time paradigms in digital media\, using sound as the primary medium for music synthesis. He holds a Master’s degree in Sonic Arts from the Sonic Arts Research Centre in Belfast. While engaged in his work and creative pursuits\, he also served as a peer reviewer for the ICMC panel. 
His musical works have been showcased in diverse locations\, ranging from Portugal to Paris (INA/GRM) and from Germany (ZKM in Karlsruhe) to international events such as ICMC 2018 in Daegu\, Korea\, and NYCEMF. Recently\, he has been delving deeper into the realms of digital arts and the aesthetics of music. \n  \nAleksandar Zecevic and Kiran Bhumber: Calling in “Raumforderungen” 8-channel diffusion work\nGerman experimental musician Sascha Stadlmeier\, founder of the Emerge label\, recorded various physical interactions and ambient room tones inside the massive gas tank at Gaswerk in Augsburg\, Germany. With his permission\, I used these recordings as source material to create a musique concrète composition titled Calling in Raumforderungen. The term Raumforderung—German for “space-occupying lesion”—is used here in a non-medical\, metaphorical sense. It refers to instances in which something occupies or asserts its presence within a given space\, whether physical\, conceptual\, or acoustic. In this composition\, that “something” is sound. Guided by a leitmotif of call-and-response\, the work explores how sound waves interact with and are shaped by the gas tank’s vast\, reverberant interior. The piece invites listeners to consider not just the sound itself\, but the space it inhabits—and how that space responds. \nAbout the artists\nAleksandar Zecevic is a sound artist\, audio designer\, electroacoustic composer\, interactive audio specialist\, and researcher. In his interactive and linear audio works\, he uses a variety of spatial audio techniques to extend sonic narratives and temporal experiences. Upon completing a music conservatory and a technical college in 1986\, he began working at Radio Television Belgrade\, Studio B\, Radio Belgrade Studio for Electronic Music\, and the Belgrade National Theatre. 
Under the mentorship of the Belgrade University Professor of Sound Design and Radiophonic Art\, Zoran Jerković\, he continued his education in the theory and praxis of sound design\, recording\, and electroacoustic art until his departure to Canada in 1992. In Canada\, he has been working as a freelance Sound Engineer\, Audio Designer\, Sound Artist\, Spatial Audio Specialist and Electroacoustic Composer on film\, television\, multimedia\, and performance projects. Aleksandar has held the following positions: 1998–2018: Artistic and Technical Senior Sound Artist and Audio Director for Interactive Audio at Electronic Arts Canada; 2020–2024: Audio Director at Archiact Inc. Presently\, he is the Audio Director at Lakshya Digital. His works have been presented at Phonurgia Nova\, MUTEK (SAT)\, Gran Prix Nova\, EPICENTROOM\, PAYSAGES | COMPOSÉS\, FESTIVAL ECOS URBANOS\, Radiophrenia\, and Radio Belgrade 3. \nKiran Bhumber \n  \nMartin Heinze: Fluidante. A quadrophonic recording from the Latent Russando framework\n“Latent Russando” is a semi-generative compositional framework written in Pure Data dedicated to exploring musical qualities in working with generative neural nets for audio\, conceived both as hybrid instruments and as autonomous actors. Practices from generative music and algorithmic composition are used as mediators between the human performer and the generative abilities of the neural nets\, displacing and circumventing concepts of authorship and genius by empowering multiple independent agents in an improvisation-driven\, co-creative process. The work is based on “Russando. Serenade for six German Sirens\, op. 43” by Hallgrímur Vilhjálmsson\, a heteronym of conceptual artist Georg Joachim Schmitt. The original piece was composed in 2008 and premiered in the context of the (also fictional) art exhibition “cologne contemporary — international art biennale 08” at Asbach-Uralt Werke in Rüdesheim. It is a three-part composition of approx. 
33 minutes in length\, in which six German emergency and police sirens are alternately sounded together or alone. In consultation with the creator\, I trained neural audio models based on two architectures (RAVE\, vschaos2\, both courtesy of IRCAM\, Paris) on the original piece. For the ICMC 2026 Music Track\, I configured the “Latent Russando” framework into a quadrophonic version employing 8 model instances (4 each of RAVE and vschaos2) with their outputs distributed over all channels. My application contains the piece “Fluidante”\, which stands as one example of a potentially infinite number of musical works that can be generated with this framework; it is the output of a joint creative act of human and artificial agents. This reflects both the conceptual genesis of Russando\, with its distributed and fictionalized authorship\, and the interplay of control and autonomy in a process that deflects claims of unique authorship and concepts of solitary genius. \nAbout the artist\nMartin Heinze is a sound artist\, composer and musician working in the field of experimental electronic music with a focus on algorithmic composition and generative neural audio synthesis. Part of his work revolves around injecting concepts of generative music and algorithmic composition into deterministically driven electronic music genres. Another practical research interest of his is integrating generative AI into creative processes in electronic music production holistically. \n  \nCristian Gabriele Argento: Odradek\nOdradek is a reflection on growth as transformation through mutation. The piece is composed exclusively from a single sonic organism: Valse Sideral (1962) by Jorge Antunes. This source was subjected to extreme AI-based cleaning processes\, not to clarify the signal\, but to harvest what was left behind: the residual noises\, the erased interference\, the discarded fragments. 
From this paradoxical gesture—a search in the margins—emerges a timbral proliferation: unstable\, shifting\, where the material fragments and regenerates into constantly changing forms. Like a living being adapting to survive\, the sound grows not by accumulation\, but through distortion\, error\, and adaptation. Far from a linear model of development\, Odradek enacts dysfunctional growth: glitch-like persistence\, a rhythm that emerges and dissolves\, an identity that loses itself to become something else. The title refers to Kafka’s enigmatic creature—an unclassifiable entity with no clear function and no end—a metaphor for a sonic lifeform that resists categorization and complete intelligibility. Ecologically\, Odradek offers a sonic metaphor for biodiversity. From a single source\, it generates an ecosystem of micro-events—competing\, overlapping\, coexisting. It is not a representation of biodiversity\, but its enactment: through the multiplication of differences\, through tension between form and disintegration. In a time when growth is often misunderstood as unchecked expansion\, Odradek explores a model of growth rooted in ambiguity\, instability\, and crisis. It grows not by colonizing space\, but by making it fertile for new perception. A sound that thrives by becoming less legible\, more complex—an auditory organism evolving through entropy. \nAbout the artist\nCristian Gabriele Argento\, Italy\, Electroacoustic composer. Born in Catania in 1998\, Cristian started to make music as a self-taught musician at the age of 14. His interest in new technologies applied to music was born in high school\, studying subjects such as electronics and computer science; during this period he took some extracurricular courses on new technologies and electronic music. After his high school studies he decided to make electronic music his future and enrolled at the Conservatory of Palermo. 
He is currently in the second year of the Master’s course in electronic music at the Conservatory of Palermo\, in the class of Giuseppe Rapisarda. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/listening-room-1-5/
LOCATION:Hamburg University of Technology\, Building A (A 0.18)\, Am Schwarzenberg-Campus 1\, Hamburg\, 21073\, Germany
CATEGORIES:16-05,Listening Room,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260516T133000
DTEND;TZID=Europe/Amsterdam:20260516T150000
DTSTAMP:20260505T121343
CREATED:20260421T163825Z
LAST-MODIFIED:20260504T085501Z
UID:10000105-1778938200-1778943600@icmc2026.ligeti-zentrum.de
SUMMARY:Lunch Concert 6A
DESCRIPTION:Concert 6A forms a bridge between the distant past and a radically digital future. It is a search for the edges of the audible—whether in the almost imperceptible silence of a saxophone\, in the raw 1-bit synthesis of early computer music pioneers\, or in the lament of the Gorgons on a reconstructed ancient instrument. \nThis Lunch Concert is open to the public. Those without a conference pass can purchase a ticket here. \n  \nProgram Overview\nApparizione del Silenzio \nYisong Piao \ntibone\nKerry Hagan \nA Hatful of Feathers \nMarc Ainger \nchime\nTiffany Skidmore and Patti Cudd \nCyanotypes\nPatti Cudd \nGorgons’ Cry\nKonstantinos Karathanasis \nStates of Water\, i. Prologue\nZouning Liao\n  \nAbout the pieces & artists\nYisong Piao: Apparizione del Silenzio  \nApparizione del Silenzio does not contain “silence” itself—at least\, not in the conventional sense of an absence of sound. Instead\, it is built upon sounds that lie at or beyond the threshold of human perception: vibrations outside the usual spectrum\, the friction between air and metal\, the dissipation of sound waves in space—those margins of sound that are ignored\, inaudible\, yet undeniably existent. The apparition of silence is therefore neither stillness nor emptiness\, but the manifestation of a presence perceived as silence. It is a non-sonic sound: at the limit of hearing\, silence ceases to signify absence and becomes another mode of existence.\nThe piece is written for tenor saxophone and electronics\, combining fixed media with live processing of hyper-amplified micro-sounds from the instrument. 
Semi-improvised passages invite the performer to enter the interstice between sound and silence\, where breath\, touch\, and hesitation become part of an almost inaudible voice.\nThe generative logic of the work is not the appearance of silence\, but its presentation: silence here is not what is conventionally called “silence\,” but a subject that reveals itself through its auditory traces. \nAbout the artist\nYisong Piao (b. 1992\, China) is a Seoul-based composer specializing in electroacoustic and instrumental music. His works have been presented at ICMC 2023 (China)\, ICMC 2024 (Korea)\, and ICMC 2025 (Boston). He is a researcher at the Center for Research in Electro-Acoustic Music and Audio (CREAMA)\, focusing on microtonality and algorithmic approaches in composition. \n  \nMiller Puckette and Kerry Hagan: tibone \nKerry Hagan presents an improvisation on 1-bit synthesizers. Rather than pursuing chip tunes or similarly low-bit music\, she navigates a range of possible timbres in an exploratory performance. \nAbout the artists\nMiller Puckette and Kerry Hagan began focused collaborations on academic and musical projects in 2014. Together their duo has performed in North America and Europe. They have introduced novel synthesis algorithms through new performances. Their work explores timbre\, spatialization\, real-time computer processes\, algorithms\, interaction design\, performance practice\, and performance systems. \n  \nMarc Ainger: A Hatful of Feathers\nIn A Hatful of Feathers for Alto Flute and Computer\, the flutist creates music in real time\, informed by expanded possibilities\, using traditional and extended techniques. 
The work builds from William Sethares’ research into spectra and tuning.\nThe computer analyzes the pitch\, amplitude\, and spectral content of the flute playing (including all of the sounds created by the mechanism of the flute\, such as the sound of the keys)\, interacting with the live sound in various ways (stretching/contracting and/or spatializing various spectra\, retuning spectra\, granulating and creating micro-glissandi\, etc.). We use a custom Max/MSP patch using some well-known spectral and spatial techniques\, along with some extensions of these techniques. \nAbout the artist\nMarc Ainger (USA) has developed an idiosyncratic body of work that embraces a wide range of music/sound and music/sound-making. He is interested in the relationships between the real and the imagined – the ways in which the visceral world of sound and sound production inform our imagined worlds of sound\, and the ways our imagined worlds\, in turn\, inform our concrete experiences.\nPerformances of Ainger’s works have included the New York Philharmonic Biennial; the INA/GRM; the Royal Danish Ballet; CBGB; Late Night with David Letterman; the Goethe Institute; the American Film Institute; SIGGRAPH; the Palais de Tokyo (Paris); FolkwangWoche NeueMusik (Essen); Gaggego! (Gothenburg); the Joyce Theater (New York); Guangdong Modern Dance; and New Circus artists. Awards include the Boulez/LA Philharmonic Composition Fellowship\, the Irino International Chamber Music Competition\, Musica Nova Prague\, Meet the Composer\, and the Esperia Foundation. \n  \nPatti Cudd: chime\nPatti Cudd performs “chime\,” for percussion and fixed media\, composed for her by Tiffany M. Skidmore. “chime” requires 2 snare drums\, 6 crotales\, 12 distinctive beaters\, and 2 wireless Bluetooth bone-conduction speakers. Each speaker is affixed to the underside of one snare drum. All 6 crotales are placed on a single drumhead. 
The performer plays a complex series of patterns moving between bare drumhead and unmoored crotales using combinations of beaters. Mechanistic\, unpitched patterns begin to merge with melodic\, pitched elements that sometimes bend to ultimately become a metallic wall of overtones as the line between electronic and live acoustic sound comes into and out of focus. This piece was premiered by Cudd at the VT New Music + Technology Festival in May 2023; ICMC represents the premiere of a revised version of the electronics and the first time Patti will use the bone-conduction speakers that were originally intended for this piece. \n“chime” happens on three planes: a long\, liquidating chiasmus meets two rotating pitch constellations. \nAbout the artists\nComposer/Associate Director of the Mizzou New Music Initiative Tiffany M. Skidmore has held faculty positions at the University of Minnesota\, Virginia Tech\, and the University at Buffalo (SUNY)\, where\, from 2023 to 2024\, she held the Birge Cary Chair in Music Composition. In 2025\, she was Visiting Professor at McGill University\, in residence at the Centre for Interdisciplinary Research in Music Media and Technology. She is Co-Founder\, Executive Director\, and Artistic Director of 113\, producing the Twin Cities New Music Festival\, guest residencies\, and concerts throughout the world. \nDr. Patti Cudd is active as a percussion soloist\, chamber musician and educator. Patti is a member of the acclaimed new music ensemble\, Zeitgeist. Her other diverse performing opportunities have included CRASH\, the Minnesota Contemporary Ensemble\, Minnesota Dance Theatre and the Borrowed Bones Dance Theater.\nAs an active performer of the music of the 21st century\, she has given concerts and master classes throughout North America\, Asia\, Europe and South America. As a percussion soloist and chamber musician\, she has premiered well over 200 new works. 
\n  \nPatti Cudd and Marc Ainger: Cyanotypes \nCyanotypes\, with their characteristic white imprints on a deep blue field\, transcend mere photographic representation; they serve as blueprints that reveal the essence of objects through their negative form. This transformative process redefines the concept of the “object\,” not as a fixed entity\, but as an echo\, a trace\, or an imprint of presence. In this conceptual framework\, cyanotypes become a metaphor for the translation of physical and temporal phenomena into abstracted impressions. Inspired by this principle\, Cyanotype’s Five Studies approaches the vibraphone not through its direct sound or physicality\, but as a series of rhythmic imprints — sonic blueprints that capture the vibraphone’s articulate and resonant characteristics. \nThe vibraphone is renowned for its shimmering sustain\, dynamic control\, and ability to produce both melodic and percussive textures. In Cyanotype’s Five Studies\, these qualities are refracted through the instrumental language itself\, emphasising the vibraphone’s unique ability to articulate rhythmic patterns with clarity and tonal nuance. This work creates a rich sonic landscape for exploring how vibraphone rhythms can be abstracted\, deconstructed\, and re-imagined as imprints within sound. \nEach study acts as a sonic cyanotype\, distilling the essential rhythmic and timbral gestures of the vibraphone into textures that evoke the original instrument’s expressive potential without relying on straightforward replication. The vibraphone’s capacity for sustained tones and nuanced dynamic shading allows for a complex rendering of rhythmic articulation\, translating percussive strikes into lingering tonal shapes. The five studies function collectively as a blueprint series—each revealing different facets of the vibraphone’s character through a process of mediation\, exploring articulation\, rhythmic complexity\, timbral contrast\, and dynamic variation. 
\nBy conceptualising the work as an imprint rather than a direct transcription\, the piece invites listeners to reconsider the relationship between source and representation. It challenges traditional notions of musical interpretation by emphasising the transformative potential of the vibraphone to embody and reinterpret its own characteristic sound patterns. The blue-white dichotomy of the cyanotype process parallels the interplay between presence and absence in sound—notes articulated and decayed\, rhythm asserted and refracted\, the physical gesture and its sonic echo. \nUltimately\, Cyanotype’s Five Studies proposes a dialogue between visual and auditory art forms\, grounded in the shared concept of imprinting. Just as the cyanotype renders the visible object in reverse contrast\, this work explores how musical objects—rhythms and timbres—can be refracted through mediation to reveal new expressive dimensions. The vibraphone becomes both subject and medium\, transforming its distinctive voice into a series of articulate\, resonant imprints\, inviting a deeper engagement with the ephemeral nature of sound and the processes of artistic representation. \nAbout the artists\nPatti Cudd is an American percussionist\, educator\, and new-music advocate. A member of Zeitgeist and a professor at the University of Wisconsin–River Falls\, she specializes in contemporary percussion\, electroacoustic music\, and commissioning new works. Cudd has performed internationally\, recorded widely\, and collaborated with leading composers to expand the modern percussion repertoire. \nElainie Lillios is an American composer whose music explores sound\, space\, and the physical experience of listening. Her works often blend acoustic instruments with electronics\, field recordings\, and subtle timbral shifts. Lillios’s music has been performed internationally and is known for its immersive\, textural quality and imaginative use of resonance and sonic detail. 
\nMarc Ainger (sound design): Marc Ainger (USA) has developed an idiosyncratic body of work that embraces a wide range of music/sound and music/sound-making. He is interested in the relationships between the real and the imagined – the ways in which the visceral world of sound and sound production inform our imagined worlds of sound\, and the ways our imagined worlds\, in turn\, inform our concrete experiences. \n  \nKonstantinos Karathanasis: Gorgons’ Cry\nThis programmatic composition is inspired by the 12th Pythian Ode\, written by the Ancient Greek poet Pindar in honor of a formidable Aulos player. When Perseus\, aided by the goddess Athena\, beheaded sleeping Medusa\, the only mortal of the three sister Gorgons\, the two immortal Gorgon sisters\, Stheno and Euryale\, woke up\, realized the crime and chased the culprit with terrible cries and laments. Athena listened to the Gorgons’ cries and created the Aulos\, a double-pipe\, double-reed wind instrument\, to imitate them.\nIn contrast to the ancient poet\, and profoundly stirred by ongoing contemporary reports of femicides\, the composer interprets this myth from a feminist perspective. Medusa is portrayed as a tragic victim of patriarchy\, and the Gorgons cry out in extreme anger\, mourning the lost beauty of their sister.\nIn modern times\, archeomusicologists study fragments or entire pieces of excavated Auloi from various sites and eras to recreate exact replicas and learn more about the sounds and performing techniques of this long-lost instrument. This piece is based on a Pydna aulos\, an instrument entombed in Macedonia\, Greece\, around the second half of the 4th century BCE. Melodic materials derive from the archaic Spondeion scale that was used to accompany certain religious processions.\nThe computer alters the aulos sound in real-time based entirely on custom combinations of variable delay and FFT algorithms\, without using any prerecorded materials. 
Gorgons’ Cry is the first composition in the modern repertory involving aulos and live electronics. \nAbout the artists\nAs an electroacoustic composer\, Konstantinos Karathanasis draws inspiration from modern poetry\, artistic cinema\, abstract painting\, mysticism\, Greek mythology\, and the writings of Carl Jung. His compositions have been performed at numerous festivals and received awards in international competitions\, including Musica Nova\, SIME\, SEAMUS/ASCAP\, Música Viva and Bourges. Recordings of his music are released by SEAMUS\, ICMA\, Musica Nova\, Innova\, Equilibrium and HELMCA. In March 2026\, Ravello Records released his solo album Resonant Mythologies with the support of the University of Oklahoma. Konstantinos holds a Ph.D. in Music Composition from the University at Buffalo. He serves as Professor of Composition & Music Technology at the University of Oklahoma. More info at: http://karathanasis.org \nCallum Armstrong is an award-winning multi-instrumentalist specializing in Early Music. For over a decade\, Callum has devoted a great deal of his time to the revival of ancient Greek and Roman auloi. He has a YouTube channel\, “The Aulos Collective”\, which is dedicated to how auloi were made\, played\, and used\, in collaboration with the luthier Max Brumberg. Callum regularly performs internationally as a soloist\, in various ensembles\, and works as a composer\, teacher and session musician for film and computer games. Recently Callum was the subject of the documentary ‘Callum Armstrong the Aulete’\, which won 1st prize at the Ierapetra International Film Festival. \n  \nZouning Liao: States of Water\, i. Prologue\nStates of Water is composed for fixed electronics and video by Zouning Anne Liao. Prologue\, the opening movement of this work\, invites the listener into an immersive and magnified world—one in which the familiar substance of water becomes both material and metaphor. 
While the piece is rooted in the observable states of water\, it approaches them in an abstract and imaginative way: not as literal depictions\, but as points of departure from which sound and image can drift\, distort\, and transform. States of Water was commissioned by Bowdoin College’s Center for Experimental Media Arts (CEMA). The video is designed specifically for CEMA’s state-of-the-art 180-degree curved projection screen\, with a resolution of 5000 × 1200. \nAbout the artist\nBorn in Guangdong\, China\, Zouning Liao is a composer and sound designer whose music reflects her fascination with nature\, malfunctioning machines\, distorted noises\, and the interplay between refined and raw timbres. Driven by a curiosity about the expressive potential of electronic circuits\, she is passionate about DIY electronics\, building her own sensor instruments to explore new sonic possibilities shaped by physical gestures. Her compositions have been showcased across the United States\, Europe\, and China. In 2025 her works have been featured in festivals and conferences such as Sonic Pavilion Festival\, IRCAM ManiFeste\, ICMC\, Digital Dialogue workshop at the IMPULS Academy\, and SPLICE Festival. Now based in Chicago\, Zouning is in her second year of a PhD program in Music Composition and Technology at Northwestern University. Outside of her academic work\, she can often be found in the woods capturing field recordings or at the post office\, collecting stamps. \n 
URL:http://icmc2026.ligeti-zentrum.de/event/lunch-concert-6a/
LOCATION:Hamburg University of Technology\, Building I\, Audimax 2\, Denickestraße 22\, Hamburg\, 21073\, Germany
CATEGORIES:16-05,Concert,Music
ORGANIZER;CN="ICMC HAMBURG 2026":MAILTO:info@icmc2026.ligeti-zentrum.de
END:VEVENT
END:VCALENDAR