Club Concert 3C
Concert 3C is an exploration of the boundaries of collective improvisation and creative technology. The SPIIC Ensemble of the HfMT Hamburg presents a program in which the audience has a say, algorithms extend historical works, and artificial intelligence reinterprets human movement as a “hallucination.”
In the industrial atmosphere of the Speicher am Kaufhauskanal, acoustic instruments merge with live coding, neural synthesis, and interactive notation.
Program Overview
Liquid Tensioning
Fernando Egido
Sinophony for Clarence
Juan Arturo Parra Cancino
Chimerique
Jonathan Wilson
NEBULA
Enrique Tomás and Moisés Horta Valenzuela
plastique
Se-Lien Chuang and Andreas Weixler
Shamanic Protocol
Oscar Corpo
A Walk in Polygon Field
Rob Canning
DEPRECATED
Denis Polec
About the pieces & artists
Fernando Egido: Liquid Tensioning
Liquid Tensioning is a work for violin and double clarinet, live notation, live generative system, live electronics, and audience participation (category: improvised work for ensemble and electronics, SPIIC+ Ensemble). It is a collaborative, interactive piece created in real time through its own self-evaluation: the audience rates the work via a web app, and the musical generative system adapts to that evaluation as the performance unfolds. The musicians receive their notes through a live notation system on their mobile phones. The title refers to the model of tensioning provided by the generative system, a musical tension that is independent of the properties of the musical material itself. The piece belongs to a series of works in which the composer builds a self-referential generative system based on the real-time evaluation of the work; the main musical material of this work is its own evaluation. Duration: about 10 minutes.
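The feedback loop described above (audience ratings in, generative parameters out, notation to the players' phones) could be imagined roughly as follows. This is a purely hypothetical sketch, not the composer's system; the class, the rating window, and the tension-to-density mapping are all invented for illustration.

```python
# Hypothetical sketch only: audience evaluations drive a "tension" parameter,
# which in turn shapes what the generative system sends to the notation clients.
from collections import deque

class EvaluationDrivenGenerator:
    def __init__(self, window=20):
        # Keep only the most recent web-app ratings, each in the range 0..1.
        self.ratings = deque(maxlen=window)

    def submit_rating(self, value):
        self.ratings.append(max(0.0, min(1.0, value)))

    @property
    def tension(self):
        # Tension tracks the mean recent evaluation, independent of the
        # musical material itself (as the programme note describes).
        return sum(self.ratings) / len(self.ratings) if self.ratings else 0.5

    def next_density(self):
        # Map tension to a note count per bar for the live-notation display.
        return round(1 + self.tension * 7)

gen = EvaluationDrivenGenerator()
for r in (0.2, 0.9, 0.7):
    gen.submit_rating(r)
```

With the three ratings above, the mean evaluation is 0.6 and the generator would request five notes in the next bar; in a real system the mapping would of course cover far more parameters than density.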
About the artist
He studied composition with José Luis de Delás at the School of Music of the University of Alcalá de Henares and trained in workshops with composers, analysts, and performers associated with the LIEM and the GCAC. He studied computer music with Emiliano del Cerro.
He has published several papers at international conferences.
His works have been performed at festivals including ICMC (2023–2025), the Bled International Festival, the SMC Conference in Graz, Convergence Festival, Ars Electronica Linz, Atemporánea Festival, the AIMC 2022 conference, EVO 2021, the OUA Electroacoustic Music Festival 2020, ISMIR 2020 in Montreal, the Seoul International Electroacoustic Music Festival 2019, the ACMC 2019 conference in Melbourne, the SID 2015 conference in New York, Venice Vending Machine III, the New York City Electroacoustic Music Festival, JIEN at the Auditorio 400, La hora acusmática, SMASH Festival, the Encontres Festival in Palma de Mallorca, and ACA.
Juan Arturo Parra Cancino: Sinophony for Clarence
Sinophony for Clarence is an ensemble and live electronics work inspired by the formal and sonic principles of Clarence Barlow’s Sinophony I (1970), his first electronic composition. Rather than functioning as an arrangement or transcription, this piece operates as an instrumental extension of Barlow’s electronic sound world, translating and reactivating its core materials through acoustic performance and real-time electronic processes.
The work seeks to bring into the physical space of performance elements that, in Sinophony I, exist only in fixed media: continuous tones, slow harmonic transformations, beating frequencies, and the perceptual tension between purity and instability. These characteristics are reimagined here as a living, performative situation, where instrumental sound and electronics merge into a single, evolving spectral body.
Sinophony for Clarence builds on methods developed by Juan Parra Cancino to extract performative salients from early electronic works—elements that can be embodied, negotiated, and reshaped by performers in real time. Through this approach, the piece revisits historical electronic material not as an object to be preserved unchanged, but as a dynamic field for exploration, experimentation, and renewed artistic engagement. The aim is not reconstruction, but continuation: to recover underlying processes and extend their implications into contemporary performance practice.
By situating acoustic instruments, live electronics, and spatialized sound within a shared listening ecology, the work foregrounds collective tuning, timbral fusion, and emergent beating phenomena as central musical forces. The ensemble functions less as a group of independent voices than as a composite oscillator, shaped by subtle interactions and shared attention.
This piece is conceived as a tribute to Clarence Barlow—composer, educator, and friend—honoring both his pioneering contributions to electronic music and his enduring influence on ways of thinking about sound, structure, and musical intelligence.
About the artist
Juan Parra Cancino studied Composition at the Catholic University of Chile and Sonology at the Royal Conservatoire The Hague, where he completed a Master’s degree in electronic music. He received a PhD from Leiden University in 2014 on performance practice in computer music. A guitarist trained in Robert Fripp’s Guitar Craft, he has worked extensively in live electronics. He is a researcher at the Orpheus Institute and Regional Director for Europe of the International Computer Music Association (2022–26).
Jonathan Wilson: Chimerique
“Chimerique” explores the interaction of music and language. Written and premiered in 2017, the composition is for an ensemble featuring improvisation, narration, and electronics. It was realized in collaboration with poet and translator Patricia Hartland, incorporating her English translation of Raphaël Confiant’s “Ravines of Early Morning” into a musical setting. The title is taken from a word in this text: it is French for “chimerical,” meaning either something that takes delight in illusions, or something utopian or unreal. The narrator forms associations with this word through phrases and passages drawn from the part of the story in which the description of “chimerique” is elaborated.

Throughout the performance, the performers listen and react to the text spoken by the narrator (and electronics). They are accompanied by electronics consisting of fixed media and live electronics from two Max/MSP patches, one using additive synthesis and the other granular synthesis; the musical instruments are the source material for the granular synthesis. The score uses hybrid notation: traditional notation for pitch alongside graphic notation that the performers interpret, together with the spoken phrases, to determine volume, pitch, rhythm, articulation, and contour, making improvisation a necessity.

The narrator and performers together generate a spontaneously formed, through-composed work that marries text and music. The form is through-composed in six sections. In the first section the performers respond only to a single phrase; in sections 2–6 they respond not only to the phrases that delineate each section but also to extended narration, shifting through descriptions of dreams, the night, madness, illusions, and, at the end, the act of dreaming itself.
About the artist
Dr. Jonathan Wilson’s works have been performed at the Ann Arbor Film Festival, European Media Art Festival, ICMC, SICMF, SEAMUS, NYCEMF, MUSELAB, NSEME, Napoleon Electronic Music Festival, Iowa Music Teachers Association State Conference, and Midwest Composers Symposium. He won the 2014 Iowa Music Teachers Association Composition Competition. Jonathan has studied composition with Lawrence Fritts, Josh Levine, David Gompper, James Romig, James Caldwell, Paul Paccione, and John Cooper, and conducting with Richard Hughey and Mike Fansler. He is a member of the Society of Composers, Inc., SEAMUS, ICMA, and the Iowa Composers Forum.
Enrique Tomás and Moisés Horta Valenzuela: NEBULA
Artists working with deep-learning audio models often find that exploring their high-dimensional latent spaces requires chance-based, combinatorial, or technically complex machine-learning techniques. While these approaches can reveal unexpected possibilities, they also make it more difficult to deliberately guide the models toward outcomes that are musically meaningful or aligned with specific creative intentions.
In this improvisation for solo instrument and two performers on live electronics, we present an alternative approach that makes latent space exploration more interpretable and musically guided. The approach applies Principal Component Analysis (PCA) to pre-encoded RAVE (Realtime Audio Variational Autoencoder) representations, reorganizing the latent data into clusters that can be navigated more deliberately in performance. PCA groups the encoded data into clusters based on shared timbral characteristics, producing data clouds directly connected to the sonic properties of the source material. By structuring access to the latent space in this way, our method bridges the gap between open-ended exploration and purposeful control, offering performers a clearer and more intuitive means of shaping sound.
To prepare the improvisation, and prior to the concert, the solo instrumentalist provides an eight-minute recording that defines the sonic domain of the performance. This recording is encoded and analyzed, restricting exploration to regions of the latent space shaped by the performer’s own material and giving the electronic musicians a more focused and musically coherent landscape to navigate. During the live performance, the solo instrumentalist and the two electronic performers interact within this PCA-organized timbral map. Their trajectories through the latent space—along with the evolving clusters and sonic transformations—are projected in real time, allowing the audience to see how latent-space navigation corresponds to audible change.
The musical materials resulting from this setup combine structured instrumental improvisation with electronically generated textures derived from latent-space navigation. While the overall form is left to real-time decisions between the soloist and the live performers, the resulting sound world often alternates between rhythmically driven motifs—loosely recalling the interactive dynamics of small jazz ensembles—and more abstract electronic layers shaped through PCA-guided trajectories. These electronic textures, produced by traversing clustered regions of the latent space, serve as harmonically and timbrally evolving fields against which the soloist can articulate phrasing, gesture, and dynamic contour. The custom-built performance interfaces allow the electronic performers to shape these materials with precision, enabling a responsive interplay in which acoustic action and machine-learned transformations continually inform one another.
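The pipeline sketched above (encode a recording, project the latent frames with PCA, then let a controller position select nearby frames) can be outlined in a few lines. This is a minimal illustration with synthetic data standing in for RAVE encodings, not the performers' actual patch; the frame counts, dimensions, and nearest-frame mapping are assumptions for the sketch.

```python
# Minimal PCA-navigation sketch: project "latent frames" onto two principal
# axes so that a 2D controller position can select timbrally similar frames.
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for RAVE-encoded frames of an 8-minute recording:
# 500 frames x 16 latent dimensions.
latents = rng.normal(size=(500, 16))

# PCA via SVD on the mean-centered data.
centered = latents - latents.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ Vt[:2].T   # each frame's position on the first two axes

def nearest_frame(x, y):
    """Return the index of the frame whose projection lies closest to (x, y)."""
    return int(np.argmin(np.linalg.norm(coords - [x, y], axis=1)))

idx = nearest_frame(0.0, 0.0)
```

In performance, `coords` would be what gets visualized for the audience, and the selected frame's latent vector would be decoded back to audio by the RAVE model; frames that encode similar timbres land close together, which is what makes the navigation deliberate rather than random.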
About the artists
Enrique Tomás (*1981) is a sound artist, researcher, and assistant professor at the Tangible Music Lab, dedicated to finding new ways of expression and play with sound, art, and technology. His work explores the intersection of sound art, computer music, locative media, and human-machine interaction.
As an individual artist, Tomás’ activity is centered around ultranoise.es and focuses on performances and installations with extreme and immersive sounds and environments. He has exhibited and performed at Ars Electronica, Sonar, CTM, IRCAM, IEM, KUMU, SMAK, NOVARS, STEIM, Steirischer Herbst, and Alte Schmiede, among others, as well as in galleries and institutions throughout Europe and Latin America.
Moisés Horta Valenzuela is a self-taught sound artist, technologist, musician, and researcher from Tijuana, Mexico, based in Berlin. His work spans computer music, neural audio synthesis, conversational AI, and the politics of emerging technologies, approached through a critical lens that connects ancestral knowledge with contemporary digital culture. He has presented work internationally at Ars Electronica, NeurIPS ML for Creativity & Design, MUTEK México, MUTEK AI Art Lab Montréal, Transart Festival, CTM Festival, Elektron Musik Studion, and the Sound and Music Computing Conference, among others.
Se-Lien Chuang and Andreas Weixler: plastique
interactive audiovisual comprovisation for e-guitar, green leaves & i-hands – GLISS – Green Leaves Imaginary Scenic Score
Duration: ca. 8 min
About the artists
Andreas Weixler, born 1963 in Graz, Austria, is a composer of computer music with an emphasis on intermedia real-time processing. He teaches at the mdw Vienna and at Interface Cultures in Linz, and serves as associate university professor at the CMS – Computer Music Studio of the Anton Bruckner University in Linz, where he initiated the intermedia concert hall, the Sonic Lab. He studied contemporary composition at the KUG in Graz, Austria, completing his diploma with Beat Furrer, followed by international projects and residencies.
Se-Lien Chuang is a composer born in Taiwan in 1965 and based in Austria since 1991. Her work focuses on contemporary instrumental composition and improvisation, computer music, and audiovisual interactivity. She has presented works and lectures internationally in Europe, Asia, and the Americas at events such as ICMC, ISEA, and NIME. From 2016 to 2019, she taught for the Computer Music Studio at Bruckner University Linz. Since 1996, she has co-run Atelier Avant Austria, specializing in audiovisual interactive systems, real-time processing and computer music.
Oscar Corpo: Shamanic Protocol
Shamanic Protocol is an online sound ritual performed by a partially damaged virtual entity. Its memory is an incomplete and corrupted archive, composed of residual sonic materials related to shamanic rituals, music therapy, sound-based healing practices, and data derived from musical epigenetics. Reshaped by the available data and the presence of connected users, these fragments are reprocessed and reorganised each time the system is accessed, generating a sonic ritual that follows a recognisable structure yet never manifests in the same way twice.
The sound ritual has no declared purpose: it remains unclear whether the entity performs the rite as an attempt to repair itself, an act of archive restoration, a process meant to affect human listeners, or simply because this process constitutes its way of operating. The variability of the outcome may suggest either a gradual recovery or a progressive deterioration of the system. The resulting sonic output exists in a space between therapeutic effect, system malfunction, and autonomous algorithmic process. The shifts between fragile calm, overload, interruption, and recovery reveal the instability of the system that generates it. No clear boundary is drawn between healing, malfunction, or expression: these states coexist and remain indistinguishable within the process.
The rite can be experienced as a purely electronic process, or human performers, in any instrumental or vocal configuration, may take part in its enactment. Musicians are invited to participate in the ritual rather than interpret a fixed musical text. Guided by an open, interpretative score, performers do not execute predefined material but engage in the ritual itself, interacting with the electronic layer by listening, responding, and aligning their gestures with the evolving sonic environment. The notation offers indications of behaviour, density, register, and gesture rather than prescribed material; in this way, performers take part in the rite by freely amplifying, refracting, and destabilising the entity’s activity. The score prescribes no precise instrumentation or techniques; in this instance, the ritual is performed with a string ensemble alongside soprano saxophone, bass clarinet, piano, and percussion.
Performers do not guide the system, nor do they follow it; instead, they remain in a state of attentive coexistence with its unfolding behaviour. Each performance is therefore situated, shaped by specific conditions, configurations, and presences.
The process does not call for interpretation: repair and damage are no longer separable; function and meaning no longer distinguishable.
About the artist
Oscar Corpo (born 8 April 1997, Naples, Italy) is an Italian composer based in Hamburg. He studied Composition and Multimedia Composition in Naples, and is now a PhD candidate at the HfMT Hamburg, focusing on AI and collective improvisation with Ensemble 404. His work spans electronic, instrumental, vocal, improvisation, and music theatre. He has collaborated with Alexander Schubert, Berliner Philharmoniker, La Biennale di Venezia, and Lux Nova Duo, among others.
Rob Canning: A Walk in Polygon Field
A Walk in Polygon Field is a graphic score environment for controlled improvisation, composed for 1–4 instrumentalists with electronics and surround diffusion. Three polygons—pentagon, hexagon, heptagon—rotate at different rates, producing polymetric phase relationships (5-against-6-against-7). Performers activate objects orbiting these shapes, interpreting compound visual motion as sonic material. An outer ring generates OSC data driving spatial processing.
The score defines states, behaviours, and constraints; performers negotiate what these structures sound like. Each polygon side represents a discrete performance state—pitch region, articulation, texture—but specific mappings remain open. Musicians enter and withdraw from a shared texture whose density and pacing emerge from collective decision-making.
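The 5-against-6-against-7 relationship arises because the three polygons present five, six, and seven sides per rotation. A small sketch (illustrative only, not part of the score's runtime; the tick grid is an assumption) shows how the side-change events interlock and when the full cycle realigns.

```python
# Illustrative polymetric sketch: three polygons rotating at the same speed
# produce side-change events in a 5:6:7 relationship that realigns only
# once per full cycle.
from math import lcm

sides = [5, 6, 7]            # pentagon, hexagon, heptagon
cycle = lcm(*sides)          # ticks before all three polygons realign

# Ticks at which each polygon presents a new side (one vertex crossing
# every cycle/n ticks for an n-sided polygon).
events = {n: {t for t in range(cycle) if t % (cycle // n) == 0} for n in sides}

# Ticks where all three polygons change side simultaneously.
together = set.intersection(*events.values())
```

Over one cycle of 210 ticks the pentagon changes side 5 times, the hexagon 6, and the heptagon 7, and all three coincide only at tick 0, which is why the compound visual motion never quite repeats until the whole cycle comes around.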
Authored entirely in SVG, the work embeds performance semantics directly into visual element identifiers, executed by a browser-based runtime on networked tablets. This approach, detailed in the accompanying paper “Scores That Run: Graphic Notation with Embedded Performance Semantics,” demonstrates how open web standards support animated notation without specialised infrastructure. Each performance traces a different route—music negotiated through shared encounter with a moving score.
A full guide to interpretation, programme notes, and supporting materials, including a SuperCollider live-electronics patch, are available online:
https://robcanning.github.io/oscilla/compositions/polygonfield2026/
About the artist
Rob Canning (Dublin, 1974) is a composer, improviser, and creative technologist whose work explores animated notation, improvisation, and the dynamics of networked musical systems. He holds a PhD in composition from Goldsmiths, University of London, where his research examined distributed authorship in computer-assisted music. A long-time advocate of Free and Open Source Software, he develops Oscilla, an open-source platform for animated graphic notation and networked performance.
Denis Polec: DEPRECATED
DEPRECATED establishes a recursive feedback loop between a biological subject and a cluster of interpretative algorithms. The work investigates the friction between human indeterminacy and machine determinism.
The Setup
A lone performer occupies the center of the stage, stripped of traditional instrumentation. Facing them is a “panopticon” of sensors: computer vision cameras and open microphones. The human subject oscillates between legible behavior and “abnormal” states—engaging in erratic gestures, non-semantic vocalizations, and visceral spasms designed to evade learned pattern recognition.
The Process
Simultaneously, three isolated AI instances dissect this input in real time. Unable to process the chaotic reality of the “Now,” the systems hallucinate: computer vision misinterprets trauma as choreography; a large language model forces these errors into a coherent narrative; and neural audio synthesis re-synthesizes the fabrication into sterilized perfection.
About the artist
Denis Polec operates at the intersection of sound art and algorithmic criticism. His practice rejects the notion of human-machine collaboration, focusing instead on the friction, latency, and inherent violence of predictive systems. Polec constructs adversarial performance systems that expose the limitations of neural networks when confronted with the chaotic reality of the biological body.
