Creative Machines: Technology and Collaborative Practice in Contemporary Music
Biographies & Abstracts, in order of presentation
Nina Whiteman
Royal Holloway, University of London
Nina Whiteman is a composer, multimedia artist and vocalist based in Manchester, UK. Her projects in 2022–24 have focussed on technology's interaction with the natural environment and ecological concerns, leading to a cycle of multimedia works exploring AI and machine learning titled The Cybird Trilogy (supported by the Cyborg Soloists project at Royal Holloway, and by PRiSM, RNCM). In 2023, Nina designed and led the interdisciplinary collaborative project BELOW GROUND, developing multimedia, multisensory responses to notions of the subterranean (funded by Arts Council England). Nina lectures at the RNCM and is Reader in Composition at Royal Holloway, University of London.
Sonic gardening: Community workshops as a testbed for performative technologies
BELOW GROUND is an interdisciplinary collaborative project rooted in ideas of the subterranean. In 2023, ‘Sonic gardening’ workshops in the community developed uses of music technology relevant to the underground theme. In a current context where research impact is high on the agenda, and Arts Council England are focussing on ‘cultural communities’ and ‘creative people’ as core outcomes, creative researchers are increasingly called upon to involve local people meaningfully. We asked how work in communities can become part of the creative research process, rather than a place where outputs from a process are simply shared. We considered which technologies are most appropriate for the setting, and how our findings might inform future multimedia performances. This paper investigates how insights concerning the technologies’ affordances were unearthed through playful interactions in a workshop setting, and in turn how this new knowledge led to inventive chamber music interactions in Earthed (2023–24; for piano and multimedia). The project eventually focussed on the Playtron MIDI device and a DIY electronics circuit using soil as a variable resistor; the potential of these touch-based technologies was investigated experimentally by participants of all ages alongside artists David Birchall, Jackie Haynes and Nina Whiteman.
Ned Barker, Joana Burd, Nikolas Gomes and Jambu
University College London, Universitat de Barcelona, independent artist-researcher
Ned Barker is a sociologist of technology and the body based at the UCL Knowledge Lab. He is currently PI of ‘Biohybrid Bodies: a sociological framework for living with Living Machines’, funded by the Leverhulme Trust. As an experienced ethnographer Ned combines sensory, creative, and collaborative methods to explore the complex relations between body, technology, and society.
Joana Burd is an artist, educator and researcher of haptic aesthetics and sound art. She is also currently writing up her PhD research at the Universitat de Barcelona, where she also works as a lecturer of Sound Art.
Nikolas Gomes is a sound artist and musician based in Lisbon. His work deals mostly with the intersection of musical production and sound art, looking to push the boundaries between these two fields.
Jambu is a Brazilian flower. In English-speaking countries these flowers are also known as electric daisies or buzz buttons because, upon consumption, they cause the mouth to vibrate. Jambu can be infused into cachaça, and has a strong presence in Brazilian culture and nightlife. Jamburana is a popular song about the plant’s renowned “tremor”.
Synthesizing vitalities, sonifying secrets
Inspired by new hybrid forms appearing on the horizon, a sociologist and an artist started an undefined collaboration. Robots made with biological tissues grown in petri dishes. Neural links reading and adapting our feelings and thoughts. Hybridity changing life as we know it. The inquisitive pair began blending their imaginations, making fictional biohybrid entities while playing with the senses, hoping to unsettle perceived boundaries between non/living matter. They became increasingly curious about vibration, and its vitality, as their multi-sensory art series Living Capsules evolved.
Biohybrid Buzz was born. In this talk we reflect on how the Jambu flower, an Amazonian plant known for vibrating the mouth, became co-composer of a deep bass sound piece that brings this performance to life. We invited the Jambu to reveal secrets about itself, its vibrancy, and vitality: what do your cells look like? what chemicals are you made from? With help from chemical engineers and microscopy scientists, these little ‘Buzz Buttons’ laid their code bare. We sonified these data, these life secrets, with sound artist Nikolas Gomes, who developed an open-source software tool for data sonification. The more-than-human performance vibrated audiences’ mouths and bodies in tandem, while an audio-visual narrative introduced the fictional biohybrid robot.
Leon Michener
No institutional affiliation
Leon Michener is a pianist and composer whose work primarily seeks to find new methods of interaction between piano performance and electronic processing. He earned his degree from Trinity College of Music and recently a PhD from Surrey University. His debut recording for the FMR record label juxtaposed 20th-century classical repertoire with corresponding extemporisations, and he has worked with the London Improvisers Orchestra, recording an improvised piano concerto. As Klavikon, he has played worldwide, performing live electronic music made without the sequencer, all sounds being generated by an amplified prepared piano. An album chronicling this work was released on the Nonclassical record label.
Towards the LED: Didactic feedback loops as a generative musical function in bio-electronic collaborations
Building on the work of David Tudor, Noah Creshevsky, and Detroit dance music, and referencing contemporary practitioners such as Deantoni Parks and digital artist Patricia Taxxon, this paper examines the concept of collaborating with machines. It is suggested that a distinction can be made between interaction and a true collaborative process involving shared commonalities. A method is described that attempts such a collaboration utilising improvisation, instrumental technique, and electronic processing, illustrated by the author’s adoption of machine-made musical ideas through the deconstruction of jazz standards and the creation of hybrid bio-digital techno on the amplified prepared piano and clavichord. By practising certain techniques and exercises, mechanical traits are absorbed, allowing the computer to be approached on its own terms. Furthermore, by relearning and performing processed material, a didactic feedback loop is generated with the computer which exists at the conceptual/mental level as well as the bio-mechanical. The results suggest that engaging the inhuman at both a physical and an aesthetic, mental level enhances musical expression, allowing a deep level of interaction with electronic mediums. This challenges many commonly held notions of musical expression, and its value, particularly among instrumentalists and traditional classical musicianship, impacting not only music education but notions of humanness and identity.
Tonia Ko and Adam Ganz
Royal Holloway, University of London
Composer Tonia Ko was born in Hong Kong and raised in Honolulu, Hawai’i. She earned a DMA from Cornell University and is currently Lecturer in Composition at Royal Holloway, University of London. Her creative practice follows aural, visual, and tactile instincts in a holistic way, writing for leading ensembles and soloists with and without electronics. Recipient of a 2018 Guggenheim Fellowship, she has been commissioned by institutions such as Carnegie Hall, Koussevitzky Foundation, and Chamber Music America. As a free improviser on air packaging, she has performed at Cafe OTO, Hundred Years Gallery, and the Chicago Ear Taxi Festival.
Adam Ganz is Professor of Screenwriting at Royal Holloway, University of London. He writes for radio, film and television, alongside his work on Felix’s Room. He is Co-Investigator at the CoSTAR National Lab, which brings together world leaders in applied technology research and story. He was previously Head of Writers Room at StoryFutures. His research interests focus on audiovisual narrative, worldbuilding and performance. He co-authored (with Steven Price) the monograph Robert De Niro at Work: From Screenplay to Screen Performance (Palgrave, 2020). His most recent screenplay is This Blessed Plot (2023), directed by Marc Isaacs.
Felix’s Room
Felix’s Room is a musical theatre piece written by Adam Ganz and co-directed by him with ScanLAB Projects, which premiered at the Berliner Ensemble in collaboration with the Komische Oper Berlin in June 2023. The piece used ScanLAB’s pioneering Lidar scanning technologies to recreate the room in the Kaiserstrasse in Mainz where Felix Ganz, Adam’s great-grandfather, lived with his wife Erna before their deportation. The spatial storytelling centred around this projected space, rendered from traces of their lives in sketches and letters, which was three-dimensional, mutable, interactive... but virtual. The only real object on the stage was a Baroque chest of drawers which survived from the original room, is now held in the Landesmuseum in Mainz, and was brought to Berlin for the production. Tonia Ko’s original music for soprano, chamber ensemble, and this amplified chest of drawers was informed by the hybridity inherent in ScanLAB’s approach and relied on close collaboration with a co-composer and sound designer to explore notions of presence and absence in the space – and the possibilities of music in remembering and forgetting. This approach to composition reinterprets the found object as creative machine and offers a unique perspective on performer interaction with the sound-making object.
An-Ting and Ian Gallagher
Independent artist-researchers
An-Ting and Ian Gallagher have a history of co-creating cutting-edge arts and technology works. Their projects have garnered recognition, including Arts Council England’s Digital Culture Award for Storytelling (every dollar is a soldier/with money you’re a dragon). HOME X premiered at the Barbican, London, co-commissioned by York Theatre Royal, Cambridge Junction, Oxford Contemporary Music, The Space and StoryFutures, and was nominated for the UK Theatre Awards’ Digital Innovation award.
An-Ting is a musician and composer, was Artistic Director/CEO of the NPO Kakilang (2018 to 2023), and holds a PhD in performance practice from the Royal Academy of Music. Her works have been presented and commissioned by renowned venues including the Barbican, Southbank Centre, LSO and National Theatre Taiwan.
Since completing his PhD in physics at Manchester University, Ian has been involved in many app projects and startups. More recently he has had great success in bringing digital technology to live performance, both online and at the Barbican.
Bridging two worlds: Collaboration between live music performance and technology
An-Ting, a musician, composer and director, and Ian Gallagher, a creative technologist, collaborate to create cutting-edge performances that challenge how technology can bring new expression to live performance. In this presentation, they demonstrate the innovative approaches used in their collaboration on HOME X at the Barbican (2023), a simultaneous in-person and online performance in which performers in Hong Kong and the UK were captured in 3D and streamed live into a 3D world populated by an online gaming audience, then projected on stage for the Barbican audience to experience. Ian developed the technology that enabled the performers to see and interact with each other in the 3D world with low latency, and coded the gaming performance; An-Ting composed 3D interactive music, including a duo for soprano and live electronics performed across a distance of over 5,000 miles that makes the inevitable latency part of the composition. Following HOME X, they are currently developing and performing their new project, Lost Communications, in which they perform new music based on birdsong recorded in the UK, Mongolia, China, Hong Kong and Taiwan, with live AI visuals attempting to communicate some of the mind-bending experiences they have had while exploring nature’s communications in the Peruvian jungle. When AI learns about the world, it builds up an internal model of it that must somehow reflect reality. Pushing and probing the poor thing out of its normal range of parameters might tell us something about reality we didn’t already realise.
Amble Skuse and Bosko Begović
University of Plymouth, independent artist-researcher
Amble Skuse is a composer and sound artist who uses disability theory, body sensors, spoken word interviews and electronics to create unique sound works. She is interested in the interface between the disabled body and the exterior world. Amble recently received a Special Commendation at the Daphne Oram Award, and was selected as Scotland’s representative for the International Society for Contemporary Music Festival 2024.
Bosko Begović is a conceptual artist. An expert in martial arts and movement, he works as a performance artist and choreographer. His methodology draws on the philosophy of Slavoj Žižek, and his reading of Hegel through the theoretical tools of Jacques Lacan.
Interdependent intersections: Generative music using body sensors on a movement artist
Our paper details our practice of creating generative soundscapes using body sensors on movement artist Bosko Begović. Amble Skuse creates a soundworld using an Ableton patch and connects various processing parameters to the outputs from the body sensors. As Bosko moves in the space, he explores a physical world which has invisible audio within it. In this way Bosko’s body becomes capable of composition through movement. The use of body sensors connects our practices and we are interdependent in the creation of the work. The sound begins to influence Bosko’s movement as he explores the different parameters of sound. Our two practices are completely changed through the process of collaboration. By using these body sensors we unlock a new dynamic of interrelation, interdependence and a unique form of creativity. The work sets out a new vision of how disability, working through technology, can reshape the normative. As we move towards an ever more integrated technological humanity, we explore how this affects bodies which are non-normative. We work with movement and body sensors, sound and video art to explore how we connect with, and depend on, each other and the technology around us. Is technology a humanising assistance or a way of separating and isolating us? When does personal freedom become isolation?
Megan Steinberg
Royal Northern College of Music
Megan Steinberg is an experimental composer and abstract turntablist. She is a PhD student at Royal Northern College of Music, where she is the Lucy Hale Doctoral Composer in Association with Drake Music. Her research is focused on the creation of works for Disabled musicians, new instruments and AI. Megan has composed for performers including Riot Ensemble, Kathryn Williams, Heather Roche, Juice Vocal Ensemble, Distractfold, Apartment House and Loré Lixenberg. In 2016, she was awarded the FI Williams Prize for Composition. In 2017, she was Composer in Residence at the Royal Holloway Picture Gallery. In 2022, she was an Artist in Residence at Huddersfield Contemporary Music Festival. In 2023, she was Composer in Residence with CoMA Ensemble and Composer Fellow at NEO Voice Festival, LA. In 2024, she was listed by the BRIT Awards as a trailblazer in music, inclusion and accessibility.
Ableism in Artificial Intelligence: By human design
How are we designing AI datasets and are they representative of the global population? When put into practice, neural network-based systems contain biases that put a magnifying glass up to our very human prejudices. As we begin to rely more and more on these technologies, are disabled, d/Deaf, neurodivergent people and people with long-term health conditions being discriminated against or even put in harm’s way? Ableism exists in many different types of Artificial Intelligence systems in use today, and despite the potential for AI to support the lives and work of disabled, d/Deaf and neurodivergent people, we must consider different models of disability and whether these technologies seek not to support but ‘fix’ or ‘normalise’ people. And, finally, how can art and music play a part in the development of fair and ethical AI systems?
Edmund Hunt
Royal Birmingham Conservatoire
Edmund Hunt is a composer of vocal, instrumental and electroacoustic work. As an undergraduate, he studied early medieval languages, literature and philology, and much of his music explores early poetry. From 2020–2023, he was lead composer and co-investigator on Augmented Vocality: Recomposing the Sounds of Early Irish and Old Norse, an AHRC-funded research project. Edmund’s music has been broadcast on BBC Radio 3, and performed by groups including the London Philharmonic Orchestra, BCMG, Das Neue Ensemble, and Hard Rain Soloist Ensemble. Since 2018, he has been based at Royal Birmingham Conservatoire, where he is a researcher and lecturer.
Practice-based collaborative research and the digital humanities
Technology can play a valuable role in facilitating collaboration between practice-based artistic research and the digital humanities. Online databases, archives and datasets sometimes provide source materials for artistic creation via technology-driven processes such as sonification. However, more imaginative artistic outputs can be achieved when cross-disciplinary collaboration is designed as an integral strand of a research process, rather than something that occurs only at the end of a project. When technology provides a point of contact or shared language for researchers from different fields, it can become more feasible to develop collaborative methodologies for practice-based research. This paper draws on the experience of technologically-mediated collaboration during Augmented Vocality (2020–23), a cross-disciplinary, practice-based investigation involving researchers from Royal Birmingham Conservatoire and the University of Cambridge. Following the creation of a digital audio archive of texts, words and phonemes, subsequent research outputs included tools for vocal processing and musical composition, and new music for singers, ensemble and live electronics. Although Augmented Vocality was primarily a music project, the combination of technology and cross-disciplinary research provided useful starting-points regarding the development of practice-based methodologies. Drawing on the collaborative approaches of Augmented Vocality, this paper considers how similar ideas might provide a model for practice-based artistic research projects involving new technology and the digital humanities.
Martin Suckling
University of York
Martin Suckling is a composer and violinist; he is also Deputy Head of the School of Arts and Creative Technologies and Head of Music at the University of York. His music has been championed by many leading orchestras and ensembles including the Aurora Orchestra, Deutsches Symphonie-Orchester Berlin, BBC SSO, and Scottish Chamber Orchestra, with whom he was composer-in-association from 2014 to 2018. Among numerous awards for his music, these bones, this flesh, this skin, Suckling’s interactive web-based collaboration with Scottish Ensemble and Scottish Dance Theatre, received both a Scottish Award for New Music and the Classical:NEXT Innovation Award in 2021.
Choice, collaboration and agency in Black Fell, a game-for-music
While music-for-games is a significant area of both academic and industry attention, the use of games-for-music is less widely explored. What happens when music leads, when it is released from the demands and conventions tied to a primarily visual experience? The support for branching structures and spatial audio built into game engines offers new ways of writing and encountering sound, where music can be composed but still malleable, where it can respond, nudge, and mould itself to a player’s actions, and where the listener can be simultaneously protagonist, performer and composer. Black Fell (Suckling et al., 2023, https://black-fell.com), a game-for-music, seeks to explore these possibilities and to capitalise on the affordances of the nascent online ‘venue’ for music by using the Unity WebGL framework as the platform for an ‘interactive digital opera’. This paper will outline some of the compositional challenges encountered in the Black Fell project, propose ‘thickening’ as an additional dimension alongside the familiar ‘branching’ of interactive fiction, and consider the agency of the listener-player as collaborative partner in the production of each unique iteration of the musical output of Black Fell.
Jenn Kirby
Goldsmiths, University of London
Jenn Kirby is a composer, performer, and music technologist. She works with instrumental composition, electroacoustic music, experimental electronic music, and regularly collaborates with musicians and movement artists. Jenn makes hybrid musical instruments, using software, sensors and re-purposed controllers. Her performance work often engages with physicality, texture, and embodiment through postdigital practices. Jenn is a lecturer in electronic music and technology at Goldsmiths, University of London.
Spiral: Two-player performance system
Spiral is a performance system made for a collaboration between a dancer and an electronic music performer. The system builds upon my work with gametrak controllers, modifying it further to add motion sensing to the base of a wireless gametrak. Spiral was iteratively developed through workshops with dancer Isabella Oberländer, for a commission from Light Moves Festival in 2023. The iterative development process allowed the agency of the components and the movement of the performers to prompt means of interaction and expressive functionality. Adding motion sensing to the base of the gametrak made it more mobile and tangible and, due to entanglement, necessitated a change of engagement from and between the performers and within the performance environment. This mobility extends the scale of movement available, allowing for both micro and macro movement. This physical flexibility led to an extended sonic expression by extending the dynamics, intensity and range. Following the completion of the collaboration, I am keen to understand how to iterate on the system while retaining the past presence of the dancer as a component of the system.
Keynote: Dr Luke Nickel
Independent artist-researcher
Luke Nickel (b. 1988) is an award-winning Canadian interdisciplinary artist, composer, and researcher currently living in Berlin, Germany. His works knot together themes of memory, transcription, translation, queer identity, technology, and impossible roller coasters. In addition to orally-transmitted music compositions, he creates traditionally-notated musical works, audiovisual performances, installations, videos, and texts. He has created work with internationally-established soloists and chamber ensembles such as Mira Benjamin, Zubin Kanga, Quatuor Bozzini, and EXAUDI. He has created work with visual and multimedia artists such as Freya Olafson and Beth Frey. About his work, Jennie Gottschalk writes: “...there is an unusual quality of rawness. The players are participating in an oral, folkloric tradition without any sense of irony or flippancy” (Experimental Music Since 1970).
Honours include five SOCAN Awards for Young Composers (including first prize for his work Kyrie in the Godfrey Ridout category), first prize in the Canadian Music Centre Prairie Region’s Emerging Composer Competition, and a shortlist for the Canadian League of Composers’ ISCM Canadian Selection. He was one of Sound and Music’s New Voices in 2015, and subsequently holds a place in the British Music Collection. In addition, he has received support from organizations such as the Social Sciences and Humanities Research Council of Canada, the Canada Council for the Arts, the Manitoba Arts Council and Sound and Music.
Transmission, loss, and coalescence in collaborations with humans, technologies, and intelligences: A discussion of some recent works and ideas in my practice 2014-2024
From the salient features found in musical scores to the data from simulated roller coasters to the latent knowledge spaces of artificial intelligence models, over the past 10 years I have experimented with transforming information by transmitting and receiving it. One theme that has arisen through these explorations is the idea of productive lossiness. When loss inevitably occurs in this transmission-reception chain, complex data can reveal its most salient features and coalesce into a more robust form. I have worked to cultivate lossiness rather than avoid it, seeing it as a productive artistic tool that can be used to create surprising results. In this presentation, I will outline numerous recent projects that feature collaborations with many different ‘receivers’, and discuss the challenges, features, and successes surrounding working with diverse agents across human and technological realms in the pursuit of creative expression and knowledge dissemination.
Questions?
Please direct any questions to jonathan.packham@rhul.ac.uk and caitlin.rowley@rhul.ac.uk