Results of Artistic Mini-Projects

We are pleased to announce the selected recipients of our open call commissioning three innovative artistic mini-projects! These projects aim to explore and address the bias inherent in mainstream AI models by creating music in genres often overlooked or marginalised by such systems. Congratulations to our winners, and thanks to all who applied; we received an overwhelming number of high-quality responses. We look forward to seeing how the awardees' work addresses the themes of Responsible AI (RAI) using low-resource models and small datasets.

Fá Maria

Fá Maria, aka HAUT, is an artist and composer based between Berlin and London. Drawing on their experience as a former psychiatrist, Fá Maria works at the intersection of sound, the human body, and technology. Their work spans live performances, immersive installations, and music scores for dance pieces and films. Recent exhibitions and live performances include KCUA Gallery, Kyoto (2024); Batalha Cinema, Porto (2024); TEA Tenerife (2023); Calouste Gulbenkian Foundation (2022); music for Vampires in Space, Venice Biennale (2022); Liverpool Biennial (2021); and HAU - Hebbel am Ufer, Berlin (2020). They are also a PhD candidate in Computational Arts at Goldsmiths, University of London, where they research the relationship between humans and AI technologies and its impact on creativity.

Project Description

“Erasure” is a sound installation that explores the complex relationship between voice, gender identity, and AI technology. It centres on a speculative musical composition built from vocal datasets of underrepresented queer and trans voices, addressing the bias embedded in mainstream voice-generation AI models. The idea behind the project is to create work that amplifies silenced voices and genres while exploring the limitations and biases that currently restrict diversity in AI-generated music and voices. The voice plays a crucial role in both the physical and symbolic aspects of identity and political representation. For the queer and trans community, the voice is an important vehicle of self-expression but also a source of marginalisation. AI technologies can particularly amplify gender-specific, racist, classist, and other biases. Unique vocal expressions are generally overlooked or misinterpreted by AI models, which are often trained on voices shaped by cultural assumptions about "masculinity" and "femininity", leaving aside the complexity and richness of queer and trans voices. Without the contributions of queer, non-binary, and trans people, AI systems will not do justice to the full spectrum of human experiences and will instead perpetuate mechanisms of social exclusion in the long term.


Junson Park

Junson Park is a Korean artist who explores the boundaries between experimental electronic music and digital media art, focusing on the organic connections between different forms of expression. He majored in Electronic Production and Design at Berklee College of Music and is now studying for an MFA in Music Technology at CalArts. Building on his background in experimental music production, he creates audiovisual work characterised by avant-garde soundscapes and cutting-edge technology, incorporating dynamic and interactive digital art elements into his performances to create fully immersive experiences for his audience.

Project Description

This project seeks to overcome the limitations often seen in mainstream AI audio models by taking a custom approach to representing cultural nuances those models misrepresent, specifically within traditional Korean music. By developing an impression-based sound arrangement that combines traditional Korean timbres with experimental, modern electronic influences, the project aims to create an auditory experience that resonates with both authenticity and innovation. It is not merely focused on rhythm but explores Korean music's unique textures and timbres, including its distinctive odd-numbered meters.


Shreya Gupta

Shreya Gupta is an interdisciplinary music artist, blending Indian classical and electronic music to craft immersive, rhythmically complex soundscapes. Her work explores intricate polyrhythms and harmonic shifts, aiming to push the boundaries of traditional and contemporary genres.

Project Description

Jugalbandi: Call and Response Between Tabla Player and Drum Player

My approach involves developing a rule-based method to deconstruct complex rhythmic structures into components that a computer can understand. Once the rhythmic structure and feel are analyzed, the groove can ideally be played by any percussive instrument. The intent is to make AI models more inclusive and culturally aware. The project could also influence future developments in AI-driven composition tools, offering greater flexibility for artists working outside the Western musical realm.
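A rule-based deconstruction of this kind could be sketched roughly as follows. This is a minimal illustrative assumption, not the artist's actual method: the bol names, role mappings, and function names are all hypothetical, showing only the general idea of breaking a tabla groove into abstract components and re-voicing them on a generic drum kit.

```python
# Hypothetical mapping from tabla bols (stroke names) to abstract
# percussive roles: a register ("low"/"high") and an accent weight.
BOL_ROLES = {
    "dha": ("low", "accent"),   # combined bass + treble stroke
    "dhin": ("low", "accent"),
    "na": ("high", "plain"),
    "tin": ("high", "plain"),
    "ta": ("high", "plain"),
    "ke": ("high", "ghost"),
}

# Hypothetical mapping from abstract registers to a generic drum kit.
KIT_VOICES = {"low": "kick", "high": "snare"}

def deconstruct(bols):
    """Break a bol sequence into (position, register, weight) components."""
    return [(i, *BOL_ROLES[b]) for i, b in enumerate(bols)]

def render_on_kit(components):
    """Re-voice the abstract components on a generic drum kit."""
    return [(pos, KIT_VOICES[register], weight)
            for pos, register, weight in components]

# First half of a teentaal-like phrase, as an example input.
groove = ["dha", "dhin", "dhin", "dha", "dha", "dhin", "dhin", "dha"]
kit_groove = render_on_kit(deconstruct(groove))
```

The point of the intermediate (position, register, weight) representation is that, once a groove is analysed into instrument-agnostic components, the same structure can in principle be rendered on any percussive instrument.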


Call for Artistic Mini-Projects

We are commissioning three speculative artistic mini-projects that use AI to create music in genres currently marginalised by mainstream AI models. The aim of these mini-projects is to create impact and interest in the Responsible AI (RAI) concerns of bias in AI models. The mini-projects will use AI tools such as low-resource AI models with small datasets and will be supported by the project team or industry partners where needed. They will showcase the challenges of bias in AI and how RAI techniques can be used to address them.

More information including a detailed list of requirements and a link to the application form can be found on this page.

About the MusicRAI Research Project

This 12-month project, "Responsible AI international community to reduce bias in AI music generation and analysis", will build an international community to address the Responsible AI (RAI) challenges of bias in AI music generation and analysis.

The aim of the project is to explore ways to tackle the current over-reliance on huge training datasets for deep learning, which leads to AI models that are biased towards Western classical and pop music and marginalise other music genres. We will bring together an international and interdisciplinary team of researchers, musicians, and industry experts to make available AI tools, expertise, and datasets that improve access to marginalised music genres. This will directly benefit musicians and audiences by engaging them with a wider range of musical genres, and benefit the creative industries by offering new forms of music consumption.

Ethical and Responsible AI Music Making Workshop 2024

We held a one-day workshop on Responsible Music AI with a focus on bias in AI music generation systems on 17th July 2024 at the Creative Computing Institute, University of the Arts London, Holborn, London.

We brought together over 100 people to form an interdisciplinary community of musicians, academics, and stakeholders to collaboratively identify the potential and challenges for using low-resource models and small datasets in musical practice. The workshop consisted of publicly streamed discussion panels, presentations of participants’ work, and brainstorming sessions on the future of AI and marginalised music. The event was followed by an evening reception featuring live performances using AI and small datasets of music.

In the morning sessions we focussed on sharing and identifying current practices and challenges for AI music making with small datasets. The afternoon was dedicated to exploring opportunities and practical solutions for using small and marginalised datasets of music and other audio with AI. This forms the start of an international network and roadmap for a new ecosystem that we will build to rapidly open up small music datasets and low-resource AI approaches to wider use in music making and analysis.

Panel Discussion [YouTube recording]
Challenges and Opportunities for Music Creation
Panelists:
  • François Pachet (Founder)
  • Rebecca Fiebrink (University of the Arts London)
  • Nuno Correia (Tallinn University)
  • Phoenix Perry (University of the Arts London)
Moderator:
  • Nick Bryan-Kinns (University of the Arts London)
Panel Discussion (hybrid) [YouTube recording]
The Future of Music Creation
Panelists:
  • Paul McCabe (Roland and AI For Music)
  • Hazel Savage (SoundCloud and Musiio)
  • Daisy Rosenblum (University of British Columbia)
  • CJ Carr (Dadabots)
Moderator:
  • Nick Bryan-Kinns (University of the Arts London)
Live Performances
  • digital selves
  • Portrait XO
  • Dadabots
  • Gabriel Vigliensoni
Case Study Presentations
  • Gabriel Vigliensoni [PDF]
  • Daisy Rosenblum
  • Benjamin Timms [PDF]
  • Soumya Sai Vanka [PDF]
  • Rikard Lindell [PDF]
  • Andrea Martelloni [PDF]
  • Julian Parker [PDF]
  • Lizzie Wilson [PDF]
  • Andrei Coronel [PDF]
  • Mark Gotham [LINK]
  • Yiwei Ding [VIDEO]
  • Moisés Horta Valenzuela [VIDEO]
  • Louis McCallum [PDF]


Project Team

Lead: Prof. Nick Bryan-Kinns (University of the Arts London, UK; UAL)
Prof. Rebecca Fiebrink (UAL)
Dr. Phoenix Perry (UAL)
Anna Wszeborowska (UAL)
Prof. Zijin Li (Central Conservatory of Music, China; CCoM)
Dr. Nuno Correia (Tallinn University, Estonia; TU)
Dr. Alex Lerch (Georgia Tech, USA; GT)
Prof. Sid Fels (University of British Columbia, Canada; UBC)
Dr. Gabriel Vigliensoni (Concordia University, Canada; CU)
Dr. Andrei Coronel and Dr. Raphael Alampay (Ateneo de Manila University, Philippines; AdMU)
Prof. Rikard Lindell (Dalarna University, Sweden; DU)

Partners

Music Hackspace (UK)
DAACI (UK)
Steinberg (Germany)
Bela (UK)

Objectives

  • To bring together and grow the international community of researchers, creative practitioners, and AI experts interested in using musical genres marginalised by current AI systems (AI marginalisation) as datasets for AI music making practice and research.
  • To establish an open repository of marginalised musical genre datasets for use in AI.
  • To bring together and make available methods and tools that artists can use, such as low-resource deep learning models, to generate music in marginalised music genres.
  • To commission a small number of speculative artistic projects, resulting in an international showcase of generative AI music using marginalised musical genres.
  • To explore the translational potential of the AI techniques identified in this project to other creative practices.

Contact

To get involved please contact Prof. Nick Bryan-Kinns n.bryankinns@arts.ac.uk

Funding

Funded by Responsible Artificial Intelligence (RAI) UK International Partnerships (UKRI EPSRC grant reference EP/Y009800/1)
