
On Creativity, Music and Artificial Intelligence: Meet Eduardo R. Miranda

  • marshallsoundz
  • Aug 18, 2016
  • 6 min read

Lots of people get on their high horse when talking about artificial intelligence in the arts… 

It’s the end of creativity! It’s killing music! It’s going to take us over!

But in fact the opposite is true. Technologies that use automation or algorithms have helped us become more creative. From DAWs to MIDI controllers, many music technologies we use on a daily basis contain some form of ‘intelligence.’

The difficult (and fascinating) questions then become: what is creativity? What is intelligence? How can we foster them to make music?

To help me answer that, I talked to one of the main figures in the field of AI and music: meet professor and composer Eduardo R. Miranda.

LANDR: Tell us about your background

ERM: I studied computing, then music composition in Brazil. I subsequently moved to the UK for my postgraduate degrees. I got my PhD from the University of Edinburgh with a thesis on Artificial Intelligence-aided sound design.


Prof. Miranda at the piano

After spending a few years teaching computer music at the University of Glasgow, I worked as a researcher for 5 years at the SONY Computer Science Laboratory in Paris.

While in France I also taught computer science at the American University of Paris and composition at CCMIX – the Iannis Xenakis Centre for Music Composition.

In 2003, I settled in the UK to create the Interdisciplinary Centre for Computer Music Research at Plymouth University, where I am currently a research professor.

LANDR: How did you become interested in AI and music?

ERM: After graduating in computing, I went back to college to study music. On one of my regular visits to the library, I came across a double issue of a French periodical called La Revue Musicale, entitled Iannis Xenakis et Musique Stochastique. It contained articles about the work of a Romanian-born composer based in Paris, whom I had scarcely heard of before: Iannis Xenakis.

It turned out that he’s one of the most important composers in modern classical music.



Iannis Xenakis


I could barely read French at the time, but I immediately spotted Venn diagrams, set theory, logic formalisms and probability formulas – things that looked rather familiar from my previous degree.

It was a revelatory moment: I realized that I could combine my knowledge of computing with music. I became fascinated by the notion that computers could be programmed to generate music and found myself immersed in the then emerging field of AI. The rest is history.
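
For a flavour of that idea, here is a toy Python sketch – not Xenakis’s actual method, and the interval weights are purely illustrative assumptions – in which each note of a melody is drawn from a probability distribution over intervals:

    import random

    # Toy stochastic melody generator: each next pitch is drawn from a
    # weighted distribution over intervals, loosely in the spirit of
    # probability-driven composition. The weights are arbitrary choices.
    INTERVALS = [-7, -5, -2, -1, 0, 1, 2, 5, 7]   # steps in semitones
    WEIGHTS = [1, 2, 4, 6, 3, 6, 4, 2, 1]         # favour small steps

    def stochastic_melody(start=60, length=16, low=48, high=84):
        """Generate MIDI note numbers by a weighted random walk."""
        pitch, melody = start, [start]
        for _ in range(length - 1):
            step = random.choices(INTERVALS, weights=WEIGHTS)[0]
            pitch = min(max(pitch + step, low), high)  # clamp to range
            melody.append(pitch)
        return melody

    print(stochastic_melody())  # e.g. [60, 62, 60, 55, ...]

Run repeatedly, the same few lines produce a different melody each time – the distribution, not the output, is what the composer designs.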

LANDR: How would you explain ‘Artificial Intelligence’ to someone who knows nothing about it?

ERM: Artificial Intelligence is generally explained as the art of programming computers to execute tasks that are deemed intelligent. The problem here is that intelligence is a difficult concept to define.

Until very recently, intelligence was associated with rational thinking, logic, mathematical reasoning and so on. With this definition in mind, the field of AI flourished tremendously within the last 50 years or so, with a plethora of methods to program computers to emulate intelligence.

Nowadays, however, intelligent behavior is often more strongly associated with creativity, emotions and gut feelings than with mathematics or logic. And it’s more widely accepted today that animals other than humans also exhibit some form of intelligence – which does not necessarily require logic or mathematical reasoning.

Not surprisingly, the AI research community is struggling to make further progress with these broader notions of intelligence.

LANDR: What do you make of people’s fears around AI ‘replacing’ human creativity and labour? How do you respond to people who say that?

ERM: Humans have been developing technology to make labour easier, or even replace it, since prehistoric times. There is nothing new here. It is inevitable that replacements of this and that, here and there, will continue to take place.

“I prefer to think of AI as a means to harness humanity rather than annihilate it”

However, I prefer to think of AI as a means to harness humanity rather than annihilate it. For instance, I am interested in developing AI systems that help me to be creative. I am not interested in AI systems that compose entire pieces of music automatically.


Miranda_with-orchestra_900x446

Photo: courtesy of Plymouth University


I find pieces of music that are entirely generated by a computer rather unappealing. Instead I am interested in AI systems that will help me to create music that I would not have created otherwise. I often consider computer-generated music as seeds, or raw materials, for my compositions.

LANDR: How do you think artificial intelligence can help us understand human creativity?

ERM: AI scientists approach their research by isolating specific aspects of intelligent behavior and building models to emulate them. With such models at hand we can run experiments aimed at understanding those aspects in great detail.

“Personally, I am not so interested in understanding creativity with AI. Rather, I am interested in using AI to harness my creativity.”

The challenge here is that creativity is a difficult phenomenon to break down. Personally, I am not so interested in understanding creativity with AI. Rather, I am interested in using AI to harness my creativity.

LANDR: In an interview, you’ve expressed frustration about how little inventiveness there is in synthesizer interface design. Over and over, we’ve kept the keyboard as our main way of interacting with complex machines like synthesizers. How do you see AI potentially opening up the world of music interfaces?

ERM: I discussed this problem in depth in the 2006 book ‘New Digital Musical Instruments: Control and Interaction Beyond the Keyboard’.

A growing community of practitioners has been developing new musical interfaces since then. A number of musician-friendly programming tools (Max, Pure Data, Python, etc.) and do-it-yourself electronics kits (Arduino, Raspberry Pi, etc.) have emerged within the past few years, widening musicians’ access to the technology needed to build their own bespoke controllers. This is really exciting.
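
For a sense of the core pattern behind many such bespoke controllers – read a sensor, map its value to MIDI – here is a minimal Python sketch. It uses the mido MIDI library; read_sensor is a hypothetical stand-in for real hardware such as an Arduino:

    import time
    import random
    import mido  # pip install mido

    def read_sensor():
        """Hypothetical stand-in for a real sensor read (e.g. from an
        Arduino over serial); here we simply simulate a 0.0-1.0 value."""
        return random.random()

    def sensor_to_cc(value, controller=1, channel=0):
        """Map a 0.0-1.0 sensor reading to a MIDI control-change message."""
        return mido.Message('control_change', channel=channel,
                            control=controller, value=int(value * 127))

    for _ in range(5):
        msg = sensor_to_cc(read_sensor())
        print(msg)            # with real hardware: port.send(msg)
        time.sleep(0.1)

Everything interesting about a controller lives in that mapping step: what the sensor measures, and how its range is shaped into musical control.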


MiMu gesture-control gloves developed by musician Imogen Heap


However, the great majority of these controllers lack the feeling of playing a real acoustic instrument. For instance, when you play a violin you feel the vibrations of the strings and instrument on your fingers and body.

It will not be long before these practitioners get to grips with AI. Then, they will be able to develop active musical controllers.

And I am not only thinking here of musical controllers with enhanced touch-and-feedback feeling, but also of intelligent musical instruments with the ability to listen, produce responses, improvise, and so on. Something, for example, like George E. Lewis’s Voyager.
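
Voyager is far more sophisticated than anything that fits in a few lines, but the basic listen-transform-respond loop behind such instruments can be sketched like this (a toy example, assuming the ‘heard’ phrase arrives as MIDI note numbers):

    import random

    def respond(heard, transpositions=(-12, -7, 0, 7, 12)):
        """Toy 'improviser': answer a heard phrase with a transposed,
        partially reordered and shortened variation of it."""
        shift = random.choice(transpositions)
        answer = [pitch + shift for pitch in heard]
        random.shuffle(answer)                        # vary the order
        return answer[:max(3, len(answer) // 2 + 1)]  # answer tersely

    phrase = [60, 64, 67, 72]   # a C major arpeggio as 'input'
    print(respond(phrase))      # e.g. [79, 67, 74]

A real system would of course listen continuously, build a model of the human player, and decide when and how to answer – the loop above is only the skeleton.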

LANDR: You’ve been working on a brain-computer interface that allows a group of people with neuro-disabilities to make music. How do you think AI can expand the ways people – especially those living with a disability – express their musical creativity?

ERM: At present we have relatively robust technology to scan the electrical activity of the brain. This activity is referred to as the electroencephalogram, or EEG. The holy grail of brain-computer interfacing technology is to develop effective methods to train people to voluntarily produce specific patterns of electrical activity, and to detect those patterns reliably in the EEG.

The technology we have developed in my laboratory allows people to control musical algorithms with their EEG, but this control is still limited and only works for simple things, like toggling a few switches to play musical melodies and moving faders to increase their speed or loudness.

Such control by no means involves any musical thinking, only the intention to toggle switches and move faders – which, in effect, could be controlling anything, not just music.
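
To make the switch-and-fader idea concrete, here is a minimal sketch of the kind of mapping involved – with a synthetic signal standing in for real EEG hardware, and band choices that are illustrative assumptions rather than the laboratory’s actual pipeline:

    import numpy as np

    FS = 256  # sampling rate in Hz (typical of consumer EEG devices)

    def band_power(signal, low, high, fs=FS):
        """Power of `signal` in a frequency band, via a simple FFT."""
        spectrum = np.abs(np.fft.rfft(signal)) ** 2
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        mask = (freqs >= low) & (freqs < high)
        return spectrum[mask].mean()

    # Synthetic one-second 'EEG': a 10 Hz (alpha) oscillation plus noise.
    t = np.arange(FS) / FS
    eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(FS)

    alpha = band_power(eeg, 8, 12)    # alpha band
    beta = band_power(eeg, 13, 30)    # beta band

    # The crude kind of mapping described above: a switch toggled by
    # whichever band dominates, and a fader driven by relative alpha power.
    switch_on = alpha > beta
    fader = alpha / (alpha + beta)    # normalised to 0..1
    print(f"switch={'on' if switch_on else 'off'}, fader={fader:.2f}")

The point of the sketch is also its limitation: nothing in it knows anything about music – the same signal could just as well steer a wheelchair or a cursor.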


My current research is looking into ways to improve this scenario. My aim is to be able to detect musical thinking in the EEG and use this to generate music. This is still a shot in the dark, but I think it will be possible in the future.

LANDR: What is your favourite project that you are working on at the moment?

ERM: In addition to the one mentioned above, I am looking into developing new kinds of computers for music. Computing technology has played a pivotal part in the development of the music industry over the last 80 years. It is very likely that future technological developments will continue to influence music.


I am championing research into biocomputing for music. My team and I have already developed a proof-of-concept interactive biocomputer, which uses living organic components cultured on a circuit board.

I have already composed two pieces using this system, Biocomputer Music and Biocomputer Rhythms, both of which have been performed a number of times.

***

Eduardo R. Miranda shows no signs of stopping. He’s recently released an ebook entitled Mind Pieces: The Inside Story of a Computer-Aided Symphony. In it, he shares his thoughts about bringing technology and intuitive musical creativity together. More information about his work is available online.
