COMPUTING AND AI: HUMANISTIC PERSPECTIVES FROM MIT

Music | Eran Egozy
   


Eran Egozy '95, MNG '95; MIT Professor of the Practice in Music Technology

"Through the MIT Schwarzman College of Computing, our responsibility will be not only to develop the new technologies of music creation, distribution, and interaction, but also to study their cultural implications and define the parameters of a harmonious outcome for all."

— Eran Egozy, MIT Professor of the Practice in Music Technology




 

Eran Egozy '95, MNG '95, is a professor of the practice in music technology at MIT as well as an entrepreneur, musician, and technologist. He is the co-founder and chief scientist of Harmonix Music Systems, which developed the video game franchises Guitar Hero and Rock Band, games that have generated over $1 billion in annual sales. His research and teaching center on interactive music systems, music information retrieval, and multimodal musical expression and engagement.

. . .

 

Q: What are the challenges and opportunities of advanced computing for the field of music, and how might the Schwarzman College of Computing best address them?

Music is woven into the fabric of every culture worldwide. Though abstract as an art form, it elicits powerful emotions and is deeply personal, serving as a system of communication and self-expression. While the casual listener may not think of music in technical terms, its creation, performance, and evolution have always been intimately tied to technology and technological advances.

All musical instruments are technical accomplishments: fabricated from particular materials with specialized methods and skills, and designed to produce certain sounds. But music also connects deeply to math, physics, and other fields. Consider that Pythagoras linked the mathematics of string vibrations to the notes of the musical scale. Cristofori's invention of the piano's hammer-action mechanism forever changed Western music.

Even Mozart toyed with computation in his "Musikalisches Würfelspiel" ("Musical Dice Game"), an algorithmic music generator. More recently, the development of recorded music has led to seismic shifts in music distribution, from vinyl records to CDs to MP3s, and now to streaming services that allow anyone with an internet connection to access a large portion of the world's recorded music.
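Mozart's dice game is a reminder that simple mechanical rules can generate music. Here is a minimal Python sketch of the idea (the measure bank below is a placeholder, not Mozart's actual chart): two dice are rolled for each slot of a 16-measure minuet, and each roll selects one of several pre-composed measures.

```python
import random

# Toy illustration of the Musikalisches Würfelspiel idea: a piece is
# assembled by rolling dice to pick one pre-composed measure per slot.
# The "measure bank" below is a placeholder, not Mozart's actual table.
MEASURE_BANK = {
    slot: [f"measure_{slot}_{option}" for option in range(1, 12)]  # 11 options per slot
    for slot in range(1, 17)                                       # a 16-measure minuet
}

def roll_minuet(seed=None):
    """Assemble a 16-measure minuet by rolling two dice for each slot."""
    rng = random.Random(seed)
    piece = []
    for slot in range(1, 17):
        roll = rng.randint(1, 6) + rng.randint(1, 6)  # 2..12
        piece.append(MEASURE_BANK[slot][roll - 2])    # map the roll to one of 11 choices
    return piece

print(roll_minuet(seed=42))
```

Every playthrough of the "game" yields a different but always well-formed piece, because the creative work is baked into the pre-composed measures and the rules for combining them.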

Enter MIT’s Schwarzman College of Computing (SCC) — dedicated not only to research and education in computer science, but also to advancing the relationship between computation and everything else. MIT now has the opportunity and the responsibility to explore and define how music can evolve in our new world of ubiquitous computation, pervasive connectivity, and artificial intelligence (AI).
 



Video by Melanie Gonick, MIT News

"Creating tomorrow’s music systems responsibly will require a truly multidisciplinary education, one that covers everything from scientific models and engineering challenges to artistic practice and societal implications."



In the field of digital music signal processing, computational approaches to both the analysis and synthesis of music have flourished over the past 20 years. The field of music information retrieval focuses on music models and algorithms that enable machines to derive higher levels of meaning from music. As in other areas of artificial intelligence, some programs are designed to mimic what humans easily do, like identify a song's downbeats, locate its section changes and key changes, or classify its mood and genre.

But other programs can accomplish what humans cannot, such as quickly and correctly identifying any one of millions of songs playing on the radio (Shazam), or analyzing every note Beethoven ever wrote to reveal how his compositions changed as he became deaf (an analysis performed with the MIT-created music21 toolkit, which shows that the average pitch of his music rises over his lifetime!).
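As a rough illustration of this kind of analysis, the following sketch uses the music21 toolkit (assuming a recent version of the library). The handful of Beethoven works bundled in music21's corpus stands in for the much larger body of his music a real study would use; the sketch computes each work's average pitch in pitch-space numbers, where middle C is 60.

```python
from statistics import mean
from music21 import corpus

# Sketch of a music-information-retrieval style analysis with music21:
# for each Beethoven work bundled in the music21 corpus, compute the
# average pitch in pitch-space numbers (middle C = 60). The corpus holds
# only a few Beethoven works, so this is illustrative, not the full study.
for entry in corpus.search('beethoven'):
    score = entry.parse()
    title = score.metadata.title if score.metadata else 'untitled'
    pitch_values = [p.ps for p in score.flatten().pitches]
    if pitch_values:
        print(f"{title}: average pitch = {mean(pitch_values):.2f}")
```

Running the same computation over works ordered by date of composition is what lets a researcher plot a trend line across a composer's lifetime.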

Music streaming services (Pandora, Spotify, Apple Music) have moved beyond the technical challenge of seamlessly streaming millions of songs to millions of devices. Now they focus on music discovery, playlist creation, activity-based listening (selecting the right music for a particular activity, such as running or meditating), and the social networks of music consumption. With troves of data (both audio signals and the associated consumer listening patterns), new AI algorithms will increasingly make it possible to custom-tailor music to each individual.
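As a toy illustration of how listening patterns alone can drive personalization, the sketch below (hypothetical data, NumPy only) scores unheard songs for one listener by their similarity to songs that listener has already played. Real streaming recommenders combine this kind of signal with audio features, context, and far larger models.

```python
import numpy as np

# Toy collaborative filtering from listening patterns:
# rows are listeners, columns are songs, values are play counts.
plays = np.array([
    [12,  0,  3,  5],   # listener A
    [ 0,  8,  1,  0],   # listener B
    [10,  1,  4,  6],   # listener C
], dtype=float)

# Cosine similarity between songs, based on who listens to them.
norms = np.linalg.norm(plays, axis=0, keepdims=True)
similarity = (plays.T @ plays) / (norms.T @ norms + 1e-9)

# Recommend for listener B: score unheard songs by similarity to heard ones.
listener = plays[1]
scores = similarity @ listener
scores[listener > 0] = -np.inf          # drop songs already played
print("recommend song index:", int(np.argmax(scores)))
```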

Music recordings have moved beyond fixed media. Fifteen years ago, songs became interactive through gaming platforms such as Guitar Hero and Rock Band, allowing users to experience a song's individual parts and understand how they fit together. Collectively, we are now creating music in new ways, as technology enables non-musicians to contribute to larger works.

In Eric Whitacre's “Virtual Choirs,” the voices of thousands of participants across the internet are mixed into an ethereal whole. In Tod Machover’s “City Symphonies,” entire communities become composers as they contribute sounds and ideas through their cell phones. In "Tutti," a work I created with Evan Ziporyn, the audience becomes the orchestra as each audience member uses his or her phone as a musical instrument to experience playing music together.
 



Eran Egozy and student, Interactive Music Systems class; photo by Jon Sachs

"The new music technology will be accompanied by difficult questions. Who owns the output of generative music algorithms that are trained on human compositions? How do we ensure that music, an art form intrinsic to all humans, does not become controlled by only a few?"



Computational music research communities are developing deep learning systems that enable machines to compose music automatically. Recently, Google released a Doodle that generates four-part chorales in the style of Bach, using a model trained on his compositions. The company Melodrive is creating an adaptive music-generation engine for video games in which high-level parameters like "happiness," "scariness," or "intensity" alter the musical output in real time. And while today these machine composers still sound like machines, their sophistication and quality will only grow.
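To make the idea of an adaptive engine concrete, here is a toy sketch in which a single high-level "intensity" parameter steers tempo, note density, and register. It illustrates the general concept only; it is not how Melodrive or any other commercial system actually works.

```python
import random

# Toy sketch of adaptive music generation: one high-level parameter
# ("intensity", 0.0-1.0) steers several low-level musical choices.
def generate_bar(intensity, rng=random):
    tempo = 60 + int(intensity * 120)            # 60-180 BPM
    notes_per_bar = 2 + int(intensity * 14)      # sparse -> dense
    low, high = 48, 60 + int(intensity * 24)     # widen the register upward
    pitches = [rng.randint(low, high) for _ in range(notes_per_bar)]
    return {"tempo": tempo, "pitches": pitches}

print(generate_bar(intensity=0.2))   # calm: slow, sparse, low
print(generate_bar(intensity=0.9))   # intense: fast, dense, wide register
```

A game engine would call something like this every bar, feeding in the current game state, so the score tracks the action moment to moment.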

Creating tomorrow’s music systems responsibly will require a truly multidisciplinary education, one that covers everything from scientific models and engineering challenges to artistic practice and societal implications. Our connected computing ecosystem will blur the boundaries between composer and listener, performer and audience. Vast computational resources and new computational methods will be used to process and learn from recorded music, symbolic music (e.g., score notation), and multimedia streams (e.g., music videos, live concerts).

This new technology will be accompanied by difficult questions. How do we overcome biases inherent in our current music data sets? Who owns the output of generative music algorithms that are trained on human compositions? How do we ensure that music, an art form intrinsic to all humans, does not, through technology, become controlled by only a few?

The future is both daunting and exciting. Through the SCC, our responsibility will be not only to develop the new technologies of music creation, distribution, and interaction, but also to study their cultural implications and define the parameters of a harmonious outcome for all.

 

Suggested links

Series | Computing and AI: Humanistic Perspectives from MIT

Eran Egozy website

MIT Music and Theatre Arts

MIT Music | MIT Music subjects

Class: 21M.385/6.809 - Interactive Music Systems

The Listening Room
The finest works from MIT music online

Musical Institute of Technology
A book about music and the MIT mission

MIT Schwarzman College of Computing
 

Related Stories

Story: Music technology accelerates at MIT
An increasingly popular program is drawing students eager to build, and use, the next generation of tools for making music.

Story: Harmonix co-founder returns to MIT as Professor of the Practice in Music Technology

Story: Gamma sonification
MIT students make music from particle energy (essentially, music from the energy of the big bang).
 

Interview: A world-premiere concert where the audience helps play
MIT’s Eran Egozy on “12,” a chamber music debut with smartphone-driven percussion

Video: Prof. Egozy on PBS NOVA

Video: The science of how playing an instrument benefits the brain
"When you listen to music, multiple areas of your brain become engaged and active. But when you actually play an instrument, that activity becomes more like a full-body brain workout. What's going on?"

MIT Arts | Sound and Technology
 

 


Series prepared by MIT SHASS Communications
Office of Dean Melissa Nobles
MIT School of Humanities, Arts, and Social Sciences
Series Editor and Designer: Emily Hiestand, Director of Communications
Published 22 September 2019