
Google's new AI model generates music from your brain activity. Listen for yourself

The music you listen to produces distinct brain activity patterns, and AI can use those patterns to reconstruct a similar sound. Here's how.
Written by Sabrina Ortiz, Editor

Google isn't new to using AI to create music: it launched MusicLM in January to generate music from text prompts. Now Google has upped the ante and is using AI to read your brain -- and produce sound based on your brain activity. 

In a new research paper, Brain2Music, Google uses AI to reconstruct music from brain activity, as captured in functional magnetic resonance imaging (fMRI) data. 


Researchers studied the fMRI data collected from five test subjects who listened to the same 15-second music clips across different genres, including blues, classical, country, disco, hip-hop, jazz, metal, pop, reggae, and rock. 

Then they used that data to train a deep neural network to learn the relationship between brain activity patterns and elements of music, such as rhythm and emotion. 
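
The paper itself is the authoritative source for how that mapping is built; as a rough illustration of the idea, predicting a music-feature embedding from fMRI voxel responses can be framed as a simple regression problem. The sketch below uses placeholder data and hypothetical shapes and names, not details from Google's study:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Hypothetical data: one row per 15-second clip a subject listened to.
# X: fMRI voxel responses recorded during listening (n_clips x n_voxels)
# Y: music-feature embeddings of those same clips (n_clips x embedding_dim),
#    e.g. vectors capturing genre, instrumentation, and mood
rng = np.random.default_rng(0)
X = rng.standard_normal((540, 2000))   # placeholder for real fMRI data
Y = rng.standard_normal((540, 128))    # placeholder for real music embeddings

X_train, X_test, Y_train, Y_test = train_test_split(
    X, Y, test_size=0.2, random_state=0
)

# Learn a mapping from brain activity to the music-embedding space.
decoder = Ridge(alpha=1.0)
decoder.fit(X_train, Y_train)

# Embeddings predicted for held-out clips; these would then condition a
# music generator to produce audio resembling what the subject heard.
predicted_embeddings = decoder.predict(X_test)
print(predicted_embeddings.shape)  # (108, 128)
```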

Once trained, the model could reconstruct music from fMRI data with the help of MusicLM. Since MusicLM normally generates music from text, here it was conditioned to create music that was semantically similar to the original musical stimuli. 
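
MusicLM itself is not publicly available, so the following is only a minimal sketch of what that conditioning step could look like, with a hypothetical generate_music_from_embedding function standing in for the real generator:

```python
import numpy as np

SAMPLE_RATE = 16_000  # assumed sample rate for the generated audio


def generate_music_from_embedding(embedding: np.ndarray,
                                  duration_seconds: int = 15) -> np.ndarray:
    """Hypothetical stand-in for a MusicLM-style generator conditioned on a
    music-feature embedding instead of a text prompt."""
    # A real model would decode the embedding into a waveform; this
    # placeholder just returns silence of the requested length.
    return np.zeros(SAMPLE_RATE * duration_seconds)


# A brain-derived embedding (e.g. one predicted by the regression sketched above).
predicted_embedding = np.zeros(128)

# Generate a 15-second reconstruction intended to match the clip's genre,
# instrumentation, and mood rather than its exact notes or lyrics.
reconstruction = generate_music_from_embedding(predicted_embedding)
print(reconstruction.shape)  # (240000,)
```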

When put to the test, the generated music resembled the musical stimuli that the participant initially listened to in features such as genre, instrumentation, and mood. 

On the research project's site, you can listen to several clips of the original music stimuli and compare them to the reconstructions MusicLM generated. The results are pretty incredible.


In one example, the stimulus was a 15-second clip of Britney Spears' iconic "Oops!...I Did It Again." The three reconstructions were poppy and upbeat, like the original. 

The audio, of course, did not replicate the original, since the study focuses on the musical elements of a clip, not its lyrical content. 

Essentially, the model can read your mind (technically your brain patterns) to produce music similar to what you were listening to.
