Google isn't new to using AI to create music, having launched MusicLM in January to generate music from text. Now Google has upped the ante and is using AI to read your brain -- and produce sound based on your brain activity.
In a new research paper, Brain2Music, Google uses AI to reconstruct music from brain activity as captured in functional magnetic resonance imaging (fMRI) data.
Researchers studied the fMRI data collected from five test subjects who listened to the same 15-second music clips across different genres, including blues, classical, country, disco, hip-hop, jazz, metal, pop, reggae, and rock.
Then they used that data to train a deep neural network to learn the relationship between brain-activity patterns and elements of music such as rhythm and emotion.
Once trained, the model could reconstruct music from fMRI data using MusicLM. Because MusicLM ordinarily generates music from text, here it was conditioned to create music similar to the original musical stimuli on a semantic level.
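To make the decoding idea concrete, here is a minimal toy sketch, not Google's actual pipeline: it learns a linear (ridge) map from simulated fMRI voxel patterns into a music-embedding space, then identifies a clip by nearest-neighbor search over known clip embeddings. All data, dimensions, and the linear model are illustrative assumptions; in Brain2Music, predicted embeddings are instead used to condition MusicLM's generation.

```python
# Hypothetical illustration only: synthetic data standing in for fMRI scans
# and music embeddings, with a ridge regression as the "decoder".
import numpy as np

rng = np.random.default_rng(0)
n_clips, n_voxels, emb_dim = 40, 200, 16

# Synthetic "semantic" embeddings, one per 15-second music clip (assumption).
clip_embeddings = rng.normal(size=(n_clips, emb_dim))

# Simulate fMRI responses as a noisy linear function of those embeddings.
true_map = rng.normal(size=(emb_dim, n_voxels))
fmri = clip_embeddings @ true_map + 0.1 * rng.normal(size=(n_clips, n_voxels))

# Ridge regression from voxel space back to the embedding space.
lam = 1.0
W = np.linalg.solve(fmri.T @ fmri + lam * np.eye(n_voxels),
                    fmri.T @ clip_embeddings)

def decode(voxel_pattern: np.ndarray) -> int:
    """Predict an embedding from an fMRI pattern; return the nearest clip."""
    pred = voxel_pattern @ W
    dists = np.linalg.norm(clip_embeddings - pred, axis=1)
    return int(np.argmin(dists))

# With low noise, most clips should be recovered from their own response.
accuracy = np.mean([decode(fmri[i]) == i for i in range(n_clips)])
print(f"retrieval accuracy: {accuracy:.2f}")
```

The design point this toy makes is the same one the paper relies on: the decoder never synthesizes audio from voxels directly; it only has to land near the right spot in an embedding space, and a separate generative model (MusicLM in the paper, a lookup table here) turns that embedding into music.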
When put to the test, the generated music resembled the musical stimuli the participants had originally listened to in features such as genre, instrumentation, and mood.
On the research project's site, you can listen to several clips of the original music stimuli and compare them with the reconstructions that MusicLM generated. The results are pretty incredible.