Novel music search project creates method for intrinsic, not semantic, classifications

Written by Dana Gardner, Contributor

Read the full transcript.

Musical tastes are not easy to classify with just a few tags. What gets one person's foot tapping and their heart thumping won't necessarily win over their cousin. Moods also swing or swoon to a fickle beat, and the music you want to match your mood -- or shake you out of it -- can't be easily conjured from a rudimentary playlist or a limited catalog.

It's on these premises that a researcher at Sun Microsystems Laboratories has developed the Search Inside the Music project, which digs deeply into the music itself to classify its actual acoustic characteristics, identifying patterns and traits and then associating each piece with other music that sounds like it.

Instead of using lists of metadata about the music (author, album, composer, etc.), listeners can seek the types of music they like based on the sound alone -- and get it, culled from vast stores of musical possibilities. I enjoyed a recent visit, along with blogger colleague David Berlind, to the Sun labs facility at Sun's campus in Burlington, Mass., to learn more about music search.

Project Principal Investigator Paul Lamere shared some very cool visual tools that help chart and map musical styles, and then extend that into a whole new way of ordering up what you would enjoy listening to -- whether you're aware of the music or artist or not. Such a capability also bodes well for new artists trying to enter the listening mainstream -- or discrete niches. New music could be analyzed and instantly made available to those searching for its unique qualities. No hit parade required.

So I followed up with Paul to record this non-sponsored podcast.

Here are some excerpts:

Shuffle play is great if you have only a few hundred songs that you can pick and put on there, but your iPod is probably a lot like mine. It has 5,000 songs, and it also has my 11-year-old daughter's High School Musical and DisneyMania tracks. I have Christmas music and some tracks I really don't want to listen to.

When I hit shuffle play, sometimes those come up. Also with shuffle play, you end up going from something like Mozart to Rammstein. I call that the iPod whiplash. A system that understands a little bit about the content of the music can certainly help you generate playlists that are easier to listen to and also help you explore your music collection.

We looked at 5,000 users and saw that the 80-20 rule really applies to people’s music collections: 80 percent of their listening time is really concentrated in about 20 percent of their music. In fact, we found that these 5,000 users had about 25 million songs on their iPods and we found that 63 percent of the songs had not been listened to even once. So, you can think of your iPod as the place that your music goes to die, because once it’s there, the chances are you will never listen to it again.

... We're taking a look at some alternative ways to help weed through this huge amount of music. One of the things that we're looking at is the idea of doing content-based recommendation: instead of relying on just the Wisdom of Crowds, we actually rely on the audio content.

We use some techniques very similar to what a speech recognizer does. It will take the audio and will run signal processing algorithms over it and try to extract out some key features that describe the music. We then use some machine-learning techniques basically to teach this system how to recognize music that is both similar and dissimilar. So at the end, we have a music similarity model and this is the neat part. We can then use this music similarity model to recommend music that sounds similar to music that you already like.
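To make that pipeline concrete, here is a minimal sketch of content-based similarity, assuming MFCCs as the extracted features and cosine distance as the similarity measure -- standard stand-ins, not necessarily the features or learned model the Sun team actually uses. The librosa library and the file names are my assumptions for illustration:

```python
# Illustrative sketch only: MFCC timbre features plus cosine similarity,
# standing in for the project's (unspecified) features and learned model.
import librosa
import numpy as np

def song_features(path):
    """Summarize a track's timbre as the mean of its MFCC frames."""
    audio, sr = librosa.load(path, mono=True)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)  # 13 coefficients per frame
    return mfcc.mean(axis=1)  # one 13-dimensional vector per song

def similarity(a, b):
    """Cosine similarity between feature vectors; higher means more alike."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank a collection against a song you already like (placeholder file names).
seed = song_features("song_you_like.mp3")
library = ["track_a.mp3", "track_b.mp3", "track_c.mp3"]
ranked = sorted(library, key=lambda p: similarity(seed, song_features(p)), reverse=True)
print(ranked)  # most similar-sounding tracks first
```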

We can now take our music collection and essentially toss it into a 3D space based on music similarity, and give the listener a visualization of the space and actually let them fly through their collection. The songs are clustered based on what they sound like. So you may see one little cluster of music that’s your punk and at the other end of the space, trying to be as far away from the punk music, might be your Mozart.
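One simple way to get that kind of 3D "music space" from per-song feature vectors is a dimensionality reduction such as PCA -- an assumed stand-in here, since the conversation doesn't say how the project computes its layout. This sketch reuses song_features from the example above:

```python
# Illustrative: project per-song feature vectors into 3D coordinates so that
# similar-sounding songs land near each other. PCA is an assumed stand-in
# for whatever embedding the project actually uses.
import numpy as np
from sklearn.decomposition import PCA

paths = ["track_a.mp3", "track_b.mp3", "track_c.mp3"]  # placeholder files
features = np.array([song_features(p) for p in paths])
coords = PCA(n_components=3).fit_transform(features)   # one (x, y, z) per song
for path, (x, y, z) in zip(paths, coords):
    print(f"{path}: ({x:.2f}, {y:.2f}, {z:.2f})")
```

With a real collection of thousands of songs, these coordinates could feed a fly-through visualization where punk clusters at one end and Mozart at the other.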

Using this kind of visualization gives you a way of doing interesting things like exploring for one thing, or seeing your favorite songs or some songs that you forgot about. You can make regular playlists or you can make playlists that have trajectories. If you want to listen to high-energy music while driving home from work, you can play music in the high-energy, edgy space part of your music space.

If you like to be mellowed out by the time you get home, you have a playlist that takes you gradually from hard-driving music to relaxing and mellow music by the time you pull into the driveway.
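A trajectory playlist like that drive-home example can be approximated by scoring each track on an energy axis and ordering from most to least energetic. In this sketch, RMS loudness via librosa is a crude, assumed proxy for the "energy" dimension Lamere describes:

```python
# Illustrative: order tracks from hard-driving to mellow for the ride home.
# Mean RMS amplitude is a crude stand-in for perceived energy.
import librosa

def energy(path):
    """Average RMS amplitude of the track -- a rough energy proxy."""
    audio, _ = librosa.load(path, mono=True)
    return float(librosa.feature.rms(y=audio).mean())

def drive_home_playlist(paths):
    """Start loud and edgy, end relaxed and mellow."""
    return sorted(paths, key=energy, reverse=True)

print(drive_home_playlist(["track_a.mp3", "track_b.mp3", "track_c.mp3"]))
```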

When you have millions of songs out there that people just haven't listened to, you have no basis for recommending that music. So, you end up with this feedback loop where, because nobody's listening to the music, it's never going to be recommended -- and because it's never recommended, people won't be listening to it.

And so there is no real entry point for these new bands. You end up once again with the short head, where you have U2 and The Beatles, who are very, very popular and are recommended all the time because everyone is listening to them. But there is no entry point for that garage band.

Listen to the podcast, or read the full transcript for more on music search.
