
The sound of silence

Written by Roland Piquepaille

Researchers at Rockefeller University have built a mathematical method, and written an algorithm based on the way our ears process sound, that provides a better way to analyze noise than current methods. Not only is their algorithm faster and more accurate than previous ones used in speech recognition or in seismic analysis, it's also based on a very non-intuitive fact: they can tell what a sound was by knowing when there was no sound. "In other words, their pictures were being determined not by where there was volume, but where there was silence." The researchers think that their algorithm can be used in many applications and that it will soon give computers the same acuity as human ears.

Let's start with some facts about our sensory system.

Humans have 200 million light receptors in their eyes, 10 to 20 million receptors devoted to smell, but only 8,000 dedicated to sound. Yet despite this minuscule number, the auditory system is the fastest of the five senses. Researchers credit this discrepancy to a series of lightning-fast calculations in the brain that translate minimal input into maximal understanding. And whatever those calculations are, they're far more precise than any sound-analysis program that exists today.

This is the problem that Marcelo Magnasco, professor and head of the Mathematical Physics Laboratory at Rockefeller University, decided to tackle with the help of one of his former students, Tim Gardner. Together they developed a new algorithm that transforms sound into a visual representation far more accurately than current methods do.

"This outperforms everything in the market as a general method of sound analysis," Magnasco says. In fact, he notes, it may be the same type of method the brain actually uses.

Here is why their method differs from current ones: it can visualize the areas in which there was no sound at all. A short explanation follows.

The two researchers used white noise -- hissing similar to what you might hear on an untuned FM radio -- because it's the most complex sound available, with exactly the same amount of energy at all frequencies. When they ran their algorithm on a computer, it reassigned each tone and plotted the data points on a graph in which the x-axis was time and the y-axis was frequency.
The resulting histograms showed thin, froth-like images, each "bubble" encircling a blue spot. Each blue spot indicated a zero, or a moment during which there was no sound at a particular frequency. "There is a theorem," Magnasco says, "that tells us that we can know what the sound was by knowing when there was no sound."
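To make the "silence" idea more concrete, here is a minimal sketch in Python. It is not the authors' reassignment algorithm: it uses an ordinary short-time Fourier transform of white noise and marks the local minima of the spectrogram magnitude, the near-zeros that play the role of the blue spots described above. The sample rate and window length are arbitrary choices for illustration.

import numpy as np
from scipy.signal import stft

rng = np.random.default_rng(0)
fs = 8000                           # sample rate in Hz (arbitrary choice)
noise = rng.standard_normal(fs)     # one second of white noise

# STFT of the noise: rows are frequencies, columns are time points,
# matching the x = time, y = frequency layout of the images below.
f, t, Z = stft(noise, fs=fs, nperseg=256)
mag = np.abs(Z)

# Treat a point as a candidate "zero" when its magnitude is smaller
# than that of its four immediate time-frequency neighbors.
zeros = []
for i in range(1, mag.shape[0] - 1):
    for j in range(1, mag.shape[1] - 1):
        if mag[i, j] < min(mag[i-1, j], mag[i+1, j], mag[i, j-1], mag[i, j+1]):
            zeros.append((t[j], f[i]))

print(f"found {len(zeros)} near-zeros in one second of white noise")

Each entry in zeros is a (time, frequency) pair where the sound energy dips to a local minimum, the "moments of silence" the theorem refers to.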

Let's turn to an example. Below are two images created by a computer that reassigned a sound's rate and frequency values using Magnasco's new algorithm. In them, "a single-frequency tone can be seen as it cuts through a background of white noise. The bright blue spots indicate the areas in this histogram where there was no sound at all." (Credit: Marcelo Magnasco)

Detection of a signal in a background of noise #1

The two images differ in the strength A of the signal (equal to 0 for the image above). As the strength of the signal to analyze increases, a horizontal line starts to appear. (Credit: Marcelo Magnasco)

Detection of a signal in a background of noise #2
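For readers who want to reproduce the effect behind these two images, here is a rough sketch: a pure tone of amplitude A buried in white noise, analyzed with an ordinary short-time Fourier transform standing in for Magnasco's algorithm. The 1 kHz tone frequency is an arbitrary choice; the point is simply that as A grows, energy concentrates along a single frequency row, which is the horizontal line in the second image.

import numpy as np
from scipy.signal import stft

rng = np.random.default_rng(1)
fs, f0 = 8000, 1000                 # sample rate and tone frequency in Hz
tt = np.arange(fs) / fs             # one second of sample times

for A in (0.0, 0.5, 2.0):           # increasing signal strength
    x = A * np.sin(2 * np.pi * f0 * tt) + rng.standard_normal(fs)
    f, t, Z = stft(x, fs=fs, nperseg=256)
    row = np.argmin(np.abs(f - f0))  # frequency bin nearest the tone
    ratio = np.abs(Z)[row].mean() / np.abs(Z).mean()
    print(f"A = {A:.1f}: energy near {f0} Hz is {ratio:.1f}x the overall average")

With A = 0 the ratio hovers around 1 (pure froth, as in the first image); as A increases, the row at the tone's frequency stands out more and more clearly.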

And where will this algorithm be used? According to Magnasco, there are many potential applications, not even limited to sound analysis. Here are some examples.

It can be used for any kind of data in which a series of time points is juxtaposed with discrete frequencies that need to be picked up. Radar and sonar both depend on this kind of time-frequency analysis, as does speech-recognition software. Medical tests such as electroencephalograms (EEGs), which measure multiple, discrete brainwaves, use it too.

If you want to know more about this algorithm, the research has been published in the Proceedings of the National Academy of Sciences under the title "Sparse time-frequency representations" (Vol. 103, No. 16, Pages 6094-6099, April 18, 2006).

The abstract and the full text of this open-access article are available online. If you prefer, the paper is also available in PDF format (6 pages, 1.81 MB).

Sources: Rockefeller University news release, June 7, 2006; and various web sites

