
Researchers rethink approaches to computer vision

Vast computing power is only one requisite for an artificial visual system that is truly perceptive, and it has so far been far easier to deliver than the other key component: the mimicry of biological neural processing. The challenge has led neuroscientists and roboticists to reframe their approaches.
Written by Chris Jablonski

Intel announced yesterday a 48-core chip that packs 1.3 billion transistors on a single processor. The computing power, according to Justin Rattner, the company's Chief Technology Officer, will pave the way to machines that "see and hear and probably speak and do a number of other things that resemble human-like capabilities."

But vast computing power is only one requisite for an artificial visual system that's truly perceptive, and it has so far been far easier to deliver than the other key component: the mimicry of biological neural processing.

"Reverse engineering a biological visual system—a system with hundreds of millions of processing units—and building an artificial system that works the same way is a daunting task," said David Cox, Principal Investigator of the Visual Neuroscience Group at the Rowland Institute at Harvard. "It is not enough to simply assemble together a huge amount of computing power. We have to figure out how to put all the parts together so that they can do what our brains can do."

The challenge has led neuroscientists and roboticists to reframe their approaches. For instance, European researchers recently developed an algorithm that lets a robot combine data from sound and vision to achieve depth perception and help isolate objects.

Back in the U.S., Cox, Nicolas Pinto, a Ph.D. candidate at MIT, and their team of Harvard and MIT researchers recently demonstrated a way to build better artificial visual systems by combining screening techniques from molecular biology with low-cost, high-performance gaming hardware donated by NVIDIA.

Below is an image of the 16-GPU 'monster' supercomputer built at the DiCarlo Lab (McGovern Institute for Brain Research at MIT) and the Cox Lab (Rowland Institute at Harvard University) to help build artificial vision systems. The 18" x 18" x 18" cube is reportedly one of the most compact and inexpensive supercomputers in the world.


(Credit: Nicolas Pinto / MIT)

The team drew inspiration from genetic screening techniques, whereby a multitude of candidate organisms or compounds are screened in parallel to find those with a particular property of interest. So instead of building a single model and seeing how well it could recognize visual objects, they constructed thousands of candidate models and screened for those that performed best on an object recognition task.
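The screening idea can be sketched in a few lines of Python. This is a toy illustration, not the researchers' code: the "models" here are random linear decision rules on synthetic 2-D data, standing in for the thousands of biologically inspired vision models the team actually screened.

```python
# Toy sketch of high-throughput model screening (hypothetical stand-in task):
# instead of hand-tuning one model, generate many random candidates and
# keep whichever scores best on a recognition task.
import random

random.seed(0)

# Synthetic "recognition" task: classify 2-D points by which side of the
# line y = x they fall on.
points = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
data = [((x1, x2), 1 if x2 > x1 else 0) for x1, x2 in points]

def make_candidate():
    # Each candidate "model" is just a random linear rule (w1, w2, b).
    return (random.uniform(-1, 1), random.uniform(-1, 1),
            random.uniform(-0.1, 0.1))

def accuracy(model, samples):
    # Fraction of samples the candidate classifies correctly.
    w1, w2, b = model
    correct = sum(1 for (x1, x2), label in samples
                  if (1 if w1 * x1 + w2 * x2 + b > 0 else 0) == label)
    return correct / len(samples)

# Screen thousands of candidates and keep the best performer,
# mirroring how a genetic screen selects for a property of interest.
candidates = [make_candidate() for _ in range(5000)]
best = max(candidates, key=lambda m: accuracy(m, data))
print("best candidate accuracy:", round(accuracy(best, data), 2))
```

In the actual study the candidates were far richer (multi-layer, biologically inspired visual models) and the evaluation ran on GPUs, but the screening loop itself has this shape: generate, evaluate in bulk, select.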

Their models outperformed a crop of state-of-the-art computer vision systems across a range of test sets, more accurately identifying a range of objects on random natural backgrounds with variation in position, scale, and rotation.

"Reverse and forward engineering the brain is a virtuous cycle. The more we learn about one, the more we can learn about the other," says Cox. "Tightly coupling experimental neuroscience and computer engineering holds the promise to greatly accelerate both fields."

The video below illustrates the computer vision challenge and the researchers' approach:

Finding a better way for computers to "see" from Cox Lab @ Rowland Institute on Vimeo.

Source: PLoS Computational Biology: A High-Throughput Screening Approach to Discovering Good Forms of Biologically Inspired Visual Representation
