Researchers rethink approaches to computer vision

Summary: Vast computing power is only one requisite to achieving an artificial visual system that is truly perceptive, and it has been far easier to deliver than the other key component: the mimicry of biological neural processing. The challenge has led neuroscientists and roboticists to reframe their approaches.

TOPICS: Hardware, CXO

Intel announced yesterday a 48-core chip that packs 1.3 billion transistors on a single processor. The computing power, according to Justin Rattner, the company's Chief Technology Officer, will pave the way to machines that "see and hear and probably speak and do a number of other things that resemble human-like capabilities."

But vast computing power is only one requisite to achieving an artificial visual system that's truly perceptive, and it has been far easier to deliver than the other key component: the mimicry of biological neural processing.

"Reverse engineering a biological visual system—a system with hundreds of millions of processing units—and building an artificial system that works the same way is a daunting task," said David Cox, Principal Investigator of the Visual Neuroscience Group at the Rowland Institute at Harvard. "It is not enough to simply assemble together a huge amount of computing power. We have to figure out how to put all the parts together so that they can do what our brains can do."

The challenge has led neuroscientists and roboticists to reframe their approaches. For instance, European researchers recently developed an algorithm that enables a robot to combine data from both sound and vision, allowing depth perception and helping it isolate objects.
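The article doesn't detail how the European algorithm merges the two senses, but a common way to combine independent depth estimates from different sensors is inverse-variance weighting, where the more reliable estimate gets the larger weight. The function and numbers below are illustrative assumptions, not the researchers' actual method:

```python
def fuse_depth(visual_depth_m, visual_var, audio_depth_m, audio_var):
    """Inverse-variance weighted fusion of two independent depth estimates.

    Each estimate is weighted by the reciprocal of its variance, so the
    fused value leans toward whichever sensor is currently more certain.
    """
    w_visual = 1.0 / visual_var
    w_audio = 1.0 / audio_var
    fused = (w_visual * visual_depth_m + w_audio * audio_depth_m) / (w_visual + w_audio)
    fused_var = 1.0 / (w_visual + w_audio)  # combined estimate is tighter than either alone
    return fused, fused_var

# Hypothetical readings: vision says 2.1 m (low variance), audio says 1.9 m (high variance)
fused, var = fuse_depth(2.1, 0.04, 1.9, 0.25)
```

The fused estimate lands between the two readings, closer to the lower-variance visual one, and its variance is smaller than either input's.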

Back in the U.S., David Cox (mentioned above), Nicolas Pinto, a Ph.D. candidate at MIT, and their team of Harvard and MIT researchers recently demonstrated a way to build better artificial visual systems by combining screening techniques from molecular biology with low-cost, high-performance gaming hardware donated by NVIDIA.

Below is an image of the 16-GPU 'monster' supercomputer built at the DiCarlo Lab (McGovern Institute for Brain Research at MIT) and the Cox Lab (Rowland Institute at Harvard University) to help build artificial vision systems. The 18" x 18" x 18" cube may be one of the most compact and inexpensive supercomputers in the world.

(Credit: Nicolas Pinto / MIT)

The team drew inspiration from genetic screening techniques, whereby a multitude of candidate organisms or compounds are screened in parallel to find those with a particular property of interest. So instead of building a single model and seeing how well it could recognize visual objects, they constructed thousands of candidate models and screened for those that performed best on an object recognition task.
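In code, the high-throughput screening idea can be sketched roughly as follows. The parameter space and the scoring function here are hypothetical placeholders standing in for the team's actual vision models and benchmark datasets:

```python
import random

def make_candidate():
    """Randomly sample one candidate model configuration.
    The parameter names and ranges are invented for illustration."""
    return {
        "n_filters": random.choice([16, 32, 64]),
        "filter_size": random.choice([3, 5, 7]),
        "threshold": random.uniform(0.0, 1.0),
    }

def evaluate(model, dataset):
    """Stand-in for scoring a model on an object-recognition task.
    A real screen would run the model on labeled images; here we
    just compute a toy score from the parameters."""
    return model["n_filters"] / 64 - abs(model["threshold"] - 0.5)

def screen(n_candidates, dataset):
    """Generate many candidate models in parallel fashion,
    score each one, and keep the best performer."""
    candidates = [make_candidate() for _ in range(n_candidates)]
    scored = [(evaluate(m, dataset), m) for m in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[0]  # (best_score, best_model)

best_score, best_model = screen(5000, dataset=None)
```

The appeal of the approach is that each candidate evaluation is independent, which is exactly the kind of workload that maps well onto banks of GPUs like the 16-GPU cluster described above.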

Their models outperformed a crop of state-of-the-art computer vision systems across a range of test sets, more accurately identifying a range of objects on random natural backgrounds with variation in position, scale, and rotation.

"Reverse and forward engineering the brain is a virtuous cycle. The more we learn about one, the more we can learn about the other," says Cox. "Tightly coupling experimental neuroscience and computer engineering holds the promise to greatly accelerate both fields."

The video below illustrates the computer vision challenge and the researchers' approach:

Finding a better way for computers to "see" from Cox Lab @ Rowland Institute on Vimeo.

Source: PLoS Computational Biology: A High-Throughput Screening Approach to Discovering Good Forms of Biologically Inspired Visual Representation


Talkback

8 comments
  • So some of today's most brilliant minds,

    coupled with some of the most "advanced" technology that those minds can conceive are struggling to make a facsimile of the human visual system: a system that somehow "miraculously" evolved over eons of time with no intelligent input or direction, just squillions of blind (pun intended) "trial and error" attempts. Is that what biological evolutionists are telling us and would have us believe? And the fact that we have to put so much effort and INFORMATION into CREATING an artificial facsimile - which is still monumentally less than the real thing - doesn't give the same people any pause at all in ascribing the marvels of the human body to random chance? And the fact that there aren't enough seconds in the history of the universe (presuming the current figure of 9 billion-odd 365-day years) to even allow for the successful development, using random, mindless, undirected evolution, of even the necessary proteins to make the DNA to make a single cell is just by the by?

    Ah, the lengths we'll go to in order to deny God's existence and all that His very nature / purpose (relationship with us) suggests!
    IslandBoy_77
    • Doh!

      Did you actually understand what you read in this story?

      They are trying multiple vision algorithms and letting natural selection pick what works best for object recognition. This is evolution in action, not some proof of creation.
      wkulecz
    • What!

      What part of the article involved believing mythical being?
      I don't do science in your religious (mythology) blogs so don't do mythology in science blogs.
      Agnostic_OS
  • RE: Researchers rethink approaches to computer vision

    Interesting but old approach, and what's the point of object recognition based on a prerequisite? It still sounds (appears) to me like this method requires a database of pre-defined objects to draw comparisons against (AI, Artificial Intelligence).

    Intelligence is real, albeit intangible. Understanding why it's real is the key.
    mrjoctave@...
    • "Understanding" may be programmable

      Mrjoctave@...,

      Great point. The AI component is a challenge outside the scope of the post, albeit a logical next question. But for everyday service robots performing habitual tasks, patterns in the field of vision can be "learned" and acted upon.

      Chris
      christopher_jablonski
      • i think it is...

        I understand what you're saying in regards to everyday service robots, but I still say what's the point... unless we need to see what the robot is seeing.

        Using standard cameras as a visionary aid for robots just seems feeble compared with the possibilities of modelling vision on a range of techniques and technologies, i.e. ultrasonics for gauging distance and size, infrared to ultraviolet spectrums to identify compounds/properties... and a camera so I can see what the robot sees in real time (or recorded).

        There's a need for modelling even when using a camera (processing the images to define edges and relative distance, for instance), so why waste resources restricting a robot to see how we see when it can do and be so much more.

        Another notion is based on the fact that most things we interact with (other than nature) are tagged and serialised, so identification doesn't even need to be based on a visual concept; RF technologies come to mind.

        PS: I do believe/know it is possible to program understanding, albeit (well, almost) theoretically, but ask me in a few years' time; I might just have a more definitive answer.
        mrjoctave@...
  • RE: Researchers rethink approaches to computer vision

    If we end up with a robot that can find my "lost" glasses - great!


    Happy festering season, and
    a preposterous New Year
    Agnostic_OS