How our brain sees objects in 3-D

Summary: Neuroscientists at Johns Hopkins University (JHU) have discovered how we see objects in depth. Computers may beat humans at chess, but they still can't match us at object recognition. This JHU research 'suggests that higher-level visual regions of the brain represent objects as spatial configurations of surface fragments, something like a structural drawing.' The work could lead to better treatments for patients with perceptual disorders. More surprisingly, the approach could be used in museums, letting visitors 'view a series of computer-generated 3-D shapes and rate them aesthetically.' But read more...

How we see objects in depth

You can see above how we see objects in depth. "To illustrate how complex three-dimensional shape could be encoded at the population level, five two-Gaussian tuning models (red, green, blue, cyan and magenta) from our neural sample are projected onto a three-dimensional rendering (right) of the larger figure in Henry Moore's "Sheep Piece" (1971-1972, left; reproduced by permission of the Henry Moore Foundation, http://www.henry-moore-fdn.co.uk). Tuning models were scaled and rotated to optimize correspondence. A small number of neurons representing surface fragment configurations would uniquely specify an arbitrary three-dimensional shape of this kind and would carry the structural information required for judging its physical properties, functionality (or lack thereof) and aesthetic value." (Credit: Johns Hopkins University/Nature Neuroscience) Here is a link to a larger version of this image.

This research was led by Yukako Yamane, a postdoctoral fellow in the Mind/Brain Institute directed by Charles (Ed) Connor, an associate professor in the Department of Neuroscience at Johns Hopkins University.

How did the researchers proceed? They "trained two rhesus monkeys to look at a computer monitor while 3-D pictures of objects were flashed on the screen. At the same time, the researchers recorded electrical responses of individual neurons in higher-level visual regions of the brain. A computer algorithm was used to guide the experiment gradually toward object shapes that evoked stronger responses. This evolutionary stimulus strategy let the experimenters pinpoint the exact 3-D shape information that drove a given cell to respond."
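The quoted paragraph describes a genetic-algorithm-style search in stimulus space: candidate shapes that evoke stronger neural responses seed the next generation of stimuli. The paper's actual algorithm and shape parameterization are not detailed here, so the sketch below is a hypothetical, simplified version, with shapes reduced to plain parameter vectors and the recorded neuron replaced by a stand-in scoring function:

```python
import random

def evolve_stimuli(response_fn, dim=8, pop_size=20, generations=30,
                   keep=5, mutation_scale=0.15):
    """Illustrative evolutionary stimulus search: each 'shape' is a
    parameter vector; the shapes that evoke the strongest responses
    survive and seed the next generation with small mutations."""
    random.seed(0)  # deterministic for the sake of the example
    population = [[random.uniform(-1, 1) for _ in range(dim)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Score each candidate shape by the response it evokes.
        scored = sorted(population, key=response_fn, reverse=True)
        parents = scored[:keep]  # elitism: best shapes carry over unchanged
        # Refill the population with mutated copies of the best shapes.
        population = parents + [
            [p + random.gauss(0, mutation_scale)
             for p in random.choice(parents)]
            for _ in range(pop_size - keep)
        ]
    return max(population, key=response_fn)

# Stand-in "neuron": responds most strongly to shapes near a hidden
# preferred configuration (here, the vector of all 0.5s).
target = [0.5] * 8
best = evolve_stimuli(lambda s: -sum((a - b) ** 2 for a, b in zip(s, target)))
```

In the experiment itself, the scoring function would be the firing rate of a real recorded neuron, and the evolved stimuli reveal the 3-D surface-fragment configuration that the cell is tuned to.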

Obviously, these findings "on object coding in the brain have implications for treating patients with perceptual disorders. In addition, they could inform new approaches to computer vision. Connor also believes that understanding neural codes could help explain why visual experience feels the way it does, perhaps even why some things seem beautiful and others displeasing. 'In a sense, artists are neuroscientists, experimenting with shape and color, trying to evoke unique, powerful responses from the visual brain,' Connor said."

In fact, the JHU team plans to collaborate with Gary Vikan, the director of the Walters Art Museum in Baltimore, who is "a strong believer in the power of neuroscience to inform the interpretation of art."

Here is a short quote about this future application in museums. "'My interest is in finding out what happens between a visitor's brain and a work of art,' said Vikan. 'Knowing what effect art has on patrons' brains will contribute to techniques of display -- lighting and color and arrangement -- that will enhance their experiences when they come into the museum.' The plan is to let museum patrons view a series of computer-generated 3-D shapes and rate them aesthetically. The same computer algorithm will be used to guide evolution of these shapes; in this case, based on aesthetic preference."

This research was published in Nature Neuroscience under the title "A neural code for three-dimensional object shape in macaque inferotemporal cortex" (November 2008, Volume 11, Number 11, Pages 1352-1360). It's even on the cover of the November 2008 issue.

Here is an excerpt from the abstract. "We used an evolutionary stimulus strategy and linear/nonlinear response models to characterize three-dimensional shape responses in macaque monkey inferotemporal cortex (IT). We found widespread tuning for three-dimensional spatial configurations of surface fragments characterized by their three-dimensional orientations and joint principal curvatures. Configural representation of three-dimensional shape could provide specific knowledge of object structure to support guidance of complex physical interactions and evaluation of object functionality and utility."

As you can guess, you need to open your wallet to read this article from the link above. But Nature Neuroscience accepted this paper as an advance online publication and made it available on October 5, 2008. And you can still read it either in an HTML version or as a PDF document (10 pages, 992 KB) -- probably for a limited time.

However, here are two additional links to the figures featured in the article and to 17 additional images provided in the supplementary information (PDF format, 17 pages, 2.40 MB).

Sources: Johns Hopkins University news release, October 28, 2008; and various websites
