The way it works now is that you type in a word, and if you're lucky and a person somewhere online has uploaded an image labeled with your search word, then you'll see that image in your search results.
But what if the search engine didn't have to rely on people labeling images correctly? What if the computer could just recognize images itself, the way people do?
A number of researchers have tested their image-recognition systems against the database as it developed over the last few years. This summer, when two Google researchers tested their system against it, they found it performed twice as well as other "neural networks," systems that try to mimic human brain functions.
ImageNet could be the ticket to solving one of the trickiest problems facing people working on artificial intelligence: getting machines to recognize images the way humans do.
How it was assembled
In creating the database, Li faced the challenge of the rapidly expanding number of images online.
“In the age of the Internet, we are suddenly faced with an explosion in terms of imagery data,” Li told The New York Times. “Facebook has 200 billion images, and people are now uploading 72 hours of new video every minute on YouTube.”
She crowdsourced the work through Amazon.com's Mechanical Turk, a service that recruits people to do tasks that computers cannot yet handle. The Times reports:
Each year, ImageNet employs 20,000 to 30,000 people who are automatically presented with images to label, receiving a tiny payment for each one. The average turker can identify about 250 images in five minutes. The ImageNet database now has 14,197,122 images, indexed into 21,841 categories.
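The quoted figures imply a rough sense of the human effort involved. Here is a back-of-the-envelope sketch, using only the numbers cited above; the function names are illustrative and not part of any ImageNet or Mechanical Turk API.

```python
# Estimate the total labeling effort behind ImageNet from the
# article's figures: 250 images per worker per 5 minutes, and a
# database of 14,197,122 images.

IMAGES_PER_BATCH = 250       # images one worker labels per sitting
MINUTES_PER_BATCH = 5        # time that sitting takes
TOTAL_IMAGES = 14_197_122    # database size cited in the article

def labeling_rate() -> float:
    """Images labeled per worker-minute."""
    return IMAGES_PER_BATCH / MINUTES_PER_BATCH

def total_worker_hours(total_images: int = TOTAL_IMAGES) -> float:
    """Total human labeling time implied by the quoted rate."""
    return total_images / labeling_rate() / 60

print(f"{labeling_rate():.0f} images per worker-minute")
print(f"~{total_worker_hours():,.0f} worker-hours to label the full database")
```

At the quoted rate, labeling the whole database works out to several thousand worker-hours, which is why spreading the work across 20,000 to 30,000 people a year is necessary.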
via: The New York Times
photo: screenshot from ImageNet
This post was originally published on Smartplanet.com