
Finding where a photo has been shot

It's not the first time that Carnegie Mellon University (CMU) researchers have played with Flickr image collections. Last year, they used Flickr to edit our photos. Now, they're trying to estimate the geographic location of a single photo by comparing it to a database of 6 million pictures accurately geolocated by Flickr users. According to the researchers, their method allowed them to correctly guess the location of an unknown photo 30 times better than pure chance (how did they measure this?). Even though their system seems to work well, it is not perfect yet. For example, the 'architecturally unique Sydney Opera House seemed to the computer to be similar to a hotel in Mississippi as well as a bridge in London.' But read more...
Written by Roland Piquepaille

CMU's IM2GPS project: Paris

As you can see above, the algorithm correctly matched an image query of the Cathedral of Notre Dame in Paris (left) with its nearest neighbors (surrounded by yellow frames on the right). (Credit: CMU)

CMU's IM2GPS project: Tanzania

However, for an image taken in Tanzania, the algorithm gave a location in Kenya as the most probable. (Credit: CMU) But if you've ever visited these two countries, you'll agree with me that their national parks share many visual similarities.

So, how does this algorithm work? "The IM2GPS algorithm developed by computer science graduate student James Hays and Alyosha Efros, assistant professor of computer science and robotics, doesn't attempt to scan a photo for location clues, such as types of clothing, the language on street signs, or specific types of vegetation, as a person might do. Rather, it analyzes the composition of the photo, notes how textures and colors are distributed and records the number and orientation of lines in the photo. It then searches Flickr for photos that are similar in appearance."
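To make that description concrete, here is a minimal sketch (not the researchers' actual code) of the kind of descriptor the article describes: a color histogram combined with a histogram of gradient orientations standing in for the texture and line statistics, followed by a nearest-neighbor lookup. The function names and parameters are my own illustrative choices.

```python
import numpy as np

def scene_descriptor(image, bins=8):
    """Crude scene descriptor: a joint color histogram plus a histogram
    of gradient orientations (a stand-in for the texture and line
    statistics IM2GPS records). `image` is an H x W x 3 float array
    with values in [0, 1]."""
    # Color distribution: a joint histogram over the three channels.
    color_hist, _ = np.histogramdd(
        image.reshape(-1, 3), bins=(bins, bins, bins), range=[(0, 1)] * 3
    )
    color_hist = color_hist.ravel() / max(color_hist.sum(), 1)

    # Line/texture statistics: orientations of intensity gradients.
    gray = image.mean(axis=2)
    gy, gx = np.gradient(gray)
    angles = np.arctan2(gy, gx).ravel()
    orient_hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    orient_hist = orient_hist / max(orient_hist.sum(), 1)

    return np.concatenate([color_hist, orient_hist])

def nearest_neighbors(query, database, k=3):
    """Indices of the k database descriptors closest to the query (L2)."""
    dists = np.linalg.norm(database - query, axis=1)
    return np.argsort(dists)[:k]
```

Each photo in the 6-million-image collection would be reduced to such a descriptor once, and a query photo is then matched against the whole database by distance, exactly the "find other photos that look like it" step described above.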

Here are some quotes from the two computer scientists. "'We're not asking the computer to tell us what is depicted in the photo but to find other photos that look like it,' Efros said. 'It was surprising to us how effective this approach proved to be. Who would have guessed that similarity in overall image appearance would correlate to geographic proximity so well?' [...] 'It seems there's not as much ambiguity in the visual world as you might guess,' said Hays. 'Estimating geographic information from images is a difficult, but very much a doable, computer vision problem.'"

For more information, here is a link to the IM2GPS project home page. Here are some of the goals of the project. "Estimating geographic information from an image is an excellent, difficult high-level computer vision problem whose time has come. The emergence of vast amounts of geographically-calibrated image data is a great reason for computer vision to start looking globally -- on the scale of the entire planet! In this paper, we propose a simple algorithm for estimating a distribution over geographic locations from a single image using a purely data-driven scene matching approach. For this task, we will leverage a dataset of over 6 million GPS-tagged images from the Internet."
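The phrase "a distribution over geographic locations" is the key idea: the output is not a single pin on the map but a density over the globe. A toy illustration (again, not the paper's method) is to place a Gaussian kernel at the GPS coordinates of each matched neighbor and evaluate the resulting density on a grid of candidate locations; plain Euclidean distance in degree space is used here for simplicity instead of great-circle distance.

```python
import numpy as np

def location_distribution(neighbor_coords, grid, bandwidth=5.0):
    """Toy geolocation estimate: a Gaussian kernel at each matched
    neighbor's (lat, lon), evaluated on a grid of candidate locations.
    `neighbor_coords` is (n, 2) and `grid` is (m, 2), both in degrees.
    Returns a probability over the m grid cells."""
    diffs = grid[:, None, :] - neighbor_coords[None, :, :]   # (m, n, 2)
    sq_dist = (diffs ** 2).sum(axis=2)                       # (m, n)
    density = np.exp(-sq_dist / (2 * bandwidth ** 2)).sum(axis=1)
    return density / density.sum()
```

The argmax of this density is the "most probable" location reported in the examples above, which is how an image from Tanzania can plausibly land in neighboring Kenya: the matched neighbors cluster in the same region even when the single best guess is off.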

The two researchers will present their results on June 26 in the Poster Session P3P-1 at the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2008), held in Anchorage, Alaska, on June 24-26, 2008. Here is a link to their paper, "IM2GPS: estimating geographic information from a single image" (PDF format, 8 pages, 11.38 MB), from which the above images have been extracted.

Here is an excerpt from the conclusions. "We believe that estimating geographic information from images is an excellent, difficult, but very much doable high-level computer vision problem whose time has come. The emergence of so much geographically-calibrated image data is an excellent reason for computer vision to start looking globally – on the scale of the entire planet! [...] In conclusion, this paper is the first to be able to extract geographic information from a single image. It is also the first time that a truly gargantuan database of over 6 million geolocated images has been used in computer vision. While our results look quite promising, much work remains to be done. We hope that this work might jump-start a new direction of research in geographical computer vision."

Sources: Carnegie Mellon University, June 18, 2008; and various websites

