You can spend a long time searching satellite images for interesting locations. Now imagine a tool that not only shows you a location, but scans large geographical areas for visually similar features and presents the matches side by side.
In 2002, Carnegie Mellon's (CMU) School of Computer Science launched what it calls the world's "first PhD program in Machine Learning". The program set out to study how to build systems that learn automatically and use their experience to improve their results.
A group of CMU professors and students have now created a visual search tool for satellite imagery called Terrapattern.
It is an interface for finding what the team calls "more like this, please" in satellite photos. Not a company or start-up, Terrapattern is an "experimental research prototype, developed in a university setting".
It was developed at the Frank-Ratchye Studio for Creative Inquiry at Carnegie Mellon University, with support from the John S. and James L. Knight Foundation Prototype Fund.
Its creators are Golan Levin, David Newbury, and Kyle McDonald, along with Carnegie Mellon students Irene Alvarado, Aman Tiwari, and Manzil Zaheer.
Described as a "visual search engine for satellite imagery", the project uses a Deep Convolutional Neural Net (DCNN) to assist with image recognition.
The open-source, open-access project was created by a collaborative team of artists, creative technologists, and students. It is particularly useful for locating things that aren't usually indicated on maps.
The project currently covers several cities, including Pittsburgh, San Francisco, New York City, Detroit, Berlin, Miami, and Austin.
Clicking an interesting spot on Terrapattern's map will find other geographical locations that look similar to your selection. You can download a list of these locations in GeoJSON format.
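GeoJSON is a plain-text JSON format, so the exported results are easy to work with in any language. The sketch below parses a small FeatureCollection shaped like such an export and pulls out the matched coordinates; the field names (such as `similarity`) are illustrative assumptions, not Terrapattern's exact schema.

```python
import json

# A minimal GeoJSON FeatureCollection, shaped like the kind of export
# a visual search tool might produce. The "similarity" property is a
# hypothetical field, not Terrapattern's documented schema.
geojson_text = """
{
  "type": "FeatureCollection",
  "features": [
    {"type": "Feature",
     "geometry": {"type": "Point", "coordinates": [-79.99, 40.44]},
     "properties": {"similarity": 0.97}},
    {"type": "Feature",
     "geometry": {"type": "Point", "coordinates": [-80.01, 40.46]},
     "properties": {"similarity": 0.94}}
  ]
}
"""

collection = json.loads(geojson_text)

# Pull out the (longitude, latitude) pair for each matched location.
coords = [f["geometry"]["coordinates"] for f in collection["features"]]
print(coords)  # [[-79.99, 40.44], [-80.01, 40.46]]
```

From here the coordinates could be plotted on a map or fed into any GIS tool that accepts GeoJSON.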
The tool is useful for locating specialized "non-building structures" and other forms of otherwise "unremarkable soft infrastructure" that are not usually called out on maps.
It learns which visual features are important for classifying satellite imagery and makes these features searchable.
It then computes descriptions for millions more satellite photos covering various regions of interest. Its algorithm pre-computes relationships between these descriptions, allowing searches to complete in just a second or two.
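The core idea, find the tiles whose pre-computed descriptors are closest to the query tile's descriptor, can be sketched in a few lines. The toy three-element vectors and tile names below are stand-ins for the high-dimensional DCNN features the project actually uses; this is a minimal illustration of descriptor-based similarity search, not Terrapattern's implementation.

```python
import math

# Toy "descriptors": one short feature vector per map tile, standing in
# for the high-dimensional neural-network features a real system would
# pre-compute. Tile names and values are invented for illustration.
descriptors = {
    "tile_a": [0.9, 0.1, 0.0],
    "tile_b": [0.85, 0.15, 0.05],
    "tile_c": [0.1, 0.9, 0.3],
}

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def most_similar(query_tile, k=2):
    """Rank the other tiles by descriptor similarity to the query tile."""
    q = descriptors[query_tile]
    ranked = sorted(
        ((name, cosine_similarity(q, vec))
         for name, vec in descriptors.items() if name != query_tile),
        key=lambda pair: pair[1],
        reverse=True,
    )
    return ranked[:k]

# tile_b's descriptor points in nearly the same direction as tile_a's,
# so it comes back as the top match.
print(most_similar("tile_a"))
```

In practice, comparing a query against millions of tiles this way would be too slow, which is why the project pre-computes the relationships between descriptors (for example with an approximate nearest-neighbour index) so each search takes only a second or two.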
You can look for all the parks in an area, search for unusual swimming pool shapes, or have a look at golf courses in the region. The project is intended to present a new way of exploring and discovering "patterns of interest" to understand and organize the world.
More locations will be added to the project "soon", but for now, have a look at all the similarities in the city you choose. Unique features are not as unique as they first seem.