Foodstuffs can be identified by their packets, even if the name of the foodstuff is written in non-Latin characters, such as the Chinese pack of soup seasoning pictured above.
Businesses can use the application for inventory purposes, or to train staff to recognise different electrical components, says Accenture.
A "three-dimensional" image of an object can also be uploaded onto the phone, allowing the user to view the virtual object from different angles. The motion-tracking technology Accenture uses for this is OpenCV (the Open Source Computer Vision Library), a free library of algorithms originally developed by Intel. This could be used, for example, to train employees to recognise particular items of stock in a warehouse.
Foreign languages and characters can be translated into the user's language, so a user can find out what an object is. Search results can be personalised, so the user can be alerted if a foodstuff contains a certain allergen, for example.
The phone captures video of the object at 10 frames per second, and the frames are sent to a database in real time over a "video calling" connection, a low-latency communications medium.
The database that is used for video search can be built automatically; Accenture has written spiders to crawl the web and download images on a specific theme such as Asian food.
Linaker, pictured above, explained that to pinpoint the features needed to identify an object, each image is run through an algorithm called the Scale-Invariant Feature Transform, or SIFT, developed by the computer-vision researcher David Lowe.
According to Linaker, the software extracts feature points from each JPEG frame and matches them against images in the database at the camera's full frame rate of 10 frames per second. If a match is found, the software on the server retrieves the associated information and sends it back to the user's phone.
The advantage of video is that if one frame catches a "bad angle", another image for comparison arrives "a few milliseconds later", according to Linaker.