
Google, MIT's AI instantly fixes your smartphone snaps as you shoot

Google's new neural network can learn to edit like a human.
Written by Liam Tung, Contributing Writer

Retouching smartphone snaps after taking them could soon be a thing of the past, thanks to new computational photography techniques developed by Google.

Google has produced a new image-processing algorithm that builds on a cloud-based system for automatically retouching images developed by MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL).

MIT's system, developed in 2015, sent a low-resolution version of a photo to the cloud, which returned a tailored 'transform recipe' used to edit the high-resolution image stored on the phone.

Google used machine learning to train a neural network to do what MIT's system did in the cloud, and the resulting image algorithm is efficient enough to run that processing on the phone itself, delivering a processed viewfinder image within milliseconds.
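The core idea can be illustrated with a minimal sketch: perform the expensive edit on a small copy of the image, fit a compact transform that describes that edit, and then apply the transform to the full-resolution image. The sketch below is only a toy illustration, it fits a single global affine color transform with least squares, whereas the actual Google/MIT system predicts local affine transforms in a bilateral grid using a trained neural network.

```python
# Toy illustration of the "transform recipe" idea, NOT the actual
# Google/MIT implementation (which uses a trained network and a
# bilateral grid of local affine transforms).
import numpy as np

def fit_color_transform(low_res, low_res_edited):
    """Fit one global 3x4 affine color transform (RGB matrix + bias)
    mapping the low-res original to its edited version, via least squares."""
    src = low_res.reshape(-1, 3).astype(np.float64)
    dst = low_res_edited.reshape(-1, 3).astype(np.float64)
    # Append a constant column so the transform includes an offset term.
    src_aug = np.hstack([src, np.ones((src.shape[0], 1))])
    recipe, *_ = np.linalg.lstsq(src_aug, dst, rcond=None)  # shape (4, 3)
    return recipe

def apply_color_transform(full_res, recipe):
    """Apply the fitted 'recipe' to the full-resolution image."""
    h, w, _ = full_res.shape
    pixels = full_res.reshape(-1, 3).astype(np.float64)
    pixels_aug = np.hstack([pixels, np.ones((pixels.shape[0], 1))])
    out = pixels_aug @ recipe
    return np.clip(out, 0, 255).astype(np.uint8).reshape(h, w, 3)

# Example with synthetic data: a simple brightness/contrast "edit".
rng = np.random.default_rng(0)
full = rng.integers(0, 256, size=(1080, 1920, 3), dtype=np.uint8)
low = full[::8, ::8]                          # cheap stand-in for downsampling
low_edited = np.clip(low * 1.2 + 10, 0, 255)  # the "expensive" edit, done small
recipe = fit_color_transform(low, low_edited)
preview = apply_color_transform(full, recipe)  # fast full-res result
```

The payoff is that the costly computation touches only the small image; applying the fitted transform at full resolution is cheap, which is what makes a real-time viewfinder feasible on a phone.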

The work is presented in a joint paper by Google and MIT researchers, describing an algorithm that "processes high-resolution images on a smartphone in milliseconds, provides a real-time viewfinder at 1080p resolution, and matches the quality of state-of-the-art approximation techniques on a large class of image operators."

Image operators handle tasks such as selfie enhancements, filters, image slicing, color correction, and so on.

Apple, Microsoft, Google, and others already use computational photography to improve the quality of snaps despite hardware constraints.

The iPhone's dual-camera module, Microsoft's Pix app, and Google's Pixel HDR+ are all examples of computational photography at work, relying on on-device algorithms to improve images.

However, as the paper notes, HDR+ is an example of a programmatically defined image operator. The Google and MIT neural network is capable of reproducing HDR+ and several other operators.

Google tested its technique on a Pixel phone and rendered 1,920x1,080 images into a final processed preview within 20 milliseconds. Processing also scales linearly with image size, so a 12-megapixel image took 61 milliseconds.

Google sees potential for the new algorithm to deliver real-time image enhancements with a better viewfinder and less impact on the battery.

"Using machine learning for computational photography is an exciting prospect but is limited by the severe computational and power constraints of mobile phones," Google researcher Jon Barron told MIT News.

"This paper may provide us with a way to sidestep these issues and produce new, compelling, real-time photographic experiences without draining your battery or giving you a laggy viewfinder experience."
