A small Californian start-up company plans to launch a "light field" camera that will let you refocus images after you have taken them. Users will be able to shift between having a sharp foreground and a blurry background, or vice versa, or having everything sharp. The Lytro camera -- which the company says will be competitively priced and fit in your pocket -- will also be capable of creating 3-D images. Although the idea is aimed at consumers, it will find many professional uses, especially in scientific, medical and surveillance applications.
According to one of the company's backers, Andreessen Horowitz: "Lytro’s breakthrough technology will make conventional digital cameras obsolete. It has to be seen to be believed." Venture capitalist Marc Andreessen is best known as co-founder of Netscape, and has helped Lytro raise around $50 million.
Lytro was launched yesterday (June 21) as a new company. It has put a Gallery of light field images on its website so that visitors can experiment with changing the focus of various pictures. This requires Adobe Flash.
Whenever such a breakthrough is announced, especially in an area as old as photography, it's tempting to dismiss it as a hoax. In this case, however, Lytro is based on technology developed over a long period by Marc Levoy's group at Stanford University. Lytro, which was originally called Refocus Imaging, was founded by one of Stanford's star students, Ren Ng, who earned his PhD for light field research. With Levoy and others, he wrote a technical report, Light Field Photography with a Hand-Held Plenoptic Camera. The abstract says:
This paper presents a camera that samples the 4D light field on its sensor in a single photographic exposure. This is achieved by inserting a microlens array between the sensor and main lens, creating a plenoptic camera. Each microlens measures not just the total amount of light deposited at that location, but how much light arrives along each ray. By re-sorting the measured rays of light to where they would have terminated in slightly different, synthetic cameras, we can compute sharp photographs focused at different depths. We show that a linear increase in the resolution of images under each microlens results in a linear increase in the sharpness of the refocused photographs. This property allows us to extend the depth of field of the camera without reducing the aperture, enabling shorter exposures and lower image noise. Especially in the macrophotography regime, we demonstrate that we can also compute synthetic photographs from a range of different viewpoints. These capabilities argue for a different strategy in designing photographic imaging systems.
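The refocusing the abstract describes — re-sorting the measured rays into slightly different synthetic cameras — can be illustrated with a simple "shift-and-add" scheme: treat the light field as a grid of sub-aperture images and average them, shifting each in proportion to its position in the aperture. The sketch below is a minimal illustration of that idea, not Lytro's actual pipeline; the array layout and the `refocus` function are assumptions made for the example.

```python
import numpy as np

def refocus(light_field, alpha):
    """Synthetically refocus a 4D light field by shift-and-add.

    light_field: array of shape (U, V, S, T) -- a (U, V) grid of
    sub-aperture images, each (S, T) pixels, as recorded behind a
    microlens array (hypothetical layout for this sketch).
    alpha: relative depth of the synthetic focal plane
    (1.0 reproduces the original focal plane).
    """
    U, V, S, T = light_field.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Each sub-aperture view is shifted in proportion to its
            # offset from the aperture centre; (1 - 1/alpha) selects
            # which depth ends up sharp in the synthetic photograph.
            du = (u - U / 2) * (1 - 1 / alpha)
            dv = (v - V / 2) * (1 - 1 / alpha)
            shift = (int(round(du)), int(round(dv)))
            out += np.roll(light_field[u, v], shift, axis=(0, 1))
    return out / (U * V)
```

With alpha = 1 no shifts are applied, so the result is just the average of the sub-aperture views — equivalent to the photograph a conventional camera would have taken at the original focus.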
The plenoptic camera idea comes from work done at the MIT Media Lab in 1992 by John Wang and Edward Adelson, whose paper on the Plenoptic Camera and its Applications (PDF) has diagrams that show how the system works. The Stanford technical report shows Ng with his prototype light field camera: a 16-megapixel Contax 645 with added microlenses. The ultimate resolution depends on the microlenses.
Early light field research was done with multiple cameras and used supercomputers to do the image processing, which is based on Fourier slicing. What's new is that cheap microprocessors are becoming fast enough to do the image processing in real time, and they will only get faster in the future.
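The Fourier connection comes from the shift theorem: translating an image is equivalent to multiplying its spectrum by a phase ramp, which is how sub-pixel ray re-sorting can be done without interpolation artefacts. The helper below is a hedged sketch of just that shift step, not the full 4D Fourier slicing algorithm from Ng's work; the function name is an assumption for the example.

```python
import numpy as np

def fourier_shift(img, dy, dx):
    """Shift a 2D image by (dy, dx) pixels via the Fourier shift theorem.

    Multiplying the spectrum by exp(-2*pi*i*(fy*dy + fx*dx)) translates
    the image, and the shifts need not be whole pixels.
    """
    fy = np.fft.fftfreq(img.shape[0])[:, None]  # vertical frequencies
    fx = np.fft.fftfreq(img.shape[1])[None, :]  # horizontal frequencies
    ramp = np.exp(-2j * np.pi * (fy * dy + fx * dx))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * ramp))
```

For whole-pixel shifts this agrees with a plain circular shift of the array, but it also accepts fractional offsets, which is what makes it useful for refocusing.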
Can light field photography become ubiquitous? As someone who has been in the field for decades, I don't see why not. Consumers find it perfectly normal to focus an image on an LCD camera screen before they take a picture. Other things being equal, it could soon seem perfectly normal to refocus the image afterwards. Perhaps the real issue is how much they will be willing to pay for it.
Note: The Wall Street Journal's All Things D blog has a brief video interview with Dr Ng, "Camera Start-Up Offers a Whole New Perspective".