You are probably using a camera with an autofocus feature today, and chances are good that you are satisfied with your shots. But once the camera has chosen to focus on a particular element, the picture is taken -- and sometimes that is not the choice you would have made. According to Texas-based LinuxElectrons, a team of computer scientists from Stanford University has developed a camera that might help. By inserting a microlens array between the main lens and the photosensor, their "light field camera" takes one shot but also captures information about the direction of the incoming light. You can then later compute "photographs in which subjects at every depth appear in finely tuned focus." However, this new technology will probably appear first in applications such as security surveillance and commercial photography before landing in your personal camera.
Here are some quotes from the LinuxElectrons article.
Ren Ng, a computer science graduate student in the lab of Pat Hanrahan, the Canon USA Professor in the School of Engineering, has developed a "light field camera" capable of producing photographs in which subjects at every depth appear in finely tuned focus.
"Currently, cameras have to make decisions about the focus before taking the exposure, which engineering-wise can be very difficult," said Ng. "With the light field camera, you can take one exposure, capture a lot more information about the light and make focusing decisions after you've already taken the shot. It is more flexible."
Below is a group portrait showing what a conventional camera would have produced -- with the focus on the center of the image (Credit: Ren Ng).
This second one was computed from a single exposure of the prototype light field camera -- also called a plenoptic camera -- and digitally refocused at a different depth (Credit: Ren Ng).
How does this "light field camera" work? LinuxElectrons has the answer -- and other details.
The light field camera adds an additional element -- a microlens array -- inserted between the main lens and the photosensor. Resembling the multi-faceted compound eye of an insect, the microlens array is a square panel composed of nearly 90,000 miniature lenses. Each lenslet separates back out the converged light rays received from the main lens before they hit the photosensor and changes the way the light information is digitally recorded.
Custom processing software manipulates this "expanded light field" and traces where each ray would have landed if the camera had been focused at many different depths. The final output is a synthetic image in which the subjects have been digitally refocused.
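To make the "re-sorting" idea concrete, here is a minimal sketch of shift-and-add refocusing in Python/NumPy. It is not the researchers' actual processing software: it assumes an idealized square microlens layout (each microlens covering an n_u x n_u block of sensor pixels), integer pixel shifts, and periodic image borders, and the helper names `decode_lightfield` and `refocus` are my own. Each sub-aperture view is shifted in proportion to its offset from the lens center, then all views are averaged -- subjects at the depth matching the shift parameter come out sharp.

```python
import numpy as np

def decode_lightfield(raw, n_u):
    """Slice a raw plenoptic sensor image into a 4D light field.
    Assumes each microlens covers an n_u x n_u block of pixels,
    so raw has shape (S*n_u, T*n_u). Returns L[u, v, s, t], where
    (u, v) indexes the direction (position under a microlens) and
    (s, t) indexes the microlens (spatial position)."""
    S, T = raw.shape[0] // n_u, raw.shape[1] // n_u
    # raw[s*n_u + u, t*n_u + v]  ->  L[u, v, s, t]
    return raw.reshape(S, n_u, T, n_u).transpose(1, 3, 0, 2)

def refocus(L, alpha):
    """Shift-and-add refocusing: shift each sub-aperture image (u, v)
    in proportion to its offset from the lens center, then average.
    alpha sets the synthetic focal depth; alpha = 0 reproduces the
    focus of the original exposure."""
    n_u, n_v, S, T = L.shape
    cu, cv = (n_u - 1) / 2.0, (n_v - 1) / 2.0
    out = np.zeros((S, T))
    for u in range(n_u):
        for v in range(n_v):
            du = int(round(alpha * (u - cu)))
            dv = int(round(alpha * (v - cv)))
            # np.roll wraps at the borders -- a simplification;
            # real pipelines crop or interpolate instead.
            out += np.roll(L[u, v], (du, dv), axis=(0, 1))
    return out / (n_u * n_v)
```

Sweeping `alpha` over a range of values produces the stack of "synthetic cameras" focused at different depths that the article describes; the paper itself works with continuous rays and sub-pixel shifts rather than this integer approximation.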
For more information, the researchers have published a paper about their new camera, "Light Field Photography with a Hand-Held Plenoptic Camera." Here is the first paragraph.
This paper presents a camera that samples the 4D light field on its sensor in a single photographic exposure. This is achieved by inserting a microlens array between the sensor and main lens, creating a plenoptic camera. Each microlens measures not just the total amount of light deposited at that location, but how much light arrives along each ray. By re-sorting the measured rays of light to where they would have terminated in slightly different, synthetic cameras, we can compute sharp photographs focused at different depths. We show that a linear increase in the resolution of images under each microlens results in a linear increase in the sharpness of the refocused photographs.
Here is a link to the full report (11.1 MB) and other resources.
I don't know when this technology will become available, but the good thing is that these new cameras will work exactly like today's -- except that they'll do more.
Sources: Robert Thomas, LinuxElectrons, Texas, November 1, 2005; and various web sites
You'll find related stories by following the links below.