When you take a picture with your brand new digital camera, you're sometimes disappointed by the results and want to modify them. Today, such digital effects have to be done manually. So when I read a short article from The Engineer Online about a software tool developed by a team of German researchers at the Max Planck Institute which promises to automate this process, I wanted to know more. In fact, this software allows 'users to exchange or animate faces in images in an almost completely automated way.' For example, you can replace Mona Lisa's face with that of your girlfriend. Or you can see how you would look with a variety of hairstyles. Very handy!
In order to learn more about this software tool, I went to the project page at the Max Planck Institute. Here is the introduction.
In Computer Graphics, more and more applications require digital effects within existing images or video material, rather than creating virtual scenes entirely. Currently, tasks such as exchanging or animating faces in images are usually done manually with software for digital photo editing, which follows the same principle as conventional photographic print retouching: color values in images are changed locally in each point, and image regions are copied from one image into another. This is very tedious and requires the skill of an artist.
So the researchers, and Volker Blanz in particular, decided to develop a new method for exchanging or animating faces in images.
For the user, the new editing paradigm is no longer based on points (pixels), but on high-level descriptions such as "Person A", "Person B", and "Smile". Most importantly, our algorithm can insert a face from a given viewing direction into images at any other viewing direction and illumination, which was not possible with previous techniques.
Below is a diagram showing how an image can be transferred and inserted into another one, such as Mona Lisa smiling (Credit: Volker Blanz, Max Planck Institute).
And here are two more examples of famous paintings getting new faces (Credit: Volker Blanz, Max Planck Institute).
Here is a short explanation of how this technology works.
Given an image of a face, our algorithm computes the linear combination of example faces and the scene parameters that fit the input image best in terms of point-by-point image difference. In an analysis-by-synthesis loop, the algorithm draws a synthetic image (rendering operation R, with scene parameters rho), compares the result with the input, and updates the face model and scene parameters. Mathematically, this is a non-linear optimization problem. In order to help the system find the face in the image, the user has to click on 7 feature points, such as the eyes and the nose.
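The analysis-by-synthesis loop described above can be sketched in miniature. Everything below is my own simplification, not the authors' code: the "example faces" are random vectors instead of 3D scans, the rendering operation R is reduced to a linear combination with a single scene parameter rho (brightness), and the non-linear optimization is plain gradient descent on the point-by-point image difference.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the morphable face model: each "example face" is a
# flattened 16-pixel image vector (the real system uses 3D face scans).
examples = rng.normal(size=(3, 16))
examples /= np.linalg.norm(examples, axis=1, keepdims=True)

def render(alpha, rho):
    """R(alpha, rho): linear combination of example faces, with one
    scene parameter rho acting as overall brightness."""
    return rho * (alpha @ examples)

# Synthetic "input photograph" generated from known parameters.
input_image = render(np.array([0.6, 0.3, 0.1]), 1.4)

# Analysis-by-synthesis loop: render, compare point by point, update.
alpha = np.full(3, 1 / 3)      # model coefficients, initial guess
rho = 1.0                      # scene parameter, initial guess
lr = 0.05
for _ in range(2000):
    residual = render(alpha, rho) - input_image   # synthetic minus input
    # Gradients of the pixel-wise squared error 0.5 * ||residual||^2
    grad_alpha = rho * (examples @ residual)
    grad_rho = residual @ (alpha @ examples)
    alpha -= lr * grad_alpha
    rho -= lr * grad_rho

# After fitting, the synthetic image should match the input closely.
print(np.max(np.abs(render(alpha, rho) - input_image)))
```

Note that rho and alpha share a scale ambiguity in this toy setup (doubling rho while halving alpha gives the same image), which is harmless here since only the rendered image is compared against the input.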
In order to draw a face from one image into another, the 3D reconstruction algorithm is applied to both of them. Then, the 3D face from the source image is drawn with the pose and illumination parameters that were estimated from the other. The background behind the 3D face is the original target image. In this background, the image structures around the face are extended into the face region across the silhouette of the original face, which is important if the new face is smaller than the original. Strands of hair that cover the face have to be processed manually, and are drawn as a front layer for any new face.
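The compositing step can also be sketched with toy data. The arrays below are illustrative stand-ins, not the authors' implementation: `new_face` plays the role of the 3D face already re-rendered with the target's pose and illumination, and the "background extension" is a crude nearest-pixel fill rather than the paper's actual method.

```python
import numpy as np

# Toy 8x8 grayscale images standing in for the real photographs.
target = np.linspace(0.0, 1.0, 64).reshape(8, 8)   # target image (background)
new_face = np.full((8, 8), 0.5)                    # source face, re-rendered
old_mask = np.zeros((8, 8), dtype=bool)            # silhouette of original face
old_mask[2:7, 2:7] = True
new_mask = np.zeros((8, 8), dtype=bool)            # new, smaller face
new_mask[3:6, 3:6] = True

# Pixels the old face covered but the new one does not: fill them by
# extending background structures across the old silhouette (here, a
# crude copy of the nearest background pixel to the left).
filled = target.copy()
for i, j in zip(*np.where(old_mask & ~new_mask)):
    k = j
    while old_mask[i, k]:
        k -= 1
    filled[i, j] = target[i, k]

# Draw the new face over the extended background.
composite = np.where(new_mask, new_face, filled)

# Strands of hair covering the face are marked manually and drawn
# as a front layer, on top of any new face.
hair = np.zeros((8, 8), dtype=bool)
hair[2, 3:6] = True
composite = np.where(hair, target, composite)
```

The layering order matters: background extension first, then the new face, then the hair layer on top, which mirrors the description above.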
Obviously, this software tool can be used in a variety of photo-editing applications, but it could also be used for automated face recognition.
If you're interested in this -- fascinating -- subject, you should also read a technical paper named -- guess -- "Exchanging Faces in Images," published in the 'Proceedings of EUROGRAPHICS 2004' (PDF format, 8 pages, 6.28 MB). This paper contains other spectacular images, such as the insertion of 'normal' people into the poster for the movie 'Gone with the Wind.'
Sources: The Engineer Online, November 22, 2005; and various pages at the Max Planck Institute
You'll find related stories by following the links below.