A computer science professor hopes to use open-source software and super-high resolution photos to capture three-dimensional lifelike models of the world's treasures, effectively preserving their current state.
Professor Pedro Sander in front of the 150 billion-pixel photo (Credit: Darren Pauli/ZDNet Australia)
Under the plans, a sequence of many thousands of super-high resolution photographs taken in batches from several angles would be stitched together to form detailed pictures and then rendered into 3D form.
The effect would reveal minute detail of an object rendered in 3D, allowing future generations to view objects in their present state.
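The stitching idea behind this pipeline can be sketched in miniature: find the overlap at which two photo strips agree best, then join them at that seam. This toy, stdlib-only version works on single scanlines of brightness values; real stitchers match image features across thousands of frames and blend the seams, so everything here is illustrative only.

```python
# Toy stitching sketch: pick the overlap length where strip b best
# continues strip a (lowest mean squared difference), then join them.
def best_overlap(a, b, min_overlap=3):
    """Return the overlap length at which strip b best continues strip a."""
    best, best_err = min_overlap, float("inf")
    for k in range(min_overlap, min(len(a), len(b)) + 1):
        err = sum((x - y) ** 2 for x, y in zip(a[-k:], b[:k])) / k
        if err < best_err:
            best, best_err = k, err
    return best

def stitch(a, b):
    """Join two overlapping strips at their best-matching seam."""
    k = best_overlap(a, b)
    return a + b[k:]

row_a = [10, 20, 30, 40, 50]
row_b = [30, 40, 50, 60, 70]
print(stitch(row_a, row_b))  # overlap of 3 -> [10, 20, 30, 40, 50, 60, 70]
```

Scaled up to 11,000 overlapping photos in two dimensions, the same match-then-merge principle drives the panorama software Sander's team used.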
The technology builds on the work of Hong Kong University of Science and Technology Professor Pedro Sander, who, with a team of PhD students, stitched together the world's largest photograph at a staggering 150 billion pixels.
He used a $1200 camera, an 800-millimetre lens, a robotic arm and a free open-source application that combined some 11,000 18-megapixel images.
"For historical purposes, we can capture say a statue as it is in this point of time so if there is a change in the world and the statue is gone, generations will be able to see it as it was," Sander told ZDNet Australia.
"It will show a snapshot of the city in a point in time. You will be able to zoom in and see what people are doing, how they lived. This is our final goal.
"You could imagine the uses for the human body, to see a stitched representation of organs, or detail of the skin."
He may also create a 3D picture — the type viewed with red/green glasses — which would mesh two images taken from adjacent angles.
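The meshing step he describes can be sketched in a few lines: one colour channel comes from the left-eye view and the remaining channels from the right-eye view, so each lens of the glasses filters through a different image (classic anaglyphs pair red with cyan; the function name and flat pixel lists here are illustrative, not his software).

```python
# Toy anaglyph sketch: red channel from the left view, green and blue
# from the right view. Pixels are (r, g, b) tuples in a flat list;
# a real pipeline would operate on full image arrays.
def anaglyph(left_pixels, right_pixels):
    """Merge two views into a single anaglyph image."""
    return [
        (l[0], r[1], r[2])  # red from left eye, green/blue from right eye
        for l, r in zip(left_pixels, right_pixels)
    ]

left = [(200, 10, 10), (180, 20, 20)]
right = [(50, 120, 130), (60, 110, 140)]
print(anaglyph(left, right))  # [(200, 120, 130), (180, 110, 140)]
```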
It will also help tourists see a city in detail never previously possible — a capability Sander agrees may pique the interest of the likes of Google.

The 150 billion-pixel photo was shrunk down to a smaller image so that students could manually smooth out the brightness variation between the combined photos using Adobe Photoshop. The corrections took about three weeks and were then mapped back onto the full-size image.
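That hand-smoothing step could in principle be automated with simple gain compensation: scale each tile so its mean brightness moves to a shared reference level. The sketch below treats tiles as flat lists of grayscale values; the function name and approach are an assumption for illustration, not a description of the team's actual workflow.

```python
# Toy gain compensation: scale every tile so its average brightness
# matches the average across all tiles, clamping at the 255 maximum.
def equalize_brightness(tiles):
    """Scale each tile toward the mean brightness of all tiles."""
    means = [sum(t) / len(t) for t in tiles]
    target = sum(means) / len(means)
    return [
        [min(255, v * (target / m)) for v in t]
        for t, m in zip(tiles, means)
    ]

# A dark tile and a bright tile both land on the shared mean of 150.
print(equalize_brightness([[100, 100], [200, 200]]))
```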
The 700GB photo took about a week to upload to the internet, and was processed with a standard PC beefed up with 24GB of RAM which allowed the individual pictures to be processed simultaneously.
Sander's forte is geometry processing; his work and that of his five colleagues and about 35 students in the images and graphics department cover an impressive gamut of technology.
Cameras set in a circle for 3D modelling (Credit: Darren Pauli/ZDNet Australia)
- Better gaming: smart rendering designed by Sander, a former designer with graphics house ATI, is used in some modern best-sellers like Gears of War 2 and titles from Valve. The algorithm identifies components of a frame that do not need re-rendering, such as shadows, and carries them forward through subsequent frames, freeing up computer resources to improve graphic quality.
- Chinese calligraphy: software that allows a digital scratch pad to mimic Chinese calligraphy, rendering a natural ink flow as an artist makes strokes with a stylus. It was used to design art for the 2008 Olympics in Beijing.
- 3D models: a program that maps 2D images based on correspondence points to determine depth and build a 3D model image.
- Photos restored: combining blurry and noisy images to recover a perfect photo. The software estimates motion in the blur to trace back to the original.
- High-res movies: enlarging low-resolution video clips without quality loss, using software that estimates the movement and colour of objects.
- Clear maps: software that amalgamates mapping lines, such as migration paths, making them more readable and detailed to the human eye.
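The 3D-models item above rests on a standard geometric fact: once two views of the same point are matched, depth follows from how far the point shifts between them. This sketch uses the textbook stereo relation depth = focal_length × baseline / disparity for two parallel cameras; all parameter values are illustrative, not drawn from the lab's software.

```python
# Toy depth-from-correspondence sketch for two parallel cameras a
# known baseline apart: nearer points shift more between the views.
def depth_from_disparity(focal_px, baseline_m, x_left, x_right):
    """Depth (metres) of a point seen at x_left/x_right in two images."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("matched point must shift leftward between views")
    return focal_px * baseline_m / disparity

# e.g. 800 px focal length, 0.5 m baseline, 20 px disparity -> 20 m away
print(depth_from_disparity(800, 0.5, 420, 400))  # 20.0
```

Repeating this over many correspondence points yields the cloud of 3D positions from which a model can be built.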