Research into drug development, health, engineering, astronomy and a range of other disciplines could be accelerated thanks to a new project, CAVE2, underway at Monash University.
The Cave Automatic Virtual Environment 2 is a powerful tool that enables researchers to visualise data sets ranging from tiny molecules in drug research through to massive cross-sections of the universe.
CAVE2, analogous to a 21st-century version of a microscope's viewfinder, consists of a room of 80 digital screens displaying 3D images that create the illusion that researchers are walking around inside the image. These visualisations can also be 'played' forward or backward over time, adding a fourth dimension.
According to Monash senior research fellow, Dr David Barnes, CAVE2 will allow researchers to gain new insights into, for example, how rocks undergoing carbon sequestration respond over time, or how particular areas of the brain change while performing memory tasks.
"A lot of science these days collects very large data sets, so the first application of the CAVE is that it ramps up the amount of information that can be displayed at once, and be gotten into a human brain for comprehension and learning," he told ZDNet.
"For the [radio astronomy project], one of the major areas of research is what are called transients — things that turn on and off, like radio pulsars or gamma ray bursts — all of those domains of science are generating large time-based data sets and we see the CAVE as a new facility at Monash for starting to digest and understand those data sets visually."
Using the human eye and brain to assess data sets — rather than using algorithms to find known connections in a data set — might lead to new connections being discovered that would otherwise have been missed, Barnes said.
"We want to find the key applications that will lead to major discoveries, major papers, Nobel prizes and so forth," he said.
According to Barnes, the first iteration of the CAVE concept was pioneered by the Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago almost two decades ago, and used rear-projection screens to achieve a sense of immersion.
In contrast, the technology supporting CAVE2, which was designed by teams from Monash, EVL and Dell, is largely based on clusters of Dell workstations. Each rack-mounted workstation is equipped with two Quadro K5000 graphics cards: one for rendering the image and another to provide GPU-based post-processing on the fly. Each workstation powers four 3D screens.
Combined, the cluster of workstations provides some 90 teraflops of computing power, or more than one teraflop per screen. A 10Gbps network, upgradable to 20Gbps, connects the workstations and screens, and a 60Gbps link connects the workstations to a local storage server.
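Those headline figures hang together arithmetically. A rough back-of-the-envelope check (assuming exactly 80 screens, each workstation driving four of them, and the 90-teraflop total spread evenly across the cluster — the article does not state the workstation count directly):

```python
# Back-of-the-envelope check of the CAVE2 cluster figures quoted above.
# Assumptions (not stated outright in the article): exactly 80 screens,
# 4 screens per workstation, 90 teraflops total spread evenly.

screens = 80
screens_per_workstation = 4
total_teraflops = 90

workstations = screens // screens_per_workstation
teraflops_per_screen = total_teraflops / screens
teraflops_per_workstation = total_teraflops / workstations

print(workstations)                # 20 workstations in the cluster
print(teraflops_per_screen)        # 1.125 -- "more than one teraflop per screen"
print(teraflops_per_workstation)   # 4.5 teraflops per workstation
```

That implies a cluster of roughly 20 workstations, with each pair of Quadro K5000s contributing on the order of 4.5 teraflops.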
Rather than rely on touchscreens to manipulate images, 14 cameras arranged throughout the CAVE track the 3D glasses worn by the system's primary user, allowing the displayed image to be re-rendered according to the direction in which the user is looking.
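The core idea behind that head tracking is off-axis projection: as the tracked glasses move, each screen's view frustum is recomputed so the rendered scene appears fixed in space rather than fixed to the screen. A minimal sketch (not the actual CAVE2 software; the screen geometry and coordinates here are illustrative assumptions):

```python
# Sketch of head-tracked off-axis projection, the technique behind
# re-rendering an image for a tracked viewer. Assumes an axis-aligned
# screen facing the -z direction; a real system would use the full
# screen corner positions and the glasses' orientation as well.

def off_axis_frustum(head, screen_center, half_width, half_height, near=0.1):
    """Return (left, right, bottom, top) near-plane extents for a screen
    viewed from `head`. All coordinates are (x, y, z) tuples in metres."""
    hx, hy, hz = head
    cx, cy, cz = screen_center
    dist = hz - cz                  # distance from the eye to the screen plane
    scale = near / dist             # project screen edges onto the near plane
    left = (cx - half_width - hx) * scale
    right = (cx + half_width - hx) * scale
    bottom = (cy - half_height - hy) * scale
    top = (cy + half_height - hy) * scale
    return left, right, bottom, top

# As the tracked head moves right, the frustum shifts left, so the
# image appears anchored in the room rather than glued to the screen.
centred = off_axis_frustum((0.0, 0.0, 2.0), (0.0, 0.0, 0.0), 1.0, 0.6)
shifted = off_axis_frustum((0.5, 0.0, 2.0), (0.0, 0.0, 0.0), 1.0, 0.6)
```

With the head centred the frustum is symmetric; once the viewer steps sideways it becomes asymmetric, which is what keeps the 3D illusion stable for the tracked user (and is why only one primary user can be tracked at a time).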
"We didn't go with touch screens as it is very hard to get touch screen in 3D screens of this size," Barnes explained. "Also, touch doesn't make a lot of sense for an image in this mode — the brain will just get confused."