MPEG-4, VRML 2.0 breathe life into avatars

The VRML dream of a worldwide standard for multi-user virtual environments has come a step nearer.

Between now and September 24, the VRML/MPEG-4 liaison working group is to compile a list of requirements from the VRML community for the MPEG committee. The working group was formed in the aftermath of SIGGRAPH 97 (Special Interest Group on Computer Graphics).

MPEG (Moving Picture Experts Group) is a working group of the ISO (International Organization for Standardization) in charge of developing international standards for the compression, decompression, processing, and coded representation of moving pictures, audio and their combination. At present MPEG-2 provides compression for video, but MPEG-4 will add specialised routines for face encoding, body-movement encoding, and 3D data. According to Rob Koenen, author of the MPEG-4 Overview, it is already "heavily influenced by VRML".

MPEG-4 is to be released in November 1998 and will become an International Standard in January 1999.

VRML (Virtual Reality Modeling Language) has emerged as the de facto standard for the transmission of 3D data over the Internet. VRML has progressed from describing static 3D scenes to describing animations and sound. The next step is to standardise avatar support (multi-user representation). These plans centre on the need for common protocols for carrying voice, face and body data.
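To illustrate that progression from static geometry to animation (a hand-written sketch, not taken from either standard under discussion), a minimal VRML 2.0 world might define a shape and then route clock events through an interpolator to make it move:

```
#VRML V2.0 utf8
# A red sphere that completes one rotation every four seconds.
DEF Spinner Transform {
  children [
    Shape {
      appearance Appearance {
        material Material { diffuseColor 1 0 0 }
      }
      geometry Sphere { radius 1 }
    }
  ]
}
DEF Clock TimeSensor { cycleInterval 4 loop TRUE }
DEF Turner OrientationInterpolator {
  key      [ 0, 0.5, 1 ]
  keyValue [ 0 1 0 0,  0 1 0 3.14159,  0 1 0 6.28318 ]
}
# Wire the clock to the interpolator, and the interpolator to the shape.
ROUTE Clock.fraction_changed TO Turner.set_fraction
ROUTE Turner.value_changed   TO Spinner.set_rotation
```

The first block alone would be a static VRML 1.0-style scene; the TimeSensor, interpolator and ROUTE statements are the VRML 2.0 additions that animate it. Standardised avatars would extend this same event model to streamed face and body data.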

This overlap has raised the prospect of a joint standard. For the end-user, this would bring a multi-user environment in which face, voice and body motion can be streamed, whether for a worldwide interactive presentation or even for deathmatch Quake, where the players' voices and facial expressions are represented in the game.