
Our next teachers: avatar experts

Researchers from Illinois and Florida are developing a system that will create virtual representations of real people in order to preserve their knowledge. They will use artificial intelligence and natural language processing software to enable us to interact with these avatars. The goal of the project, sponsored by the National Science Foundation (NSF), is to let us talk with these virtual representations as if they were the actual person, complete with the ability to understand and answer questions. We should see the first results at the beginning of 2008 -- if the researchers succeed.
Written by Roland Piquepaille

This is a collaborative project between the Intelligent Systems Laboratory (ISL) at the University of Central Florida (UCF) in Orlando and the Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago (UIC). The EVL team is in charge of computer graphics and avatar development, while the ISL team will focus on the artificial intelligence and natural language processing software.

To show the degree of realism the researchers are aiming for, here is a picture of UIC researchers next to their avatars (Credit: EVL).

[Image: UIC researchers and their avatars]

As Jason Leigh, director of EVL and the project's principal investigator, puts it: "The goal is to combine artificial intelligence with the latest advanced graphics and video game-type technology to enable us to create historical archives of people beyond what can be achieved using traditional technologies such as text, audio and video footage."

How will the team reach this goal?

EVL will build a state-of-the-art motion-capture studio to digitize the image and movement of real people, who will then go on to live a virtual eternity in virtual reality. Their knowledge will be archived in databases. Their voices will be analyzed to create synthesized but natural-sounding "virtual" voices. Their mannerisms will be studied and used in creating the 3-D virtual forms, known technically as avatars.
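
To make that pipeline a bit more concrete, here is a minimal Python sketch of how a digitized person's assets (voice model, mannerisms, archived knowledge) might be bundled into an avatar. Every class, field, and value below is a hypothetical stand-in of mine; the article says nothing about the project's actual data model.

```python
from dataclasses import dataclass, field

# Hypothetical data model for an archived person -- illustrative only.

@dataclass
class VoiceModel:
    """Parameters distilled from voice recordings for a synthesized voice."""
    pitch_hz: float
    speaking_rate_wpm: float
    sample_phrases: list = field(default_factory=list)

@dataclass
class Mannerism:
    """A recurring gesture captured in the motion-capture studio."""
    name: str        # e.g. "nods while listening"
    trigger: str     # dialog context that cues the gesture

@dataclass
class Avatar:
    """A virtual representation assembled from the captured assets."""
    subject: str
    voice: VoiceModel
    mannerisms: list
    knowledge: dict  # topic -> archived answer

    def answer(self, topic: str) -> str:
        """Look up an archived answer; fall back gracefully if unknown."""
        return self.knowledge.get(topic, "I don't have that on record.")

# Example: an archived program manager, queried by topic.
pm = Avatar(
    subject="NSF program manager",
    voice=VoiceModel(pitch_hz=110.0, speaking_rate_wpm=150.0),
    mannerisms=[Mannerism("leans forward", "when asked for advice")],
    knowledge={"grant review": "Panels weigh intellectual merit first."},
)
print(pm.answer("grant review"))
```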

And guess who will be the first teacher? A senior NSF program manager.

The project's test subject will be a senior NSF program manager known for his wealth of institutional knowledge. A UIC graduate student will shadow this official for several months, making video and voice recordings. His likeness will be digitally reconstructed, and interviews used to glean his institutional insights will be stored in the knowledge database. This will allow NSF personnel to consult his virtual counterpart whenever they want to tap his institutional wisdom.

But don't worry! If this NSF project is successful, there will be plenty of virtual teachers in the months and years to come. For more information, you should read what NSF writes about the grant awarded for the project, named "Towards Life-like Computer Interfaces that Learn." Here is an excerpt.

This collaborative project, developing and evaluating lifelike, natural computer interfaces as portals to intelligent programs in the context of Decision Support System (DSS), aims at providing a natural interface that supports realistic spoken dialog, non-verbal cues, and the capability of learning to maintain its knowledge current and correct. The research objectives focus around the development of an avatar-based interface with which the DSS user can interact. Communication with the avatar takes place in spoken natural language combined with gesture expressions or by pointing on the screen. The system supports speaker-independent continuous speech input as a spontaneous dialog within the specified DSS domain. A robust backend that can respond intelligently to the questions asked by the DSS user is expected to generate the responses spoken in reply by the avatar with realistic inflection and visual expressions.
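
The abstract sketches a loop: recognize continuous speech, match the question against archived knowledge, and have the avatar speak the answer with realistic inflection and expression. Here is a rough Python sketch of that loop; every function below is a stub I invented for illustration, not part of the project's actual software.

```python
# Illustrative interaction loop for an avatar-fronted decision support
# system (DSS). All functions are invented stubs, not project code.

def transcribe(audio: bytes) -> str:
    """Stand-in for speaker-independent continuous speech recognition."""
    return "What is the status of the pending award?"

def query_backend(utterance: str, knowledge: dict) -> str:
    """Match the utterance against archived institutional knowledge."""
    for topic, answer in knowledge.items():
        if topic in utterance.lower():
            return answer
    return "Let me note that question so the archive can be updated."

def render(avatar_name: str, text: str) -> None:
    """Stand-in for synthesized speech plus facial animation."""
    print(f"[{avatar_name}, with inflection and gesture]: {text}")

knowledge = {"pending award": "It cleared panel review last week."}
utterance = transcribe(b"")  # raw microphone input, stubbed out
render("Virtual program manager", query_backend(utterance, knowledge))
```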

The idea is a bright one, but will it find a commercial market? We'll find out next year.

Sources: University of Illinois at Chicago news release, March 12, 2007; and various websites
