The Ghost in Your Machine

Computers may soon monitor your work, notice when fatigue sets in, and fix mistakes. Scary? No more than a good secretary, says researcher Chris Forsythe.
Written by Patricia O'Connell, Contributor

The world of smart computers -- machines that would be familiar with your habits and know when you're stressed or fatigued -- could be only a few years away. The computers would note the mental logic you use to organize information and save your files the same way. They would accurately infer your intent, remember past experiences (for instance, that you tend to make errors in multiplication), and alert you to mistakes.

These so-called cognitive machines -- essentially, smart software that can be part of any computer environment -- are already here in prototype, having been developed over the past five years by a team of computer scientists and cognitive psychologists at the Energy Dept.'s Sandia National Laboratories. The software monitors everything you do and creates a mathematical model of your behavior, such as your patterns in saving information or doing your work. Think of it as an advanced cousin of today's software which, after you've typed in a few letters of someone's address in an e-mail, suggests the rest.
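The article doesn't describe Sandia's actual algorithms, but the basic idea of watching a user's actions and suggesting the habitual choice -- the "advanced cousin" of e-mail autocomplete -- can be sketched as a simple frequency model. Everything here (the `BehaviorModel` class and its methods) is illustrative, not the team's real software:

```python
from collections import Counter, defaultdict

class BehaviorModel:
    """Toy model of user habits: counts which folder a user picks
    for each kind of file, then predicts the usual choice."""

    def __init__(self):
        self.counts = defaultdict(Counter)

    def observe(self, file_type, folder):
        # Record one observed save action.
        self.counts[file_type][folder] += 1

    def suggest(self, file_type):
        # Return the folder this user most often chose, or None
        # if the model has never seen this kind of file.
        seen = self.counts[file_type]
        return seen.most_common(1)[0][0] if seen else None

model = BehaviorModel()
model.observe("invoice", "Finance")
model.observe("invoice", "Finance")
model.observe("invoice", "Desktop")
print(model.suggest("invoice"))  # -> Finance
```

A real cognitive model would go far beyond counting -- the point is only that the software's knowledge of the user is built up passively from observed behavior, not configured by hand.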

At their most benign, smart computers seem like executive secretaries for those of us who can't afford one -- offering tremendous advances in productivity. Yet some fear that the concept suggests an ominous encroachment out of a sci-fi movie. Cognitive psychologist Chris Forsythe, who leads the Sandia team, insists that the machines are designed to augment -- not replace -- human activity. "We don't want to take the human out of the loop," he says. The simplest versions of these cognitive machines could hit production in as little as one to two years.

Forsythe talked to BusinessWeek Online Reporter Olga Kharif on Aug. 19 about how cognitive machines will change our world. Edited excerpts of the interview follow.

Q: How would you characterize the current state of human-machine interaction?
A: The biggest problem is that if you're the user, for the most part the technology doesn't know anything about you. The onus is on the user to learn and understand how the technology works. What we would like to do is reverse that equation so that it becomes the responsibility of the computer to learn about the user.

The computer would have to learn what the user knows, what the user doesn't know, how the user performs everyday, common functions. It would also recognize when the user makes a mistake or doesn't understand something.

Q: Could you give me an example of a prototype of a system that you've already built?
A: One of the systems we built last year has a function called discrepancy detection. We give the machine a cognitive model of an air-traffic controller -- essentially, a mathematical model of that operator's behavior. The operator watches events going on in the world around him, and the computer sits there "watching" all the same things, using the cognitive model to interpret what's going on.

Thanks to our software, when you stop the simulation and ask the computer and the operator, "What do you think is going on right now?" about 90% of the time you get the same answer from both. Such a computer could alert the operator to a problem the operator hasn't picked up on yet.
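Forsythe's discrepancy detection amounts to comparing the model's interpretation of a situation against the operator's and raising an alert where they diverge. A minimal sketch of that comparison, with a made-up air-traffic snapshot (the flight IDs, status labels, and `discrepancy_alert` function are all hypothetical):

```python
def discrepancy_alert(model_view, operator_view):
    """Return the items the cognitive model interprets differently
    from the human operator -- candidates for an alert."""
    return [item for item, status in model_view.items()
            if status != operator_view.get(item)]

# Hypothetical snapshot: the model has noticed a developing
# conflict that the operator has not registered yet.
model_view = {"flight_102": "on course", "flight_417": "converging"}
operator_view = {"flight_102": "on course", "flight_417": "on course"}
print(discrepancy_alert(model_view, operator_view))  # -> ['flight_417']
```

In the roughly 10% of cases where the two interpretations differ, a mismatch like this is exactly what would prompt the computer to flag a problem the operator hasn't picked up on yet.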
