Can 'friendly' AI save humans from irrelevance or extinction?

The fate of the human species depends on AI (Artificial Intelligence) entities far smarter than we are that aren't prone to wiping us out or enslaving us. That is one of the topics to be discussed by luminaries in the AI world at the Singularity Summit 2007, held at the Palace of Fine Arts in San Francisco September 8-9.
Written by Dan Farber

I spoke with Eliezer Yudkowsky, co-founder of the Singularity Institute for Artificial Intelligence, about his idea of Friendly AI and the challenges of achieving self-reflective AI systems far beyond the capacity of human intelligence. It is the stuff of science fiction, yet our ancestors of 10,000 years ago, equipped with the same grey matter, would never have dreamed of people on the moon or the iPhone. (You can download the podcast here.)

Yudkowsky prefers the term "Intelligence Explosion" to "Singularity," but the resulting issues are similar. In 1965, statistician I. J. Good posited an "intelligence explosion," in which machines surpass human intellect and recursively augment their own mental abilities beyond those of their creators:

"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make."

Science fiction author and mathematician Vernor Vinge wrote about the Singularity in 1993:

"We are on the edge of change comparable to the rise of human life on earth. The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence."

More recently, futurist and inventor Ray Kurzweil defined the Singularity as an "era in which our intelligence will become increasingly nonbiological and trillions of times more powerful than it is today—the dawning of a new civilization that will enable us to transcend our biological limitations and amplify our creativity."

Kurzweil predicts that by 2029, $1,000 of computation will equal 1,000 times the capacity of the human brain, and that non-biological intelligence will continue to grow exponentially while biological intelligence remains effectively fixed.
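
To make the compounding behind that prediction concrete, here is a minimal back-of-envelope sketch in Python. It is mine, not Kurzweil's: it assumes, purely for illustration, that compute per dollar doubles roughly once a year and that $1,000 buys about one-thousandth of a human brain's capacity today; neither figure comes from the article.

# Illustrative only: exponential growth in compute per dollar versus a
# fixed biological baseline. Both starting values below are assumptions.
BRAIN_FRACTION_PER_1000_DOLLARS = 0.001  # assumed: $1,000 buys 1/1,000 of a brain today
DOUBLING_PERIOD_YEARS = 1                # assumed doubling time for compute per dollar
TARGET = 1000.0                          # Kurzweil's figure: 1,000 brain-equivalents for $1,000

year = 2007
capacity = BRAIN_FRACTION_PER_1000_DOLLARS
while capacity < TARGET:
    year += DOUBLING_PERIOD_YEARS
    capacity *= 2  # one doubling per period

print(f"Under these assumptions, $1,000 buys ~1,000 brain-equivalents around {year}")

# Going from 0.001 to 1,000 brain-equivalents is a factor of one million,
# i.e. about 20 doublings, which with a one-year doubling time lands in the
# late 2020s, in the neighborhood of Kurzweil's 2029 prediction.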

Yudkowsky gets into the "practical" side of the Singularity and the intelligence explosion with his notion of Friendly AI. As you'll hear in the podcast, he believes that the key to the future is creating self-improving AI that is stable, engineered to be benevolent and humane, and ethically optimized by humans for humans.

"The mission is to reach into the space of possible minds and pluck out a good one," he said. He admits that plucking out a good one, and 'programming' the behavior of systems that modify themselves, is a extremely difficult challenge.

Yudkowsky isn't predicting when self-improving, higher-intelligence AI might appear. He is working on the approaches and the math that would allow an AI system to recognize undesirable modifications to itself as undesirable. Given human nature, it's hard not to imagine a future with sectarian AIs engaged in virtual wars, with humans caught in the middle. I guess I have seen too many movies like I, Robot...but at least humans eventually triumph in American cinema.

See also: Barney Pell: Pathways to artificial intelligence

Steve Jurvetson: AI, nanotech and the future of the human species

Steve Omohundro: Building self-aware AI systems
