
Yale bioethicist warns of singularity's perils at futurist gathering

The World Future Society's yearly confab got underway last night in Boston with a keynote from Wendell Wallach, a bioethicist, lecturer and scholar at Yale University.
Written by Chris Jablonski

Last night, the World Future Society's yearly confab got underway in Boston with a keynote from Wendell Wallach, a lecturer and scholar at Yale University. Judging by the audience of several hundred, the topics of artificial intelligence, the singularity, and their societal implications are of interest across all demographics.

Wallach is a pioneer in the nascent field of robot ethics and has captured the imaginations of futurists with his theories on artificial moral agents and computational ethics. In fact, he designed the world's first course on the subject at Yale, and last year he published a book, Moral Machines: Teaching Robots Right from Wrong.

Wallach immediately engaged the future-hungry audience with a video clip from the opening scene of 2001: A Space Odyssey, in which a hominid first wields a bone as a tool. For better and for worse, man will shape technology, and in turn, technology will shape man. He told the crowd to get ready for a "wild roller coaster ride through emerging technologies." And he delivered.

What followed was a barrage of points, which he rounded out to a nice list of 10. The highlights:

First among them: Homo sapiens were not the first toolmakers; that honor goes to Homo habilis. Nor are we the only species to use tools, he noted, pointing to the several kinds of animals that also use them. The important issue for us, however, is that we're seeing the beginning of a co-evolution between human beings and their technologies. We now evolve culturally and are as much a product of our culture as of our biology.

According to Wallach, life expectancy is growing at a rate of 1 year every 10 years. He asks, "What if we doubled life expectancy? What would the societal impact be?" We're now attacking death as a disease, a battle that can be traced back to the mid-1800s, when germ theory came to the fore. A simple solution to germs is ordinary bar soap, he remarked, to the crowd's delight.
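A quick back-of-the-envelope reading of that question (my arithmetic, not Wallach's): if gains stay linear at 1 year per decade, taking a roughly 80-year expectancy to 160 would require (160 - 80) × 10 = 800 years, so doubling it on any nearer horizon implies the rate itself must accelerate. That acceleration is precisely the scale of change he was asking the audience to weigh.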

Wallach then moved on to the singularity. For the uninitiated, the term singularity is borrowed from physics and, in this context, means the point 20, 30, or 100 years out (depending on who you ask) at which technological progress will enable computers to reproduce the same level of intelligence as humans. "The change would be so dramatic that it has to be called the singularity," he said.

Wallach hasn't entirely bought into the idea, however, calling himself a friendly skeptic. "We are far from understanding human intelligence and the qualities to pull this off." He then proceeded to parse the topic into three areas: complexity, thresholds, and societal/ethical implications.

Reaching the computational capacity of the human brain is within sight, but there are other things about the brain that can't be overlooked. For instance, it engages in massive parallel processing and extensive looping, and we don't know how it self-organizes. Damage the brain and degradation is limited; on a desktop, if one bit is out of place, the computer locks up.

The thresholds to human-level computer intelligence include things like vision, language, and locomotion, which are well on their way, said Wallach. But a science of consciousness has emerged. "Why do we experience anything at all? You need to be conscious to know semantics. We don't understand if consciousness is unique to carbon-based systems," he explained.

Also consider that humans are composed of thousands of interlinked subsystems, all of which are required for us to function the way we do; when they fail, we suffer mental breakdowns and disease. Can you replicate this kind of complexity with a computer?

The next segment of the presentation focused on where we are today with computers and how we'll progress from here to the singularity.

Computers today are limited to specific functions. But researchers are now mapping a new field of inquiry into Artificial Moral Agents: the implementation of moral decision-making faculties in artificial agents so that they have basic ethical sensitivity.

"We are in the midst of huge change," he said, pointing to the robotization of the military. "By 2050, 1/3 of all ground and air vehicles will be unmanned...is this a good idea or a recipe for future disasters?

These machines have operational morality. In other words, they embody the values of the designers or corporations that build them. But we are moving into an era where robots will need to evaluate decisions themselves. There are three approaches: top-down, in which explicit ethical rules or theories are programmed into the machine; bottom-up, in which moral capacities develop through learning and experience; and hybrids of the two.
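Wallach showed no code, but a minimal sketch may help fix the distinction. The Python below illustrates only the top-down case, with candidate actions screened against explicit rules before the agent acts; the Action fields and the rules themselves are illustrative assumptions of mine, loosely echoing the Asimov-style laws discussed next, not anything presented in the talk.

```python
# Minimal top-down sketch: screen each candidate action against
# explicit, hand-written ethical rules before the agent may act.
# All names and rules here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool
    disobeys_human: bool

# Explicit rules are the hallmark of the top-down approach.
RULES = [
    ("do not harm a human", lambda a: not a.harms_human),
    ("obey human instructions", lambda a: not a.disobeys_human),
]

def permitted(action: Action) -> bool:
    """Allow an action only if it violates none of the rules."""
    for label, rule in RULES:
        if not rule(action):
            print(f"Blocked '{action.name}': violates '{label}'")
            return False
    return True

print(permitted(Action("fetch medication", False, False)))  # True
print(permitted(Action("restrain patient", True, False)))   # False
```

A bottom-up agent would instead acquire such judgments through learning or evolution rather than hand-written rules, and the hybrid approach layers explicit constraints over learned dispositions.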

Wallach discussed the role of ethical theory in defining the control architecture of robots, first noting the shortcomings of Isaac Asimov's three laws of robotics, quipping that they're a nice literary device. "Humans are a biochemical, instinctual, emotional platform, while computers are rational from the get-go," he said.

Robots that take care of the elderly, for instance, will need to apply ethics and react to, say, the terror of a patient, which may be caused by the robot itself.

Driverless cars sound like a good idea, and many people say they will solve problems like traffic congestion and the death rate on the road. But although traffic deaths may decrease substantially, accidents will still occur; corporations, therefore, will simply not build the cars because of the liability issue. They need insurance.

Wallach continued his 1.5-hour address by covering human enhancement technologies, saying that it's hard to separate the promise from the perils, and raising the societal issues: "Is this an evolution or a devolution...do we really need to improve who we are?"

He followed that with a risk assessment, saying that the tools we have are very weak. As for the public policy challenge, Wallach said we are addressing these challenges piecemeal, and that may be OK, but the problem is that nobody has asked if it really is OK.

The presentation concluded with a look at research ethics. He asked whether we should lower or raise the barriers to using human subjects and whether we should allow enhancement research. "Our challenge is to find the middle way that works for all humanity," he said.

His final point: the more autonomy we put into the machine, the more responsibility is put on the human. So we are moving in a circle that is self-defeating.
