
Singularity Summit 2007: Machine morality

Written by Dan Farber

Along with artificial intelligence comes the notion of machine morality: the attempt to imbue intelligent machines with the best of human values. Machine morality--also known as friendly AI, artificial morality, roboethics and computational ethics--is a new field without much ground cultivated, according to Wendell Wallach, a lecturer at Yale's Interdisciplinary Center for Bioethics, speaking at the Singularity Summit 2007.

The new field is about implementing moral decision-making faculties in artificial agents. Wallach posed some of the core questions involved in the field, such as:

  • Do we need artificial moral agents, and if so, when and for what?
  • Do we want computers to be ethical?
  • Whose morality and what morality?
  • How can we make ethics computable?

Any approach to creating machines with moral intelligence requires knowledge of the effects of actions in the world, the ability to estimate the sufficiency of initial information, and psychological awareness. These are the kinds of squishy capabilities that humans are good at, Wallach said. Human beings are a biochemical, instinctual, emotional platform; computers are a logical platform.

Wallach suggested that a "calculated morality" could provide computers with advantages: a machine could look at more options or branches than the human brain can, and it may select better responses. Machines might also have the advantage of an absence of base emotions, like greed, though such emotions could be programmed into systems.

Wallach also asked, "Is the absence of a nervous system subject to emotional hijacking a moral advantage?" He suggested that our unconscious emotional responses may have a lot to do with human reason. Emotions, sociability, embodiment in the world, empathy, consciousness and theory of mind are all part of the unique human experience that informs moral decisions, Wallach said.

Programming those human facets into software will be extremely challenging, as will building trust in machines. Whether machines can have the full array of human faculties is an open question.

Nearer term, Wallach made a prediction:

"We are just a few years away from a catastrophic disaster from an autonomous computer system making a decision." It will elicit a response similar to 9/11, he added. "We should not underestimate the political consequence of fear [of AI systems]."

This kind of event wouldn't likely stop scientific research, but it could slow it down, Wallach said.

"We need a mechanism for evaluating when thresholds that hold dangers have or are about to be crossed, and helping public policy leaders and the public at large to discrimatine real challenges from highly speculative challenges," Wallach said.

He also advised those seeking funding for AI research not to make big promises. "If you over-promise, you'll also likely feed the fears."
