
Intelligent machines threaten humankind

Dystopia or utopia: There may be a calamitous menace hidden behind the glorious possibilities of artificial intelligence
Written by Will Knight, Contributor

Science fiction has long portrayed machines capable of thinking and acting for themselves with a mixture of anticipation and dread, but what was once the realm of fiction has now become the subject of serious debate among researchers and writers.

Stanley Kubrick's groundbreaking science fiction film 2001: A Space Odyssey shows HAL, the computer aboard a mission to Jupiter, deciding of its own accord to do away with its human crew. Sci-fi blockbusters such as The Terminator and The Matrix have continued the catastrophic theme, portraying the dawn of artificial intelligence as a disaster for humankind.

Science fiction writer Isaac Asimov anticipated a potential menace. He speculated that humans would have to give intelligent machines fundamental rules, his Three Laws of Robotics, in order to protect humanity:


  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Later Asimov added a further rule, known as the Zeroth Law, to combat a more sinister prospect: "A robot may not injure humanity, or, through inaction, allow humanity to come to harm."

Will machines ever develop intelligence on a level that could challenge humans? While this remains a contentious question, one thing is certain: computing power is set to increase dramatically in the coming decades. Moore's Law, the observation that processing power doubles roughly every 18 months, is expected to hold for at least the next ten years, and quantum computers, though poorly understood at present, promise to give AI new tools that may bypass some of the restrictions of conventional computing.
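To put that figure in perspective, here is a quick back-of-the-envelope calculation, an illustrative Python sketch rather than anything from the article itself:

    # Illustrative sketch (not from the article): what sustained 18-month
    # doubling of processing power implies over the next ten years.
    months_per_doubling = 18
    years_ahead = 10

    doublings = (years_ahead * 12) / months_per_doubling  # about 6.7 doublings
    growth = 2 ** doublings                                # roughly 100-fold

    print(f"{doublings:.1f} doublings -> about {growth:.0f}x today's power")

Sustained over a decade, in other words, the 18-month doubling rate implies machines roughly a hundred times more powerful than today's.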

While public debate currently focuses on cloning and genetic engineering, few people have seriously considered the prospect of annihilation by a race of robots.

That is, until an article published in Wired magazine early last year, titled Why the Future Doesn't Need Us and written by Sun Microsystems cofounder and esteemed technologist Bill Joy, introduced a wider audience to the possibility that recent technological advances could threaten the very existence of humankind. Joy discussed the potential catastrophes that could result from tinkering with genetics, nanotechnology and artificially intelligent machines.

Most disturbingly, Joy cites not technophobes or paranoid theorists but some of the leading lights of AI research and academia, who have voiced concern that machines might one day confront humans.

Steve Grand, artificial intelligence researcher and author of Creation: Life and How to Make It, says it would be impossible for humans to be totally sure that autonomous, intelligent machines would pose no threat. Perhaps more worryingly, he claims it would be futile to try to build Asimov's laws into a robot.

Artificial intelligence researchers have long since abandoned hope of applying simplistic laws to protect humans from robots. Grand says that for real intelligence to develop, machines must have a degree of independence and be able to weigh up contradictions for themselves, breaking one rule to preserve another, something Asimov's laws do not allow. He believes that conventional evolutionary pressures will determine whether machines become a threat to humans: they will only become dangerous if they are competing with us for survival, for resources for example, and can match humanity's intellectual and evolutionary prowess.

    "Whether they are a threat rests on whether they are going to be smarter than us," he says. "The way I see it, we're just adding a couple more species."

In his book The End of the World: The Science and Ethics of Human Extinction, John Leslie, professor of philosophy at the University of Guelph in Canada, predicts ways in which intelligent machines might cause the extinction of mankind. Super-clever machines, he says, might reason that they are superior to humans. They might eventually be put in charge of managing resources and decide that the most efficient course of action is to remove humans altogether. He also believes it would be possible for machines to override in-built safeguards.

    "If you have a very intelligent system it could unprogram itself," he says. "We have to be careful about getting into a situation where they take over against our will or with our blessing."

Even if a distant danger exists, some experts say it is much too soon to start panicking. Rodney Brooks, director of the Artificial Intelligence Laboratory at the Massachusetts Institute of Technology (MIT), says we can't hope to accurately imagine how things may pan out just yet. "I think that this is a little like worrying about noise abatement issues at airports back during mankind's first attempts at a hot air balloon," he says.

Ray Kurzweil, author of The Age of Spiritual Machines: When Computers Exceed Human Intelligence, also believes it is possible to overreact to visions of a robotic Armageddon, and says the potential benefits of artificial intelligence make it impossible to turn our backs on the technology.

    "People often go through three stages in examining the impact of future technology," says Kurzweil in an article responding to Bill Joy's polemic, titled Promise and Peril: Deeply Intertwined Poles of Twenty First Century Technology. "Awe and wonderment at its potential to overcome age old problems, then a sense of dread at a new set of grave dangers that accompany these new technologies. Followed, finally and hopefully, by the realisation that the only viable and responsible path is to set a careful course that can realise the promise while managing the peril."

Surprisingly, some experts would even welcome the possibility of machines taking over from humans. Professor Hans Moravec is well known for his belief that machines will inherit the earth -- he even welcomes the prospect. Moravec said in a recent interview that most significant human evolution has taken place at the cultural level, and that replacing biological humans with machines capable of far greater learning and cultural development is therefore the next logical step in evolution.

So what is the best course of action? Marvin Minsky, an artificial intelligence pioneer, founded the AI Lab at MIT and sits on the board of advisors of the Foresight Institute, a body created to investigate the dangers of emerging technologies. Minsky agrees that extinction at the mechanical hands of a robot race may be just around the corner, but says that developments in the field of artificial intelligence call for considered debate rather than panic. He says he is encouraging artificial intelligence experts to participate in the work of the Institute.

    "Our possible futures include glorious prospects and dreadful disasters," says Minsky in an email. "Some of these are imminent, and others, of course, lie much further off."

Minsky notes that there are more immediate threats to think about and combat, such as global warming, ocean pollution, war and overpopulation. However, he says, the potential dangers of artificial intelligence should not be completely ignored.

    "In a nutshell, I argue that humans today do not appear to be competent to solve many problems that we're starting to face. So, one solution is to make ourselves smarter -- perhaps by changing into machines. And of course there are dangers in doing this, just as there are in most other fields -- but these must be weighed against the dangers of not doing anything at all."

Minsky adds a warning for those who question whether machines will ever become intelligent enough to better us. "As for those who have the hubris to say that we'll 'never' understand intelligence well enough to create or improve it, well, most everyone said the same things about 'life' -- until only a half dozen decades ago."

In its Artificial Intelligence Special, ZDNet charts the road to sentience, examines the technologies that will take us from sci-fi to sci-fact, and asks whether machines should have rights.

