
Artificial Intelligence: Working backwards from HAL

Part 1: In the first part of a three-part special report looking at the past, present and future of AI, we examine the origins of machine intelligence and neural networks
Written by Nick Hampshire, Contributor

The phrase 'artificial intelligence' was first coined by John McCarthy at a conference at Dartmouth College, New Hampshire, in 1956, but the concept of artificial, or machine, intelligence is in fact as old as the computer. The computer was, after all, initially developed during the Second World War to break codes that were too hard for humans and so required high-speed 'machine intelligence'.

It was one of the most celebrated of the Second World War code breakers, Alan Turing, a man whom many would describe as the inventor of the first modern computer, who proposed in 1950 what has become known as the Turing Test. This states, simply, that we can consider a machine to be intelligent if its responses in some sort of conversation are indistinguishable from those of a human. It is this proposal that many see not only as the definitive test of machine intelligence but also as the point at which today's quest to develop artificial intelligence was born.

Three Laws of Robotics
In the early days of computing there had already been a great deal of optimism that machines could be created that would behave intelligently. In 1942 Isaac Asimov put forward his three laws of robotics in the short story Runaround, which was later republished as part of the short story collection, I, Robot. Not long after the book was published, one of the fathers of computing, John von Neumann, said: "You insist that there is something that a machine cannot do. If you tell me precisely what it is that a machine cannot do, then I can always make a machine that can do just that."

This optimism was fuelled over the next few decades by the constantly increasing power and speed of computer hardware and by the success in applying computers to an ever-wider range of human endeavours. Many believed that as the computational power of machines increased they would soon be able to equal the intellectual power of a human being.

It is now over fifty years since the birth of artificial intelligence research, and computing power is both fast and cheap, yet intelligent machines seem to be as far in the future as they were half a century ago. According to those early researchers, we should by now be surrounded by intelligent machines. Is this the case, or are we still waiting?

A long road to intelligence
Work on machine intelligence started with chess, and Maniac 1, the first chess program to beat a human player, was demonstrated in 1956 by Stanislaw Ulam at Los Alamos National Laboratory in the US. This was an early success in the quest for machine intelligence that started a long sequence of work on chess-playing computers by many researchers around the world.

In 1966 Joseph Weizenbaum at MIT developed Eliza, the first computer program capable of engaging in a conversation with a human. This clever program was able to hold a seemingly intelligent conversation by matching patterns in what the user typed and reflecting them back, and many felt that given enough computing power and a large enough vocabulary such algorithms would make it possible for a machine to pass Turing's test for intelligence.
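
The sketch below is a minimal, modern illustration of Eliza's pattern-matching style: find a keyword template in the input, swap the pronouns, and echo the phrase back as a question. The two rules and the reflection table here are invented for the example and are far simpler than Eliza's actual script.

```python
import re

# Eliza-style responder: match a pattern, swap pronouns, reflect it back.
# These rules are illustrative only; Eliza used a large script of ranked patterns.

REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you"}

def reflect(phrase):
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(word.lower(), word)
                    for word in phrase.split())

RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {}?"),
    (re.compile(r"i feel (.*)", re.I), "How long have you felt {}?"),
]

def respond(text):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please tell me more."  # fallback when no pattern matches

print(respond("I am worried about my exams"))
# -> Why do you say you are worried about your exams?
```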

Shakey, the first robot capable of locomotion, perception and problem solving, was built at Stanford Research Institute, California, in 1969. This was followed in 1979 by the Stanford Cart, a computer-controlled autonomous robot designed by Hans Moravec of Stanford University that was capable of successfully navigating a room filled with furniture without bumping into any of it.

The success of these and other similar experiments in artificial intelligence gave researchers during the 1960s and 1970s the confidence that, given enough computing power and sufficient research funds, they would quite soon be able to develop an algorithm for intelligence. This was the era in which there was also much speculation about the impact of intelligent computers, computers like HAL in 2001: A Space Odyssey.

Fifth Generation project
In response to this high level of optimism, Japan's Ministry of International Trade and Industry decided to push for a great leap forward, announcing in 1982 a project to develop massively parallel computers that, they believed, would make machine intelligence possible. This became known as the Fifth Generation project.

The American government and business community quickly responded by setting up the Microelectronics and Computer Technology Corporation (MCC) and by boosting funding for AI research at the Defense Advanced Research Projects Agency (DARPA). This competitive atmosphere meant that over the next decade large amounts of money were poured into AI research in both the US and Japan.

This quickly led to a flood of new ideas. Expert systems evolved into knowledge-based systems, as logic based on Bayesian probabilities offered new ways to classify, store and use human knowledge. Early work on perceptrons developed into 'neural networks', which held the promise of modelling biological neural structures that could not only function as pattern classifiers but could also learn. Search strategies were improved, the concept of intelligent agents was developed, and new learning strategies such as genetic algorithms were devised. There were also considerable advances in areas such as machine vision, natural language processing and voice recognition.
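
To give a flavour of what those early networks did, the sketch below trains a single perceptron, the simplest ancestor of today's neural networks, using its classic learning rule: nudge the weights towards any example the unit misclassifies. The AND-function training data and learning rate are illustrative choices, not taken from any system mentioned here.

```python
# Minimal perceptron sketch: learns a linear decision rule by
# shifting its weights towards each misclassified example.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (inputs, target) pairs with targets 0 or 1."""
    n = len(samples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            prediction = 1 if activation > 0 else 0
            error = target - prediction  # -1, 0 or +1
            # Perceptron learning rule: move the weights towards the target.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Illustrative training data: the logical AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print(w, b)  # a separating line, e.g. weights ~[0.2, 0.1], bias ~-0.2
```

Because a lone perceptron can only draw a straight dividing line, it learns AND but famously cannot learn XOR; overcoming that limitation by stacking layers is what turned perceptrons into the neural networks described above.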

The AI bubble bursts
The ultimate goal of all this research effort and expenditure, the creation of an intelligent machine, eluded the researchers, and by the early 1990s it was becoming clear that the hoped-for great leap forward in AI was not going to happen as quickly as people had thought ten years earlier. Government and corporate enthusiasm disappeared, funds started to dry up, DARPA withdrew most of its support, and research projects were shelved. The early 1990s were, for AI, the equivalent of the dot-com bubble bursting a decade later.

Researchers' failure to develop a general-purpose intelligent system was largely blamed on the fact that they had put most of their faith in the concept that the key to intelligence lay in symbolic reasoning. This is a mathematical approach in which ideas and concepts are represented by symbols, such as words or sentences, which can then be processed according to the rules of logic.
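
A toy example makes the symbolic style concrete: facts and rules are plain symbols, and 'thinking' is the mechanical application of logical rules to them. The sketch below is illustrative only and does not correspond to any particular system of the period.

```python
# Toy forward-chaining inference in the symbolic-reasoning style:
# facts and rules are just symbols manipulated by the rules of logic.

facts = {"socrates_is_a_man"}
rules = [({"socrates_is_a_man"}, "socrates_is_mortal")]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)  # modus ponens: premises imply conclusion
            changed = True

print(facts)  # {'socrates_is_a_man', 'socrates_is_mortal'}
```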

This reflected the long-standing idea amongst AI researchers that there is a fundamental set of algorithms that, if supplied with enough information, will eventually produce an intelligent system. Once discovered, computer scientists believed, such general algorithms would be applicable to all areas of AI research, from natural-language processing to machine vision.

Loss of funding
This lack of success in finding such general algorithms, coupled with the loss of a very large proportion of AI research funding, led most of the researchers who remained to concentrate on niche areas where success, and therefore a return on research investment, was most likely. Research into AI largely disappeared, replaced by a number of more focussed disciplines that shared one thing in common: the need for a certain amount of machine intelligence or learning capability.

Many of the early projects continued. The development of game-playing programs reached a high point in 1997, when an IBM computer system called Deep Blue defeated chess grandmaster Garry Kasparov. Eliza's conversational approach was developed and refined, and in 1995 Richard Wallace created Alice, a program that is now the world's most successful chatbot. Indeed, such AI programs have reached a level of sophistication that allows many companies, including Coca-Cola and Burger King, to use them routinely in interactive Web sites and automated telephone services. Meanwhile, mobile robots directly descended from Shakey have successfully explored the surface of Mars.
