AI: The story so far
The term artificial intelligence, or AI, was coined at the ground-breaking Dartmouth conference of 1956. But man's interest in the notion that a machine could be given the ability to think can be traced back to the myths and stories of the ancient world.
The Greek myth of Pygmalion, who created a living statue, and the legend of the Golem, a clay statue brought to life by a Jewish Rabbi, bear testimony to man's curious obsession with playing creator.
The philosophers of Ancient Greece also influenced many aspects of modern technology -- the Boolean logic upon which the circuitry of today's computers is based has its roots in the formal logic first systematised by Aristotle and the Greeks.
In the centuries that have followed, myriad philosophers pondered some of the same questions addressed by present-day AI researchers. For example, the great French thinker Descartes -- whose statement, "I think, therefore I am", made thought the very proof of existence -- believed that animals were no more than automatons, or self-moving machines. But he was reportedly foxed by Queen Christina of Sweden, who insisted on seeing proof that a clock could reproduce.
Later attempts at mechanical intelligence included The Turk, an eighteenth-century chess-playing machine that allegedly thought for itself (but turned out to conceal a small man hidden in its cabinet).
More serious research into artificial intelligence began at the start of the twentieth century, although at its end we are still a long way from creating truly intelligent machines. To date, researchers have only managed to achieve "Weak AI" -- machines that reproduce isolated aspects of human behaviour. "Strong AI" -- making machines genuinely think -- is yet to come.
In 1910 a pair of philosopher-mathematicians, Bertrand Russell and Alfred North Whitehead, published the first volume of Principia Mathematica, which laid down many important rules of mathematical logic. Their programme was undermined by Kurt Goedel, who in 1931 proved that any consistent system of mathematical rules rich enough to describe arithmetic must contain true statements that cannot be proved within it. Even so, their formal approach was taken up by Alan Turing, the pioneer of computer logic.
Turing's work on logical theory did a great deal to make the digital computer, and thus AI, possible. He argued that any systematic mathematical procedure could be expressed as an algorithm, and showed in his 1936 academic paper On Computable Numbers that it was possible to design a machine (later called a Turing Machine) to carry out a particular algorithm. It followed that every possible algorithm could be carried out by some particular Turing Machine.
This led to the concept of a Universal Turing Machine. This hypothetical device could imitate any particular Turing Machine, given a description of that machine's behaviour as part of its input. The machine would read a symbol of input, refer to a rule table that controlled its behaviour, and by considering its current state and the symbol just read it would decide what to write, which way to move, and what internal state to adopt in this particular time step.
For this reason, the Universal Turing Machine could solve any problem that could be expressed as an algorithm. If one accepted that at any one moment the human brain was in one of a finite number of states, then the Universal Turing Machine was, in principle, capable of duplicating its function.
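The rule-table mechanism described above is simple enough to sketch in a few lines of code. The following is a minimal illustration, not anything Turing wrote: the machine and its rules (a bit-inverter) are hypothetical examples, and the rule table maps each (state, symbol read) pair to a symbol to write, a head movement, and a next state.

```python
def run_turing_machine(rules, tape, state="start", max_steps=1000):
    """Run a one-tape Turing machine until it halts (or runs out of steps)."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")  # "_" stands for a blank cell
        # Consult the rule table: what to write, where to move, what state next.
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    positions = range(min(cells), max(cells) + 1)
    return "".join(cells.get(i, "_") for i in positions).strip("_")

# Hypothetical example rule table: invert a string of binary digits,
# moving right and halting at the first blank cell.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(rules, "1011"))  # prints "0100"
```

A Universal Turing Machine takes this one step further: instead of hard-wiring the rule table, it reads an encoding of the table from its own tape, which is why a single such machine can imitate any other.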
Of course, the Universal Turing Machine was purely hypothetical. As well as requiring an exceptional amount of storage to contain all the necessary rules, it also depended on the belief that the human mind could be defined in a finite number of states and that a rule table could be drawn up to emulate it. Many scientists rubbished this concept.
Alan Turing's other contribution to the field of artificial intelligence was the Turing Test of 1950, described by the Oxford Companion to the Mind as "the best test we have for confirming the presence of intelligence in a machine".
The test basically consisted of a person quizzing both another human and a computer without knowing which was which. If the questioner was unable to distinguish between the two, then the computer deserved to be deemed "intelligent".
The Turing Test is still seen as a benchmark in AI, and no machine has yet passed it.
But it was the invention of the electronic computer in 1941 that made it possible to experiment with AI, rather than simply pondering the issues.
Although early computers were massive beasts occupying whole rooms, they could be programmed. Early coders, though, had to manually configure thousands of wires to create a program -- a problem fixed by the stored program computer in 1949.
AI research soon followed.
Take me to Pt II/ What computers became capable of.
In ZDNet's Artificial Intelligence Special, ZDNet charts the road to sentience, examines the technologies that will take us from sci-fi to sci-fact, and asks if machines should have rights.