
AI: The story so far Pt II

AI research: What computers became capable of
Written by Graeme Wearden, Contributor

Written in 1955, The Logic Theorist is generally considered to be the world's first AI program. Working from lists of information, it attempted to solve mathematical problems by representing each one as a "tree", choosing the branch it decided was most likely to lead to the correct answer.

In the following year, the Dartmouth Summer Research Project was held. Organised by John McCarthy -- regarded as the father of AI and appointed professor emeritus at Stanford University in January 2001 -- it was a gathering of those individuals who were working towards the goal of creating an intelligent machine. This conference did a great deal to establish the field of AI, by bringing researchers together and even by adopting the term "artificial intelligence" to describe their work.

In 1958, McCarthy created the LISP (LISt Processing) programming language. This rapidly became the principal language for AI programming, although newer languages running on high-powered computers have since supplanted it. Unlike previous languages, LISP was designed to make it easy for a program to treat its own code as data. By treating everything, including its own instructions, as lists, an AI program could modify itself in response to the results it generated. The significance -- a program can effectively think and learn by rewriting its own rules.
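
To illustrate the code-as-data idea, here is a minimal sketch in Python rather than LISP (the rule names, weights and the learn() helper are all invented for this example): the program's "rules" are held as ordinary list data, so the program can rewrite them in response to its own results.

    # A minimal sketch of "code as data": rules are stored as plain lists,
    # so the program can inspect and rewrite them at runtime.

    # Each rule is a list: [name, weight], standing in for a piece of program logic.
    rules = [["prefer-left-branch", 0.5], ["prefer-right-branch", 0.5]]

    def choose_branch(rules):
        # Pick the rule with the highest weight, standing in for "running" the rules.
        return max(rules, key=lambda rule: rule[1])[0]

    def learn(rules, chosen, success):
        # Rewrite the rule list itself in response to a result the program produced.
        for rule in rules:
            if rule[0] == chosen:
                rule[1] += 0.1 if success else -0.1
        return rules

    chosen = choose_branch(rules)
    rules = learn(rules, chosen, success=True)
    print(chosen, rules)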

AI developed as more and better algorithms were devised to solve problems, and as government and industry invested in the sector. Individual programs were developed to carry out certain tasks related to intelligence -- such as Student, which could solve algebra story problems, and SIR, which could interpret English sentences.

Another variety of computer system, dubbed the "expert system", took off in the 1970s. It was designed to simulate the problem-solving behaviour of a human and was able, in theory, to predict the probability of a solution under fixed conditions.

An expert system consists of two parts: a knowledge base and a control system. The knowledge base itself is made up of three components. The first is a large body of general knowledge containing rules for problem solving. The second is an area of case-specific or heuristic knowledge, gathered through experience, which is more subjective and individual. The third is a knowledge base editor, used for debugging and for adding new information.

Users interact with the expert system via the control system, which allows problems to be posed to the knowledge base. The control system provides an interface that makes access easier and explains the system's reasoning.

By separating the act of questioning from the knowledge base, expert systems moved AI towards a more "natural" representation of problem solving. Their advantage is that, given all the relevant data, they can be built to perform a specific task as well as a human.
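
To make the split between knowledge base and control system concrete, here is a minimal sketch in Python (not a reconstruction of any historical system; the rules, facts and the infer() function are invented for illustration). The knowledge base is plain data, and the control system chains the rules over it while recording the reasoning it can later explain.

    # Minimal expert-system sketch: the knowledge base is plain data,
    # and the control system applies rules and can explain its reasoning.

    # Knowledge base: if all the "if" facts hold, the "then" fact is inferred.
    rules = [
        {"if": ["fever", "infection"], "then": "bacterial_infection_suspected"},
        {"if": ["bacterial_infection_suspected"], "then": "recommend_antibiotics"},
    ]

    def infer(facts, rules):
        facts = set(facts)
        trace = []                      # keeps the chain of reasoning for explanation
        changed = True
        while changed:                  # forward-chain until no new facts are added
            changed = False
            for rule in rules:
                if set(rule["if"]) <= facts and rule["then"] not in facts:
                    facts.add(rule["then"])
                    trace.append(f'{rule["if"]} -> {rule["then"]}')
                    changed = True
        return facts, trace

    facts, trace = infer(["fever", "infection"], rules)
    print(facts)
    print(trace)   # the "explanation" the control system can give the user

Because the rules live in the knowledge base rather than in the control code, new expertise can be added without rewriting the program -- one reason the approach appealed to researchers.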

The first expert system that tried to mimic human questioning was MYCIN. It diagnosed bacterial blood infections and suggested treatments. It also showed both the strengths and weaknesses of expert systems.

Although MYCIN could diagnose certain illnesses better than practising doctors, its knowledge only extended to bacteria, symptoms and treatments. It did not consider concepts such as death, recovery and time-related events, which lowered its value as a diagnostic tool. This meant that the system was still reliant on the common sense of the user.

In an attempt to improve expert systems, programmers introduced Fuzzy Logic. This was a variation of conventional (Boolean) logic designed to handle the concept of "partial truth" -- those values between "completely true" and "completely false". The addition of Fuzzy Logic made expert systems more adept at tasks such as pattern recognition and forecasting the behaviour of the stock market.
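
As a rough sketch of partial truth (the warm() membership function and its thresholds are invented for this example): truth values in fuzzy logic range from 0.0 to 1.0, and the fuzzy AND and OR of two values are commonly taken as their minimum and maximum.

    # Fuzzy-logic sketch: truth values lie between 0.0 (completely false)
    # and 1.0 (completely true), and AND/OR become min/max.

    def warm(temperature_c):
        # Degree to which a temperature counts as "warm" (invented membership function).
        return max(0.0, min(1.0, (temperature_c - 10) / 20))

    def fuzzy_and(a, b):
        return min(a, b)

    def fuzzy_or(a, b):
        return max(a, b)

    t = 22
    print(warm(t))                       # 0.6: it is partially true that 22C is warm
    print(fuzzy_and(warm(t), 0.9))       # 0.6
    print(fuzzy_or(warm(t), 0.9))        # 0.9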

However, AI systems were still a long way from achieving the success that researchers had expected. For example, when Alan Turing suggested the Turing Test in 1950, he expected that within 50 years an AI machine would be powerful enough to pass itself off as a human.

The problem, according to scientists, lay in the hardware design. Computers still followed the sequential design of John von Neumann, a key figure in the development of both computer science and artificial life who in the 1940s had developed the concept of the processor.

Although processors were capable of carrying out millions of instructions per second, there was still a bottleneck of information waiting to be manipulated. This acted as a brake on AI development, because the more knowledge an AI system was given, the slower it processed data. This was in marked contrast to humans, for whom having more knowledge generally enabled faster thinking.

The solution was to make computers process in parallel. The idea had been suggested in the 1960s by John Holland, probably the first person in the world to receive a doctorate in computer science, but it was taken up by Danny Hillis, who designed and built the "Connection Machine".

The first version of this computer, the CM-1, debuted with 16,000 processors working in parallel, a feat never achieved before. It was followed in 1987 by the CM-2, which boasted 65,536 processors. Like a brain, in which neurons constantly interact to produce thought (or so we are told), the processors of the Connection Machine each crunched numbers and "talked" to each other.

The Connection Machine was an early example of what became known as Artificial Neural Networks (ANNs). Once suitably powerful algorithms were developed in the mid-1980s, ANNs were widely used in AI. They have the capacity to learn, to memorise and to create relationships between data -- just like the natural neural networks they imitate.

Other benefits of ANNs include the ability to cope with missing and "noisy" data (where the sample contains a large amount of non-relevant information), to work with large numbers of variables, and to handle the non-linearities of the real world.

ANNs are commonly used for image, speech and character recognition.
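
As a rough sketch of how such a network learns (a single artificial neuron trained on the logical OR function; the data, weights and learning rate are invented, and real ANNs connect many such units), the program adjusts its weights whenever its prediction is wrong:

    # Sketch of a single artificial neuron (perceptron) learning the OR function.
    # Real neural networks connect many such units; this only shows the learning loop.

    samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    weights = [0.0, 0.0]
    bias = 0.0
    learning_rate = 0.1

    def predict(x):
        total = bias + sum(w * xi for w, xi in zip(weights, x))
        return 1 if total > 0 else 0

    for _ in range(20):                      # repeatedly adjust weights from errors
        for x, target in samples:
            error = target - predict(x)
            weights = [w + learning_rate * error * xi for w, xi in zip(weights, x)]
            bias += learning_rate * error

    print([predict(x) for x, _ in samples])  # [0, 1, 1, 1] once training converges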

Throughout the 1980s AI attracted attention and funding from the private sector, and by 1987, world-wide revenue for AI, excluding robotics, totalled £204m.

However, the dreams of those early pioneers have not yet been realised. There have been individual successes: a chess-playing computer that defeated a grandmaster in 1986; IBM's Deep Blue, which won a game against world chess champion Garry Kasparov in 1996; and Wabot-2, which in 1984 could play the organ and read sheet music.

But, 50 years after the test was devised, AI researchers have yet to design a machine capable of passing it, despite great advances in processor speed and masses of funding. Strong AI -- the dream of creating a machine with human-level intelligence -- is still only a dream.

Take me back to Pt I of AI: The story so far.

In the future your PDA will think for you. You will be its Personal Person Assistant. Find out how the future could be in ZDNet's Artificial Intelligence Special.
