
Artificial Intelligence: DNA sequencers to dancing robots

Part 2: In the second part of a three-part special report looking at the past, present and future of AI, we take a look at contemporary applications of the technology
Written by Nick Hampshire, Contributor

During the late 1980s some researchers had started to look again at a concept that had first been proposed back in 1943 by Warren McCulloch and Walter Pitts of the University of Illinois. The concept involved building electronic analogues of brain cells, which Frank Rosenblatt had connected into simple networks known as perceptrons in the late 1950s; their capabilities and limitations were later analysed by Marvin Minsky and Seymour Papert.

These simple networks could be trained to recognise patterns of input data and had found uses in image recognition, but had not been developed further because of difficulties in expanding network size. The researchers re-examining this technology discovered that the solution was to construct a multi-layer perceptron, which we now know as a neural network. Neural networks have been one of the great success stories of AI research and, in software or hardware form, are now found in a wide variety of applications where the system's ability to learn is important.
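
To make the idea concrete, here is a minimal sketch of a single-layer perceptron of the kind described above, written in Python; the training data and learning rate are invented for illustration. It learns a weighted threshold decision from labelled examples.

    def train_perceptron(samples, labels, epochs=20, lr=0.1):
        """Adjust a set of weights until the thresholded sum matches the labels."""
        weights = [0.0] * len(samples[0])
        bias = 0.0
        for _ in range(epochs):
            for x, target in zip(samples, labels):
                activation = sum(w * xi for w, xi in zip(weights, x)) + bias
                output = 1 if activation > 0 else 0
                error = target - output
                # Nudge the weights towards the correct answer
                weights = [w + lr * error * xi for w, xi in zip(weights, x)]
                bias += lr * error
        return weights, bias

    # Learn a simple linearly separable pattern (logical OR)
    samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
    labels = [0, 1, 1, 1]
    weights, bias = train_perceptron(samples, labels)
    print([1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
           for x in samples])   # [0, 1, 1, 1]

A single layer like this can only separate patterns that are linearly separable, which is precisely why stacking layers into a multi-layer perceptron was such an important step.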

Dancing Honda
The remarkable ability of neural networks to learn complex tasks is best demonstrated by Honda's Asimo humanoid robot, which can not just walk but dance, and even ride a bicycle. These actions are only possible because the neural networks that are connected to the robot's motion and positional sensors, and which control its 'muscle' actuators, can be 'taught' to perform a particular activity.

The significance of this sort of robot motion control lies in the virtual impossibility of a programmer writing a detailed set of instructions for walking or riding a bicycle that could then be built into a control program. The learning ability of the neural network removes the need to define these instructions precisely: instead the robot is 'taught' to perform in a particular way and creates its own instructions within its neural net. This makes neural nets particularly well suited to systems that have to adapt to changing environments.

However, despite the impressive performance of the neural networks controlling Asimo's movement, the most significant applications for neural networks are currently found in everyday objects, such as a new fire detector recently launched by Siemens. This uses a number of different sensors and a neural network to determine whether the combination of sensor readings comes from a fire or is just part of the normal room environment. Over fifty percent of fire call-outs are false alarms, and of these well over half are due to fire detectors being triggered by everyday activities rather than actual fires.

Neural networks
The neural network in the Siemens detector allows the detector to be trained to recognise the normal pattern of temperature, smoke colour and particle size in the room in which it is located, rather than having these parameters pre-set in the factory. A detector placed in a workshop would learn to ignore dust and machinery exhaust, while one placed in a bathroom would ignore steam; in both locations, however, it would react to an above-normal rise in temperature coupled with dark smoke.
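
The 'train it in the room it will live in' idea can be sketched very simply. The code below is a hypothetical stand-in, not the Siemens design: instead of a neural network it learns a statistical baseline of normal readings and flags combinations that deviate sharply, but the principle of learning the local environment is the same.

    import statistics

    class AdaptiveDetector:
        def __init__(self):
            self.baseline = []   # (temperature, smoke_darkness, particle_size) samples

        def observe_normal(self, reading):
            """Call during the training period with readings from the room."""
            self.baseline.append(reading)

        def is_fire(self, reading, threshold=4.0):
            """Flag a reading that sits far outside the learned normal range."""
            score = 0.0
            for i, value in enumerate(reading):
                history = [r[i] for r in self.baseline]
                mean = statistics.fmean(history)
                spread = statistics.pstdev(history) or 1.0
                score += abs(value - mean) / spread
            return score > threshold

    detector = AdaptiveDetector()
    for reading in [(21.0, 0.1, 0.30), (22.5, 0.2, 0.40), (21.8, 0.1, 0.35)]:
        detector.observe_normal(reading)            # e.g. a steamy bathroom
    print(detector.is_fire((23.0, 0.15, 0.38)))     # similar to normal -> False
    print(detector.is_fire((65.0, 0.90, 0.90)))     # hot and dark smoke -> True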

This is just one of a vast range of current applications for neural-network based systems. If you use a digital camera, the odds are that the auto-focussing system is based upon a neural network. In the military world, neural nets are an essential component of virtually every smart-weapons system and every countermeasures system. They are used in DNA sequencing equipment and in voice recognition and response systems. And in these days of heightened security, neural nets are a key part of virtually every biometric analysis system. Indeed, neural networks have become a key component in the design kit of every systems engineer.

It is all a question of probability
One thing that became clear to researchers during the 1980s was that knowledge systems using conventional logic would never produce the kind of intelligent response that was being sought. Real-world decisions are based not upon certainty but upon probability.

Dealing with uncertainty involved the use of statistical reasoning techniques in which probabilities were assigned to every option. The most popular of these mathematical techniques is Bayesian statistics, which, when combined with graph theory, provides a powerful means of modelling probabilities that are continuously updated as new information arrives.
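
At the heart of these techniques is Bayes' rule, which revises a prior belief each time new evidence arrives. A minimal worked example, with probabilities invented purely for illustration:

    def bayes_update(prior, likelihood_if_true, likelihood_if_false):
        """Return P(hypothesis | evidence) from the prior and the two likelihoods."""
        numerator = likelihood_if_true * prior
        denominator = numerator + likelihood_if_false * (1.0 - prior)
        return numerator / denominator

    # Hypothesis: "the machine is faulty". Start from a 1 percent prior belief.
    belief = 0.01
    for symptom in ("vibration", "overheating"):
        # Assume each symptom appears 90% of the time when faulty, 5% otherwise.
        belief = bayes_update(belief, 0.90, 0.05)
        print(f"after {symptom}: P(faulty) = {belief:.2f}")

Each piece of evidence raises the belief, and a Bayesian network simply organises many such updates across a graph of related variables.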

Bayesian networks can dynamically learn by constantly modifying modelled probabilities using a fixed set of rules, and the technique has proved very popular in a wide range of applications including diagnostic and decision-making systems, data mining, computer vision, bioinformatics and, of course, robotics. Bayesian networks are extensively used in the machine-learning capabilities of both Amazon and Google, and have proved an efficient way for a spam filter to learn the characteristics of spam by analysing previously accepted and rejected messages. However, not all Bayesian systems are appreciated: the much-despised Microsoft Office paper-clip assistant was also based upon a Bayesian inference system.
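
The spam-filtering application can be sketched with a naive Bayes flavour: count how often words appear in previously rejected and accepted messages, then score a new message by which set it resembles more. The tiny corpus below is invented, and real filters are considerably more refined.

    import math
    from collections import Counter

    def train(messages):
        counts = Counter()
        for text in messages:
            counts.update(set(text.lower().split()))
        return counts, len(messages)

    def log_prob(word_counts, total, word):
        # Laplace smoothing so unseen words never give a zero probability
        return math.log((word_counts[word] + 1) / (total + 2))

    def spam_score(message, spam_model, ham_model):
        spam_counts, spam_total = spam_model
        ham_counts, ham_total = ham_model
        score = 0.0
        for word in set(message.lower().split()):
            score += log_prob(spam_counts, spam_total, word)
            score -= log_prob(ham_counts, ham_total, word)
        return score   # positive means 'looks more like the rejected messages'

    spam_model = train(["win a free prize now", "free offer claim your prize"])
    ham_model = train(["meeting notes attached", "see the project notes"])
    print(spam_score("claim your free prize", spam_model, ham_model) > 0)   # True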

A good example of how the learning capability of Bayesian networks is being used comes from computer games, where a growing number of designers employ them to let the game learn dynamically from the human player and so better anticipate his or her moves. For example, the Drivatars in Forza Motorsport, developed by Microsoft for its Xbox racing game, include a sophisticated AI capable of probabilistic racing-line generation. This software not only learns from the player's driving techniques and reacts to them, but can also create the racing line for all the computer-generated vehicles by learning from the actions of a real human driver on a real course.

Intelligent agents
Intelligent agents are another form of artificial intelligence software that has found a wide range of applications. An intelligent agent is a goal-directed, autonomous, persistent and intelligent piece of code designed to perform a specific function. Agents can be used to monitor real-time events or search through databases, and when provided with communications capability, multiple agents can co-operate to solve many inherently complex problems.
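
In code, the essential shape of such an agent is a small observe-decide-act loop that runs until its goal is met. The skeleton below is generic and made up for illustration; real agents wrap far more sophisticated decision logic around the same loop.

    class MonitoringAgent:
        def __init__(self, goal_reached, observe, act):
            self.goal_reached = goal_reached   # state -> bool
            self.observe = observe             # () -> state
            self.act = act                     # state -> None

        def run(self, max_steps=100):
            for _ in range(max_steps):
                state = self.observe()
                if self.goal_reached(state):
                    return state
                self.act(state)
            return None

    # Toy usage: watch a price feed and stop once the price drops below a target.
    prices = iter([105, 102, 99, 97])
    agent = MonitoringAgent(
        goal_reached=lambda price: price < 100,
        observe=lambda: next(prices),
        act=lambda price: print(f"price {price}, still waiting"),
    )
    print("triggered at", agent.run())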

One area where intelligent agents are being employed in deadly earnest is the financial markets. Here algorithmic trading systems, as such agents are known, are routinely used to decide exactly when to buy or sell shares or commodities. These agents allow investment managers to buy or sell large holdings without moving the market; they do this by sharing the buying or selling task between a large number of agents, each of which has the power to monitor the market and decide exactly when to trade.

According to Richard Balarkas of Credit Suisse First Boston, "agents are very sophisticated [and] are doing what a trader would like to do". Whereas a human trader may only look at three or four variables before buying or selling, an agent may look at hundreds.

These agents not only relieve financial traders of a routine task, but are also capable of doing it much better. In 2001 IBM conducted a trial of trading agents which demonstrated that, when pitted against each other, the intelligent agents did better than their human counterparts, making on average seven percent more cash.
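
A toy illustration of the order-splitting idea described above: a large order is divided among several simple agents, each waiting for its own price trigger before executing its slice. This is purely schematic; production trading systems weigh many more factors than a single price level.

    def split_order(total_shares, price_triggers):
        slice_size = total_shares // len(price_triggers)
        return [{"shares": slice_size, "trigger": t, "done": False}
                for t in price_triggers]

    def on_tick(price, slices):
        for s in slices:
            if not s["done"] and price <= s["trigger"]:   # buy when the price dips
                s["done"] = True
                print(f"agent bought {s['shares']} shares at {price}")

    slices = split_order(90_000, price_triggers=[50.10, 50.00, 49.90])
    for price in [50.20, 50.05, 49.95, 49.85]:
        on_tick(price, slices)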

Desert challenge
Back in 2004 the US Defense Advanced Research Projects Agency, DARPA, issued a challenge to AI researchers to create an autonomous vehicle that could take part in a 212km race against other similar vehicles on a course that ran through some extremely rough desert terrain. The prize for the team whose vehicle completed the race in the shortest time was $2m.

On 8 October 2005, 23 robot vehicles took part in the race, which took them along a dirt-road course over narrow mountain tracks and through tunnels to a finishing line outside Primm, Nevada. Five of the vehicles completed the course, with the winner, a converted Volkswagen Touareg R5 SUV entered by the artificial intelligence lab at Stanford University in California, taking just 6 hours and 53 minutes.

The winning vehicle was fitted with a range of sensors, including video cameras, radar, accelerometers, laser range finders and a GPS system, along with six computers and some very advanced AI software. The vehicle's software had been trained by being driven over 2,000km of desert track, during which time the sensors observed the terrain the vehicle was driven through and the actions the human driver took to stay on course.
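
The training approach loosely resembles what is now called learning from demonstration: pair each recorded sensor snapshot with the action the human driver took, then act on a new snapshot like the most similar one seen before. The sketch below is hypothetical and hugely simplified compared with Stanford's actual system; the feature names and values are invented.

    def nearest_action(snapshot, recorded):
        """Return the demonstrated action whose sensor snapshot is closest."""
        def distance(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        _, action = min(recorded, key=lambda pair: distance(pair[0], snapshot))
        return action

    # (terrain_roughness, obstacle_offset) paired with the human driver's action
    recorded = [
        ((0.1, 0.0), "hold course"),
        ((0.1, -0.6), "steer right"),
        ((0.8, 0.5), "slow down and steer left"),
    ]
    print(nearest_action((0.15, -0.5), recorded))   # closest to the 'steer right' case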

The DARPA challenge was prompted by plans to use AI and autonomous vehicle technology both in the next generation of military vehicles and in a new generation of planetary rovers. However, the technology demonstrated in this challenge may also, in the near future, see AI systems being used to improve vehicle safety.

A matter of common sense
Marvin Minsky, co-founder of the world-famous MIT Artificial Intelligence Laboratory, declared in a recent speech at Boston University that "AI has been brain-dead since the 1970s." He was referring to the fact that researchers have since been primarily concerned with small facets of the machine intelligence problem, as opposed to looking at the task as a whole.

Throughout the 1980s researchers developed expert systems that emulated human expertise in tightly defined areas like law and medicine, and in such areas they could successfully match users' queries to the appropriate cases or diagnoses. But, and this is the core of Minsky's criticism, such systems were incapable of learning general concepts that could be easily understood by a three-year-old, such as 'fire is hot' and 'water is wet'.

Minsky and other researchers have maintained that this is because such systems did not accumulate "common-sense knowledge", concentrating instead on specialist knowledge. One researcher who has persisted in this area is Douglas Lenat, formerly principal scientist at the Microelectronics and Computer Technology Corporation (MCC). In 1984 Lenat began the Cyc project, later spun off as the company Cycorp, with the aim of codifying, in machine-usable form, the millions of pieces of knowledge that comprise human common sense.

This is a truly massive project, but in Lenat's view, "Once you have a truly massive amount of information integrated as knowledge, then the human-software system will be superhuman, in the same sense that mankind with writing is superhuman compared to mankind before writing." Now, more than 20 years later, the system contains over three million assertions about the world in general.


Virtual common sense

The Cyc system has already gone some way to proving that researchers like Lenat and Minsky were right about the need for a comprehensive common-sense reasoning system, by demonstrating the ability to make correct deductions about things it has never learnt about directly. Its developers believe the system will soon be able to collect data on its own, without human assistance.
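
A toy example gives the flavour of how a common-sense knowledge base can reach a conclusion that was never stated directly, by chaining simple assertions together. Cyc's actual representation and inference engine are far richer than this.

    # Assertions stored as simple (subject, relation, object) triples
    facts = {
        ("fire", "has_property", "hot"),
        ("water", "has_property", "wet"),
        ("hot", "implies_risk", "burns skin"),
    }

    def deduce_risks(subject):
        """Chain has_property and implies_risk assertions into a deduction."""
        risks = set()
        for s, rel, prop in facts:
            if s == subject and rel == "has_property":
                for s2, rel2, risk in facts:
                    if s2 == prop and rel2 == "implies_risk":
                        risks.add(risk)
        return risks

    print(deduce_risks("fire"))    # {'burns skin'} -- deduced, never asserted directly
    print(deduce_risks("water"))   # set(): nothing risky follows from 'wet' here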

What is significant about Cyc is not just the amount of work that has gone into creating it, but the fact that, through the Cyc Foundation, Lenat is making it freely available, allowing developers of other systems to make use of this 'common sense' knowledge base to improve their performance. This means we can expect to see Cyc incorporated into a wide range of AI applications in the near future.

Where are the intelligent machines?
So, are we surrounded by intelligent machines, or are we still waiting? The answer is both yes and no. We do not, of course, have the positronic robots of Asimov's imagining, but machines are becoming more intelligent.

We have dancing and cycling robots like Asimo, and vehicles that can find their own way across a desert in California or over the surface of Mars. Computers have beaten the world's best chess players and scored well in a range of other games played against humans, while artificial intelligence is now a basic component of many computer games.

In fact, artificial intelligence techniques are being applied everywhere: in electronic equipment, telecommunications, the Internet, defence, computer software, consumer goods and business. They are helping to make machines smarter; not smart enough to pass Turing's test for intelligence, but smart enough to make basic decisions based on learning from experience.

This wide use of AI techniques is helping to restore optimism about the future of machine intelligence. Researchers, like those involved in the 'common sense' knowledge system Cyc, are showing renewed confidence in the abilities of their systems. New applications are starting to emerge for such systems and, most importantly, funding for artificial intelligence research is starting to return. Today we could be looking at a new dawn for the quest to create an intelligent machine.

