
The machine that wanted to be a mind

Machines may be adept at going through the motions, but what would it take for them to reach understanding?
Written by Rupert Goodwins, Contributor

Artificial intelligence is one of humankind's greatest and oldest ambitions. The quest for non-human intelligence has captivated magicians, astrologers and mystics for as long as such professions have existed, but it took Aristotle to kick things off properly. He was the first to organise laws of thought and the way they interact with the real world -- the basic concepts behind AI. That was in the fourth century BC, and more than 2,300 years later we still haven't cracked the problem.

Part of the trouble is that nobody knows what AI is. In fact, nobody even knows what the 'I' is. Consciousness is something we all have and enjoy every second of our waking lives, yet nobody knows how it really works. A thought is as difficult to isolate from our mental experience as a single breath of wind is from the weather.

A new sister discipline to AI, cognitive science, has sprung up to try to track down the mechanisms of mind: while it can say that linguistics, philosophy, neurochemistry, anthropology and so on are all part of the mix, it can't say how they combine to make up our selves.

All this hasn't stopped research into machines that think. Since the 1950s, when Alan Turing famously predicted that by the year 2000 machines would be able to pass as human in conversation, the field has attracted high hopes, brilliant minds and heartbreaking failure in equal measure. Because 50 years of failure eventually starts to affect funding, even in academia, the AI field has diversified and experts have established themselves in other areas where they can be said to have had some success.

There are now two sorts of AI: strong AI, which is the business of making computers think, and weak AI, which has computers modelling some aspects of human behaviour. Marketing people love weak AI -- if you see a product described as having artificial intelligence or being 'smart', the chances are that it's got some aspect of weak AI in it.

Weak AI has the enormous benefit that it can be described. A host of approaches, techniques and tools has evolved. Knowledge engineering -- where knowledge about a subject is codified as facts and rules and put into a database -- is of great commercial value, even if it has moved well away from its cognitive beginnings.
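
A taste of what that codification looks like: a minimal Python sketch of knowledge held as facts plus if/then rules, the pattern behind classic expert systems. The car-diagnosis facts and rules here are invented for illustration, not taken from any real system.

# Knowledge base: facts about the world, plus if/then rules.
facts = {"engine_cranks": False, "battery_dead": True}

rules = [
    # (condition over the facts, advice to give if it holds)
    (lambda f: f["battery_dead"], "Replace or charge the battery."),
    (lambda f: not f["engine_cranks"] and not f["battery_dead"],
     "Check the starter motor."),
]

# The 'inference engine' simply tries each rule against the facts.
for condition, advice in rules:
    if condition(facts):
        print(advice)   # prints: Replace or charge the battery.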

Fuzzy logic, visual recognition, natural language processing and other ways of dealing with real-world data all have their roots in AI, and are paying back some of the research investment in other fields -- the paperclip assistant in Microsoft Office uses a strand of weak AI called Bayesian belief networks, while the grammar checker comes from AI language research. Yet while all these model part of what we know about our capabilities as sentient beings, none seems close to providing true sentience. Whatever that is.
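
To see the Bayesian idea in miniature, here is a Python sketch of Bayes' rule, the inference step at the heart of belief networks. The 'writing a letter' hypothesis and its probabilities are invented numbers for illustration, not the Office assistant's actual model.

def posterior(prior, likelihood, evidence_prob):
    # Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
    return likelihood * prior / evidence_prob

p_letter = 0.1             # prior: the user is writing a letter
p_dear_given_letter = 0.8  # P(user types "Dear..." | writing a letter)
p_dear = 0.1               # P(user types "Dear..." at all)

# Seeing "Dear..." raises the belief from 0.1 to about 0.8.
print(posterior(p_letter, p_dear_given_letter, p_dear))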

Some philosophers think that mind is impossible to model: John Searle of the philosophy department at Berkeley has argued as much with his Chinese Room analogy.

Take a room with two slots in the wall, a man inside who speaks only English, and a rulebook. The rulebook tells him how to deal with Chinese sentences that are pushed through the first slot: which characters to choose in reply, and in what order to push them back out through the second slot. The responses may be perfect Chinese, but it does not logically follow that the man understands the language as a native speaker would, rather than merely processing it.
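
In code, the rulebook amounts to nothing more than a lookup table. Here is a minimal Python sketch, with a couple of invented exchanges standing in for Searle's rules:

# The 'rulebook': Chinese in, Chinese out, no comprehension anywhere.
rulebook = {
    "你好吗?": "我很好, 谢谢.",       # "How are you?" -> "Fine, thanks."
    "你叫什么名字?": "我没有名字.",   # "What's your name?" -> "I have no name."
}

def chinese_room(message):
    # The man inside just follows the rules; nothing here understands.
    return rulebook.get(message, "请再说一遍.")  # "Please say that again."

print(chinese_room("你好吗?"))   # perfect Chinese, zero understanding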

Thus, says Searle, any machine with a programmed set of responses cannot be considered intelligent. Unsurprisingly, this is not a popular position among cognitive scientists, but the fact that such an argument still has currency shows that AI as a field is a long way from being as established as physics.

Less drastically, many researchers, such as James Hendler of the University of Maryland, say that any machine consciousness we create is unlikely to be like ours. "Is your cat aware?" asks Hendler, pointing out that it undoubtedly is in some ways, but not in others.

A lot of AI research is dogged by the feeling that if something is intelligent, it should be in some way like us. Alan Turing's test of conversational skill in a computer has been the subject of some high-profile prizes and much research, but consistently produces results that, even when carefully limited by subject matter and style, would make a toddler chuckle.

On the other hand, a program like Eliza, which makes no claims to intelligence but merely reflects back what the user types, can easily fool some people for some time into thinking they're talking to a human. Nevertheless, there is no alternative starting point for AI other than ourselves.
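
Eliza's trick is easy to reproduce. Here is a minimal Python sketch in its spirit -- one pattern and a handful of word swaps, far cruder than Joseph Weizenbaum's original script:

import re

# Swap pronouns so the user's words can be turned back on them.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text):
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def eliza(sentence):
    match = re.match(r"i feel (.*)", sentence.lower())
    if match:
        return "Why do you feel " + reflect(match.group(1)) + "?"
    return "Tell me more."   # stock reply when no pattern matches

print(eliza("I feel nobody understands my work"))
# -> Why do you feel nobody understands your work?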

Take me to Part II: Recreating a sense of self.
