The machine that wanted to be a mind Pt II

Recreating a sense of self is an infinitely confusing goal
Written by Rupert Goodwins, Contributor

Our minds are many things, but they are not pure products of computation. What we consider our sense of self is as much mediated by our personal history and experiences and our interaction with others as it is by the raw grey matter of the brain. And mind is not what it seems -- we think we live in the present, where we perceive events and have thoughts in real time.

We don't -- we live about half a second in the past. Brain scans show that is how long it takes for a perception to become fully integrated with our awareness. Yet we can catch a cricket ball, drive a car and communicate with each other much more quickly than that -- all aspects of being that are in some way removed from our immediate awareness while seemingly part of it.

And the human brain, despite having many similarities to a computer, is unthinkably complex. Each of the neurons in the brain can have up to a thousand connections. When a certain set of conditions at those connections is met, the neuron fires -- but those conditions can vary from moment to moment, depending on what happened last, what state the neuron is in, and what the conditions around it are. That's a very large number of variables to take into account, which the neuron does up to a thousand times a second -- and there are 100 billion neurons in the human brain. That's up to 100 trillion synapses, each of which would have to be modelled in a replica system.
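
For a sense of scale, here's a back-of-envelope sketch of that arithmetic. The figures are the round estimates above, not measurements, and the sketch says nothing about how hard each synapse is to model -- only how many of them there are.

```python
# Back-of-envelope arithmetic using the round figures above.
# Illustrative estimates only, not measurements.

neurons = 100e9                  # roughly 100 billion neurons in the brain
connections_per_neuron = 1_000   # up to a thousand connections each
updates_per_second = 1_000       # each neuron re-evaluates up to 1,000 times a second

total_synapses = neurons * connections_per_neuron                  # about 1e14, i.e. 100 trillion
synaptic_events_per_second = total_synapses * updates_per_second   # about 1e17

print(f"Synapses to model: {total_synapses:.0e}")
print(f"Synaptic events per second: {synaptic_events_per_second:.0e}")
```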

Enormous numbers, even if researchers such as Hans Moravec of Carnegie Mellon University think that we'll achieve the necessary 100 million MIPS/100 million MB machines in twenty to thirty years' time. He points out that the human brain has many more neurons than it does bits of DNA -- in other words, the neurons are put together according to a coded scheme, rather than individually, and that much of the creation of mind probably happens after birth. Our machines, should we build them, may programme themselves as babies do.
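
To make the neurons-versus-DNA comparison concrete, here's a similarly rough sketch. The figure of around three billion base pairs for the human genome is an added assumption for illustration, not a number quoted by Moravec or elsewhere in this article.

```python
# Comparing the neuron count above with the information held in our DNA.
# ASSUMPTION: roughly three billion base pairs (about two bits each) for the
# human genome; this figure is not from the article or from Moravec.

neurons = 100e9            # the 100 billion neurons quoted above
genome_base_pairs = 3e9    # assumed size of the human genome
genome_bits = 2 * genome_base_pairs

print(f"Neurons: {neurons:.0e}")
print(f"Bits of DNA: {genome_bits:.0e}")
print(f"Neurons per bit of DNA: {neurons / genome_bits:.0f}")
# The genome is far too small to specify each neuron's wiring individually,
# which is the point: much of the mind must be assembled after birth.
```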

Yet learning itself remains a mystery. As AI researcher Oliver Selfridge of MIT asks: "Can you think of a chore or duty that a human being doesn't do better the second time, or a chore or duty that a computer does that it does do better the second time?"

He points out that one impressively profound form of learning is common sense, which is now attracting increasing attention. Above all, AI should be able to learn from its mistakes and from its environment, something that neural networks and genetic or evolutionary engineering build into their basic technology.
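
As a purely illustrative toy, and not a description of any researcher's actual system, a minimal perceptron shows the kind of "doing better the second time" that neural networks build in: it nudges its weights after every mistake, and repeating the same chore leaves it with fewer mistakes.

```python
# A toy perceptron that improves with repetition: a minimal illustration of
# learning from mistakes, not a model of any system mentioned in the article.

def predict(weights, bias, inputs):
    """Return 1 if the weighted sum clears the threshold, else 0."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def train_pass(weights, bias, data, lr=0.1):
    """One sweep over the examples, nudging weights after each mistake."""
    for inputs, target in data:
        error = target - predict(weights, bias, inputs)
        weights = [w + lr * error * x for w, x in zip(weights, inputs)]
        bias += lr * error
    return weights, bias

def count_mistakes(weights, bias, data):
    return sum(predict(weights, bias, x) != t for x, t in data)

# A simple chore: learn an OR-style rule from four examples.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
weights, bias = [0.0, 0.0], 0.0

print("Before any training:", count_mistakes(weights, bias, data), "mistakes")
for n in range(1, 5):
    weights, bias = train_pass(weights, bias, data)
    print(f"After pass {n}:", count_mistakes(weights, bias, data), "mistakes")
# The mistake count falls to zero as the same chore is repeated.
```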

Yet another strand of thought says that, as 50 percent of the brain is devoted to processing perceptions, more thought should be given to the mechanics of working out what's happening in the real world and combining these experiences into a cohesive map of the world outside the machine.

After fifty years, there's not much coherence to AI and cognitive science. It's a field where, in the absence of strong direction and much controversy, alternative theories abound. One of the more intriguing is that of quantum consciousness -- that thought emerges from the interactions of components at a sub-atomic scale. The idea's been around since the 80s, but has suffered somewhat from an almost total lack of ideas about how it might work -- and plenty of reasons why it might not.

Large-scale organised quantum events have only ever been observed in laboratory conditions where thermal and other chaotic effects are carefully removed, which cannot be said of the warm wet gloop in the brain.

However, anaesthesiologist Stuart Hameroff has recently proposed that microtubules, tiny structures within cells, may be the site of quantum consciousness. He's been joined by mathematician and consciousness theorist Roger Penrose, who also proposed quantum mechanisms for thought in his book The Emperor's New Mind.

Microtubules are cylindrical protein structures around 25 nanometres across, and they were discovered surprisingly recently -- embarrassingly, the solvent once used to prepare specimens for electron microscopy dissolved their proteins, and it wasn't until the solvent was changed in the 70s that they were spotted at all. Microtubules fit together like Sticklebricks to form an internal skeleton within a cell, and this is normally thought to be their major function.

However, they also possess the ability to switch between states in a nanosecond -- one of the fastest biological processes known -- and may well cooperate to process information in single-cell animals. Unfortunately, there's no known mechanism that would allow them to cooperate with microtubules in other cells, across the cell membrane. Which is where quantum effects come in, says Hameroff.

Exactly how or what is still open to conjecture, and whether there's more to it than the joke made by philosopher David Chalmers -- "Consciousness is a mystery, quantum mechanics is a mystery. When you have two mysteries, well maybe there is really only one. Perhaps they are the same thing" -- remains to be seen.

So AI is a goal that nobody knows how to achieve, or whether it's achievable at all, or what it'll be when we get there. Not a strong candidate for hope. However, the spinoffs of research to date continue to be important, and as computers get more powerful they'll be able to perform more apparently intelligent functions.

Whether this will eventually include true sentience like HAL 9000's -- and whether HAL was truly intelligent -- is as debatable as the number of angels that can dance on the head of a pin. But if we interact with computers as if they were human and abdicate decisions to them as soon as they seem capable of coping, one day we may wake up and find that we've created an intelligent world by accident.

Our current study of strong AI is to the real thing what alchemy was to chemistry. Lots of people are searching for the philosopher's stone, but through sympathetic magic and stabs in the dark rather than through a comprehensive understanding of the problem. Like chemistry, real AI may turn out to be completely different to the thing for which we search in our ignorance. Twenty more years of study and twenty more years of technology, and we might just start knowing where to go.

Take me back to Pt I/ The machine that wanted to be a mind.

ZDNet's Artificial Intelligence Special charts the road to sentience, examines the technologies that will take us from sci-fi to sci-fact, and asks if machines should have rights.
