
Artificial Intelligence: The edge of research and beyond

Part 3: In the third and final part of a special report into AI, it's time to skirt the edges of science fact and examine where current research may eventually lead, if anywhere
Written by Rupert Goodwins, Contributor

While the practical aspects of AI have fallen short of early promises, research continues into genuine mechanisms of consciousness.

One of the biggest problems is that consciousness itself has proved impossible to define. For a long time it was thought to be the exclusive property of humans, but while few these days would deny it to the higher primates and some other mammals, there is no agreement about where, if anywhere, in the animal kingdom it ceases to apply.

Some animals are so simple that their nervous systems can be modelled in their entirety — the sea hare in particular has attracted interest because it has relatively few, very large neurons. You can even run a virtual sea slug on your Windows PC. Such experiments, although full of fascinating philosophical conundrums, don't seem a big step towards AI.
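At its simplest, "running a neuron" means stepping a voltage equation through time. The sketch below is a leaky integrate-and-fire neuron in Python — a toy illustration with made-up parameter values, far cruder than the published sea-slug models, but it shows the basic loop such simulators run:

```python
def simulate_lif(input_current, steps=200, dt=1.0, tau=20.0,
                 v_rest=-65.0, v_reset=-70.0, v_thresh=-50.0, r_m=10.0):
    """Leaky integrate-and-fire neuron: the membrane voltage leaks back
    toward rest, is pushed up by input current, and 'spikes' (then resets)
    whenever it crosses threshold. All parameter values are illustrative."""
    v = v_rest
    spikes = []
    for t in range(steps):
        # dV/dt = (-(V - V_rest) + R*I) / tau  (Euler step)
        v += (-(v - v_rest) + r_m * input_current) * dt / tau
        if v >= v_thresh:
            spikes.append(t)   # record spike time
            v = v_reset        # reset after firing
    return spikes

spikes = simulate_lif(input_current=2.0)
print(f"{len(spikes)} spikes in {200} time steps")
```

With no input current the voltage simply sits at rest and the cell never fires; real models of Aplysia use far richer equations (Hodgkin–Huxley style) per neuron, but the integrate-step-check loop is the same.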

Considerably higher up the evolutionary chain, IBM and the Ecole Polytechnique Fédérale de Lausanne are collaborating on Blue Brain. This is a Blue Gene supercomputer project to model 10,000 complex neurons configured as a rat neocortical column (NCC) — a basic building block, some 0.5mm across and 2mm long, that's repeated throughout the brain and is very similar to the human column. The researchers have mapped out the NCC through ten years of dissection and study, and have also built a model of an individual neuron with which to populate the column.

Blue Brain
As it stands, the computer has 8,192 processors, each of which can model between one and ten neurons; it is capable of 22.8 trillion floating-point operations per second, and is cooled by water from Lake Geneva. The first run of the full 10,000-neuron NCC took place as expected at the end of 2005: the researchers are now on version two of an expected ten versions of the NCC, but aren't saying anything about the results so far.

Once an NCC model is running well, the researchers plan to replicate it to create a full neocortex with around 20 billion neurons. There are plenty of other bits of the brain to recreate too, such as the hippocampus, cerebellum, visual cortex and so on, adding up to around 100 billion neurons in total with 1000 times as many connections. A computer powerful enough to model the entire brain could be built in the next ten years, say the researchers, although well before that point they expect to find useful results about brain data processing, neurological diseases, and how memory and sensation work.
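The scale of that replication is easy to sanity-check with back-of-envelope arithmetic, using the figures quoted above (a sketch from the article's numbers, not project data):

```python
# Figures quoted in the article, rounded orders of magnitude
neurons_per_column = 10_000            # one neocortical column (NCC) model
neocortex_neurons = 20_000_000_000     # target full neocortex
whole_brain_neurons = 100_000_000_000  # whole brain
connections_per_neuron = 1_000         # "1,000 times as many connections"

columns_needed = neocortex_neurons // neurons_per_column
total_connections = whole_brain_neurons * connections_per_neuron

print(f"{columns_needed:,} columns to tile the neocortex")
print(f"{total_connections:.1e} connections in the whole brain")
```

That is two million copies of the column model, and around 10^14 connections — which gives a feel for why a whole-brain machine is put a decade away.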

Fortunately, there is plenty of evidence that the brain evolved in a modular fashion. One of the leading contenders for a theory of consciousness is homuncular functionalism, which says that different modules in the brain, none of them capable of consciousness by themselves, cooperate in a parallel processing matrix to produce the end effect of thought. Some of these modules are very well understood — the visual cortex, for example, has been most amenable to study and we now know how a scene is broken down into its constituent parts by grids of neurons set to trigger on basic components.

Internet equals the human brain
It has been noted that since there are around a billion PCs connected to the Internet, each with around a billion transistors, there already exists enough hardware, connected together, to simulate a human brain. As Kevin Kelly, godfather of Wired magazine and techno-social futurologist, says, some of the Internet's processes already approach the capacity and mechanisms of the processes of mind. The Net "processes one million emails each second, which essentially means network email runs at 1MHz. The Machine's total external RAM is about 200 terabytes. In any one second, 10 terabits can be coursing through its backbone, and each year it generates nearly 20 exabytes of data...

"Both the brain and the Web have hundreds of billions of neurons (or Web pages). Each biological neuron sprouts synaptic links to thousands of other neurons, while each Web page branches into dozens of hyperlinks. That adds up to a trillion synapses [the connections between neurons] between the static pages on the Web. The human brain has about 100 times that number—but brains are not doubling in size every few years. The Machine is."
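Kelly's comparison is straightforward order-of-magnitude arithmetic, reproduced below with the figures rounded from his quote (illustrative estimates, not measurements):

```python
# Rounded figures from Kelly's comparison
web_pages = 100e9            # "hundreds of billions" of Web pages
links_per_page = 10          # "dozens" of hyperlinks, order of magnitude
brain_neurons = 100e9        # human brain
synapses_per_neuron = 1_000  # "thousands" of links per neuron

web_links = web_pages * links_per_page                # ~1e12: "a trillion"
brain_synapses = brain_neurons * synapses_per_neuron  # ~1e14

ratio = brain_synapses / web_links
print(f"the brain has ~{ratio:.0f}x the Web's connections")
```

The "about 100 times" figure falls straight out of the rounding — which is also a reminder of how loose the brain/Internet analogy is.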

Models of the human mind are being fed with information harvested from the real thing. Advances in studies of live, functioning brains through neuroimaging techniques such as positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) show neural activity in real time, to increasingly high precision. fMRI is particularly exciting, as it is non-invasive and harmless: it uses a very strong magnetic field to induce hydrogen nuclei in the brain to emit radio waves. With it, the interrelationships between precisely defined areas of the brain can be studied in action again and again in the same subject — individual brains can be wired quite differently — revealing subtle details of processing.

For example, we now know that even while performing specific tasks, the brain activates areas not directly connected with the business in hand. A task involving the right hand, which directly engages the left hemisphere, also activates the right. This is thought to allow continuous monitoring and learning through feedback from the task, while staying prepared to react to unexpected events.

Quantum neural networks
Nanotechnology is constantly producing new possibilities for computation beyond silicon: semiconducting carbon nanotubes, optical circuits switching at many terahertz, and quantum computing devices. One strand of quantum computing is attracting particular interest: the quantum neural network (QNN). The characteristics of quantum devices can be made to match those of neural networks quite closely — a neural network takes a wide variety of inputs and sees whether they match a condition it has previously learned, by having many processors linked in a parallel architecture similar to the way neurons are linked in animal brains.
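That "match a learned condition" mechanism can be shown with the smallest possible neural network — a single perceptron. The Python sketch below is a toy illustration of the principle (not how QNNs or neuromorphic chips actually work): it learns to fire only when both of its inputs are on.

```python
def train_perceptron(samples, epochs=50, lr=0.1):
    """Adjust weights so the unit fires (outputs 1) only on target patterns."""
    n = len(samples[0][0])
    w = [0.0] * n   # one weight per input, like synaptic strengths
    b = 0.0         # bias, like a firing threshold
    for _ in range(epochs):
        for inputs, target in samples:
            fired = 1 if sum(wi * xi for wi, xi in zip(w, inputs)) + b > 0 else 0
            err = target - fired
            # Nudge each weight toward the correct response
            w = [wi + lr * err * xi for wi, xi in zip(w, inputs)]
            b += lr * err
    return w, b

# Teach it the AND condition: fire only when both inputs are on
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

print([predict(x) for x, _ in data])  # [0, 0, 0, 1]
```

Real networks stack many thousands of such units in layers; the QNN idea is to get the same weighted-sum-and-threshold behaviour from quantum states rather than simulated arithmetic.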

So far, neural networks have been built out of standard processors programmed to emulate neurons and synapses, or specialised circuits which do that emulation in hardware. Stanford professor Kwabena Boahen says that what he is "trying to do now is build chips with something like 100,000 neurons, and then build a multiple-chip network that gets up to about one million neurons. With a network of that size, you can model what the different cortical areas are doing and how they are talking to each other".

A QNN replaces the neurons with a molecular model, and the synapses with the mathematical interactions of atomic bonds. There are massive practical problems with this — not least that quantum computers are very sensitive to their environment and hard to program — as well as deep suspicion in the research community that 'quantum consciousness' is a cover for arm-waving new-age pseudoscience. Nevertheless, the mathematics indicates that QNNs may be among the most efficient ways of modelling neural activity.

AIs can build better AIs
Perhaps the most way-out prediction of AI is the singularity, a science fiction concept first proposed as a legitimate event by writer and mathematician Vernor Vinge. In this, the development of AI reaches a point where it exceeds human capabilities and can consequently design ever more powerful versions of itself. This feedback effect would quickly lead to an intelligence far beyond our understanding or prediction, with a variety of results depending on the optimism or otherwise of whoever's doing the prediction.

One of the chief proponents of this is inventor Ray Kurzweil, who has proposed an enhanced version of Moore's Law called the Law of Accelerating Returns. This purports to show that technology in general improves exponentially, with new ideas turning up whenever a barrier is reached. The exact timing of the singularity has not been agreed, although extrapolating current trends in computer capabilities puts a date between 2025 and 2045 as most likely. Against that possibility, it must be noted that no AI is at present capable of the intelligence of even the more primitive mammals. We'd have to fit the engineering equivalent of 150 million years of evolution into a couple of decades.
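The 2025-2045 window comes from exactly this kind of extrapolation. A sketch of the arithmetic, taking the Blue Brain machine's 22.8 teraflops in 2005 as a baseline — both the brain's operations-per-second figure and the doubling period below are contested assumptions, not settled facts:

```python
import math

def crossover_year(target_ops, base_ops, base_year, doubling_years=2.0):
    """Year when capability reaches target, doubling every `doubling_years`."""
    doublings = math.log2(target_ops / base_ops)
    return base_year + doublings * doubling_years

# Brain estimates of 1e16 to 1e18 operations per second are common guesses
low = crossover_year(1e16, 22.8e12, 2005)
high = crossover_year(1e18, 22.8e12, 2005)
print(f"raw hardware crossover somewhere between ~{low:.0f} and ~{high:.0f}")
```

A two-orders-of-magnitude disagreement about what the brain does only shifts the answer by about a decade — which is why the predicted dates cluster, and also why raw hardware capacity says nothing about the 150 million years of engineering still missing.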
