Turning pings into packets: Why the future of computers looks a lot like your brain

In the future, circuits and systems modelled on human brains could end up in everything from supercomputers to everyday smartphones.
Written by Jo Best, Contributor

The neurones and synapses of the human brain are serving as the inspiration for the next generation of processor hardware.

Image: Getty Images/iStockphoto

If you were looking for a model for the next wave of computing hardware, you could do worse than turn to the human brain: it's small, energy efficient, and offers functionality unmatched by any other machine you'd care to name.

The performance of the human brain remains, by some measures, orders of magnitude ahead of the most powerful supercomputer in existence, yet the brain requires orders of magnitude less space to house and less energy to run. That's why researchers believe technology that mimics the brain -- known as neuromorphic computing -- could be the future of computing.

Despite its name, the aim of neuromorphic computing is not simply to model the workings of human grey matter (though researchers are indeed using it for that). Instead, neuromorphic computing uses the human brain as the inspiration for a new wave of low-energy, high-performance hardware that could end up in everything from supercomputers to smartphones.

"What we are trying to do with neuromorphic computing is basically to build a system, which contains either analogue or digital circuits, that can mimic neurobiological architectures that we have in our system. Why we are trying to do this is because we hope we can get lower energy consumption and higher areal density for complex computational tasks, like pattern recognition, which our brains can do very easily, but even the best computers right now require a lot of power and a lot of energy to do these simple tasks," Manuel Le Gallo, one of IBM's neuromorphic computing researchers, told ZDNet.

The brain uses chemical signals to convey data from one nerve cell, known as a neurone, to the next. Each neurone communicates with the next by releasing chemical messengers, called neurotransmitters, into the gap between them, known as a synapse. If enough of these signals from the first neurone cross the gap and reach the second, an electrical signal known as an action potential is generated within that second neurone and it fires, communicating with a third neurone by either passing on the electrical signal or releasing neurotransmitters of its own to pass on a chemical one. This conversion of chemical signals into electrical signals and back again allows data to be relayed to the brain, and then sent from the brain to other parts of the body.

Neurones don't simply do one-to-one communication, either: one neurone can take inputs from many others and integrate them all before communicating with its successor. It will send out an action potential either if it gets a burst of inputs from a few other neurones in a short period of time, or if it gets single inputs from a larger number of them. And because the brain is a highly adaptable organ, the connections between neurones change as a person ages and learns.

It's the brain's use of variability and plasticity that researchers believe could eventually become the model for the next wave of computing. Rather than using binary states of one and zero as inputs and outputs, neuromorphic hardware weights incoming data according to values stored on the chip itself, allowing it to 'spike' -- produce its own version of an action potential -- when a certain threshold is crossed.
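
In software terms, that behaviour resembles a 'leaky integrate-and-fire' model. The short Python sketch below is purely illustrative -- the class name, weights, leak and threshold values are invented for the example rather than taken from any real chip -- but it captures the idea of weighted inputs accumulating until a spike is produced.

# A minimal leaky integrate-and-fire neurone: weighted inputs accumulate in a
# "membrane potential" that slowly decays; when the total crosses a threshold,
# the neurone spikes and resets. All numbers here are illustrative.
class LIFNeuron:
    def __init__(self, weights, threshold=1.0, leak=0.9):
        self.weights = weights        # one weight per upstream neurone
        self.threshold = threshold    # potential needed to fire
        self.leak = leak              # fraction of potential kept each step
        self.potential = 0.0

    def step(self, spikes_in):
        # spikes_in: list of 0/1 values, one per upstream neurone
        self.potential = self.potential * self.leak + sum(
            w * s for w, s in zip(self.weights, spikes_in))
        if self.potential >= self.threshold:
            self.potential = 0.0      # reset after firing
            return 1                  # emit a spike
        return 0

# Many weak inputs, or a few strong ones in quick succession, push the
# neurone over its threshold -- just as described above.
neuron = LIFNeuron(weights=[0.4, 0.3, 0.5])
print([neuron.step(s) for s in ([1, 0, 0], [1, 1, 0], [0, 1, 1])])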

The idea of neuromorphic computing first gained popularity in the 1990s, after Caltech professor Carver Mead suggested a new model for microprocessors: rather than splitting computing between separate memory and processing units, everything could be performed on the same chip, using connections modelled on human synapses.

"The separation between the CPU and the memory is now the main challenge in our computers, because transistors don't scale anymore and we're basically stuck. We have to come up with new architectures to improve the computing industry," Gallo said.

There are several different technological approaches to rendering neurones and synapses in computing hardware and software, including traditional circuits, analogue CMOS processors, and phase-change memory.

IBM has used the last of these, phase-change memory, to create randomly spiking neurones for use in neuromorphic computing. As electrical impulses reach the phase-change material, it gradually crystallises, until a certain threshold of crystallisation is reached and the artificial neurone fires. According to IBM, the biological equivalent is putting your hand on something hot: if a sensory neurone receives enough inputs telling it that the skin on your hand is in contact with too high a temperature, it will fire, and the information will be conveyed to the brain. With that data, the brain can tell the hand to move.
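
A rough software analogue of that accumulate-and-fire behaviour might look like the sketch below; the increment, noise level and threshold are made-up values chosen for illustration, not IBM's device parameters.

import random

# Toy model of a phase-change neurone: each incoming pulse crystallises the
# material a little more, and once crystallisation passes a threshold the
# cell fires and is reset. The random term stands in for the device
# variability that makes real phase-change neurones fire stochastically.
class PhaseChangeNeuron:
    def __init__(self, threshold=1.0, increment=0.12, noise=0.03):
        self.threshold = threshold
        self.increment = increment
        self.noise = noise
        self.crystallisation = 0.0

    def pulse(self):
        self.crystallisation += self.increment + random.gauss(0, self.noise)
        if self.crystallisation >= self.threshold:
            self.crystallisation = 0.0   # reset to the amorphous state
            return True                  # the neurone fires
        return False

neuron = PhaseChangeNeuron()
print("fired on pulses:", [t for t in range(30) if neuron.pulse()])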

IBM's earlier neuromorphic computing efforts also yielded the TrueNorth processor, a large but low-power CMOS chip with one million programmable neurones, 256 million programmable synapses, and the capacity for 46 billion synaptic operations per second per watt.

Meanwhile, the BrainScaleS system from the University of Heidelberg in Germany uses wafer-scale integration of analogue circuits that mimic the physical behaviour of neurones, but run 10,000 times faster than their biological counterparts.

Another neuromorphic system, SpiNNaker, is being used for modelling the human brain -- the system's name is short for 'spiking neural network architecture'. By connecting together mobile processors using a brain-like architecture, the system is available as an open resource for scientists looking to model the brain's own processes in biological real time. A half-million core version of the system is already up and running as part of the Human Brain Project, and it's hoped a one million core version isn't too far behind.

"The key innovation in SpiNNaker is not in how we do the computing -- it looks like a fairly conventional parallel processor -- it's in how we connect the core together and support communication. The brain is characterised by extremely high connectivity. We implement that on SpiNNaker by using spikes -- communication in the brain is principally by spikes; neurones go ping every so often and we turn pings into packets. We convey that packet around the machine in a very lightweight mechanism, it's intrinsically multicast, so we can connect that ping to a thousand or ten thousand destinations," said Stephen Furber, professor of computer engineering at the University of Manchester, who works on both SpiNNaker and BrainScaleS -- all done in biological real time of sub-millisecond thresholds.

Conventional computers also use packet switching, but they're set up to carry larger amounts of data in chunkier packets, rather than the 40-bit packets that SpiNNaker uses for its pings. As well as supporting neuroscience research, the system is being used in robotics applications.
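
The 'pings into packets' idea is simple enough to sketch in Python: a spike is little more than a small header plus the identity of the neurone that fired, and routers fan that identity out to every subscribed destination. The sketch below assumes the 40-bit packet is an 8-bit header plus a 32-bit neurone key, and the routing table and core names are hypothetical -- real SpiNNaker routers differ in detail.

# A spike as a tiny multicast packet: an 8-bit header plus a 32-bit key
# identifying the neurone that fired, 40 bits in total. A router looks the
# key up in a table and copies the packet to every matching destination --
# a simplified illustration of multicast routing, not the real router logic.
def make_spike_packet(neuron_key, header=0):
    assert neuron_key < 2**32 and header < 2**8
    return (header << 32) | neuron_key

def route(packet, routing_table):
    key = packet & 0xFFFFFFFF            # strip the 8-bit header
    return routing_table.get(key, [])    # fan out to all subscribers

# Hypothetical table: spikes from neurone 42 go to three destination cores.
routing_table = {42: ["core-3", "core-17", "core-101"]}
print(route(make_spike_packet(42), routing_table))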

With its promise of massively parallel computing that's both cheap and energy efficient, it's no wonder hardware giants like IBM and Qualcomm, as well as the US defence research agency DARPA, are already looking into neuromorphic computing's potential. And, despite the technology being very much at the experimental stage, the market for it is growing: according to research firm MarketsandMarkets, it is forecast to expand from $6.6m this year to more than $273m.

Despite the interest around neuromorphic computing, its commercialisation is a long way off and, when it eventually does hit the market, it's unlikely to make its way into common-or-garden computing devices straight away.

"I don't think this is going to replace the computers we have now [soon]. What I think it will be is more like an accelerator: you could have a neuromorphic chip next to your computer and use it for certain tasks that require it, and use the 'today' computer for the rest," IBM's Le Gallo said.

Neuromorphic chips that can help make sense of big data in the fields of financial transactions, weather, or sensor networks could reach the market within a couple of years, but a more generalised use of the technology is likely to be several more years away.

For most people, their first contact with neuromorphic computing may one day come through their smartphone's personal assistant.

"Neuromorphic [systems] have attracted a lot of interest over the last five years, because although they're not built directly for deep network machine learning, they are quite close to that sort of thing. We now have quite a lot of companies talking to us saying 'can we use SpiNNaker to explore this idea?'," University of Manchester's Furber said.

"The smartphone people are all very keen to get all the speech and object recognition into your phone. You can talk to Siri, but Siri's in North Carolina. If your phone isn't connected, Siri isn't connected either. They want to change that and move that into the handset... everyone's searching for a solution and they're aware neuromorphics have the potential to do some of this stuff very efficiently."
