Neuromorphic computing could solve the tech industry's looming crisis

Brain-based computing could help tech companies overcome the current constraints of chip design.
Written by Jo Best, Contributor

What's the best computer in the world? The most souped-up, high-end gaming rig? Whatever supercomputer took the number one spot in the TOP500 this year? The kit inside the datacentres that Apple or Microsoft rely on? Nope: it's the one inside your skull. 

As computers go, brains are way ahead of the competition. They're small, lightweight, have low energy consumption, and are amazingly adaptable. And they're also set to be the model for the next wave of advanced computing.

These brain-inspired designs are known collectively as 'neuromorphic computing'. Even the most advanced computers don't come close to the human brain -- or even most mammal brains -- but our grey matter can give engineers and developers a few pointers on how to make computing infrastructure more efficient, by mimicking the brain's own synapses and neurones.

SEE: Building the bionic brain (free PDF) (TechRepublic)

First, the biology. Neurones are nerve cells, and work as the cabling that carries messages from one part of the body to another. Those messages are passed from one neurone to the next until they reach the right part of the body, where they can produce an effect -- by causing us to be aware of pain, move a muscle, or form a sentence, for example.

Neurones pass messages to each other across a gap called a synapse. Once a neurone has received enough input to trigger it, it passes a chemical or electrical impulse, known as an action potential, onto the next neurone, or onto another cell, such as a muscle or gland.

Next, the technology. Neuromorphic computing software seeks to recreate these action potentials through spiking neural networks (SNNs). SNNs are made up of artificial neurones that signal to one another by generating their own action potentials, conveying information as they go. The strength and timing of the messages cause the neurones to remap the connections between them, allowing the SNN to 'learn' as inputs change, much as the brain does.
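
To make the software side concrete, here is a minimal Python sketch of a single leaky integrate-and-fire neurone with a crude spike-timing-based weight update. It is a toy illustration of the SNN ideas above, not code for any particular neuromorphic chip or framework, and every parameter in it (threshold, decay, learning rate, input spike probability) is an arbitrary assumption chosen for the demonstration.

# Toy leaky integrate-and-fire neurone with a crude plasticity rule.
# All parameters here are illustrative assumptions, not real hardware values.
import numpy as np

rng = np.random.default_rng(0)

n_inputs = 5           # number of presynaptic neurones feeding this one
steps = 100            # simulation time steps
threshold = 1.0        # membrane potential at which the neurone fires
decay = 0.9            # leak: fraction of the potential retained each step
learning_rate = 0.05   # how much a contributing synapse is strengthened

weights = rng.uniform(0.1, 0.5, n_inputs)      # synaptic strengths
potential = 0.0                                # membrane potential
last_input_spike = np.full(n_inputs, -np.inf)  # last time each input fired

for t in range(steps):
    # Random input spikes stand in for upstream neurones firing.
    input_spikes = rng.random(n_inputs) < 0.2
    last_input_spike[input_spikes] = t

    # Add the weights of the inputs that spiked to a leaky copy of the old potential.
    potential = decay * potential + weights[input_spikes].sum()

    if potential >= threshold:
        # The neurone fires: strengthen synapses whose inputs spiked recently
        # (a crude stand-in for spike-timing-dependent plasticity), then reset.
        recently_active = (t - last_input_spike) <= 3
        weights[recently_active] += learning_rate
        weights = np.clip(weights, 0.0, 1.0)
        potential = 0.0
        print(f"t={t:3d}  output spike  weights={np.round(weights, 2)}")

Run over a few hundred steps, the neurone fires whenever enough recent input arrives, and the synapses that contributed are strengthened, a very rough analogue of how an SNN 'learns' as its inputs change.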

On the hardware side, neuromorphic chips are also a fundamental departure from the CPUs and GPUs used in most computing hardware today. Traditional architectures have been creaking for a while now, with manufacturers finding it harder and harder to cram more transistors onto a single chip as they run up against the limits of physics, power consumption and heat generation. At the same time, we're generating more and more data and consuming more and more computing power, meaning that the super-adaptable, super-powerful, super-low-energy computer in our heads is starting to look increasingly interesting as a technology model.

"Our best computers are stagnating and fluctuating in performance. Now we have a huge rush to find something that can continue the improvement in new computer science that we have seen for the last however-many decades. People are looking around for different technologies, and neuromorphic is probably the most promising one among everything else," says Suhas Kumar, research scientist at Hewlett Packard Enterprise.

Rather than separating out memory and computing, as most chips in use today do, neuromorphic hardware keeps the two together, with processors having their own local memory -- a more brain-like arrangement that saves energy and speeds up processing.
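
As a loose illustration of that arrangement, the sketch below models two 'cores' that each hold their own synaptic weights in local memory and communicate only through small spike events, rather than all reading from one shared memory. The class, the weights and the thresholds are invented for the example and do not describe any vendor's actual chip.

# Illustrative only: cores with local state exchanging spike events.
from dataclasses import dataclass, field

@dataclass
class NeuromorphicCore:
    core_id: int
    threshold: float = 1.0
    # Local memory: this core's synaptic weights, keyed by the sender's id.
    weights: dict = field(default_factory=dict)
    potential: float = 0.0

    def receive_spike(self, sender_id: int) -> bool:
        """Integrate an incoming spike event locally; return True if this core fires."""
        self.potential += self.weights.get(sender_id, 0.0)
        if self.potential >= self.threshold:
            self.potential = 0.0
            return True
        return False

# Two cores wired together: core 1 listens to core 0 with a weight of 0.6.
core0 = NeuromorphicCore(core_id=0)
core1 = NeuromorphicCore(core_id=1, weights={0: 0.6})

# Core 0 fires twice; only the second event pushes core 1 over its threshold.
for event in range(2):
    fired = core1.receive_spike(sender_id=0)
    print(f"event {event}: core 1 fired = {fired}")

Keeping each core's weights next to the logic that uses them, and passing only small spike messages between cores, is the sketch's stand-in for the co-location of memory and processing described above.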

Neuromorphic computing could also help spawn a new wave of artificial intelligence (AI) applications. Current AI is usually narrow and developed by learning from stored data, developing and refining algorithms until they reliably match a particular outcome. Using neuromorphic tech's brain-like strategies, however, could allow AI to take on new tasks. Because neuromorphic systems can work like the human brain -- able to cope with uncertainty, adapt, and use messy, confusing data from the real world -- they could lay the foundations for AIs to become more general.

"The more brain-like workloads approximate computing, where there's more fuzzy associations that are in play -- this rapid adaptive behaviour of learning and self modifying the programme, so to speak. These are types of functions that conventional computing is not so efficient at and so we were looking for new architectures that can provide breakthroughs," says Mike Davies, director of Intel's Neuromorphic Computing Lab.

SEE: Neuromorphic computing finds new life in machine learning

Neuromorphic computing has its roots in computing systems, developed in the late 1980s, that were designed to model the workings of animal nervous systems. Since then, the field has been gathering pace, to the extent that some of tech's biggest names have already produced neuromorphic hardware: IBM's TrueNorth chip, as well as Intel's 128-core Loihi chip and its Pohoiki Beach neuromorphic system, are already out in the wild, for example.

For now, though, most uses of neuromorphic systems are confined to research labs: in Intel's case, for example, its hardware is being used to develop an experimental wheelchair-mounted robotic arm for people with spinal injuries, as well as artificial skin to give robots a sense of touch. However, they're unlikely to stay that way -- according to HPE's Kumar, the first commercial systems that rely significantly on neuromorphic computing could be available in five years.

"Most of the developments that we are seeing in neuromorphic computing are very different and a significant jump from the state of the art; it is expected that it would take a good amount of time before, for instance, things become as scalable as CMOS hardware is today," says Abhronil Sengupta, assistant professor in Pennsylvania State University's School of Electrical Engineering & Computer Science. "There are challenges, yes -- but I also feel that significant progress is being done and will be done to overcome those," he adds.

It's thought that we could first see neuromorphic systems powering robotics and autonomous cars, where probabilistic computing could be particularly useful -- calculating the risk of someone running into the road and whether to alter the car's behaviour accordingly, for example.
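
As a toy example of that kind of probabilistic decision, the sketch below uses a simple Bayesian update to revise an estimate of whether a pedestrian will step into the road as noisy observations arrive, and slows the car once the estimate crosses a threshold. The prior, the sensor likelihoods and the threshold are invented for the illustration and are not drawn from any real driving system.

# Toy Bayesian update for a crossing-risk estimate; all numbers are invented.
def update_belief(prior: float, likelihood_if_crossing: float,
                  likelihood_if_not: float) -> float:
    """Bayes' rule: revised probability that the pedestrian steps into the road
    after one noisy observation that is consistent with a crossing."""
    numerator = likelihood_if_crossing * prior
    evidence = numerator + likelihood_if_not * (1.0 - prior)
    return numerator / evidence

belief = 0.05  # prior: pedestrians rarely step out without warning
for observation in range(3):
    # Each observation (say, a head turn towards the road) favours a crossing.
    belief = update_belief(belief, likelihood_if_crossing=0.9, likelihood_if_not=0.2)
    action = "slow down" if belief > 0.5 else "maintain speed"
    print(f"observation {observation}: P(crossing) = {belief:.2f} -> {action}")

A real neuromorphic system would run this kind of inference in a far more parallel, event-driven way; the sketch only shows the shape of the decision being made.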

As well as expanding the 'what' of AI, neuromorphic computing can expand the 'where' of AI. Rather than handing off AI tasks to cloud systems that need tonnes of power and cooling, the low energy demand of neuromorphic computing means those tasks could potentially be done by hardware like smartphones, tablets, drones, and wearables instead.

"Until now, the story of computing has been more about cramming more devices into a smaller space on a chip. However, going forward it's going to be more about cramming more intelligence, in other words, cramming more functions, into a given volume of material. And that requires innovations in everything from materials to chip architecture and software," HPE's Kumar said.

SEE: Software developers: How plans to automate coding could mean big changes ahead

For neuromorphic computing to make a more substantial impact, a number of changes will need to happen across the wider tech industry. Sensor technologies aren't set up in a way that works well with neuromorphic systems, for example, and will need to be redesigned so that they deliver data in a form that neuromorphic chips can process.

What's more, it's not just the hardware that needs to change, it's people too: according to Intel's Davies, while the hardware is relatively mature, one of the challenges facing the field is in the basic software programming models and the algorithmic maturity. "That's where we really need a genuine partnership with neuroscientists, with a new open-minded breed of machine-learning data scientists to think about rethinking computation in this way," he says.

Neuromorphic computing could lead to a far more integrated collaborative technology industry, one where computing becomes an end-to-end system design problem. Greater collaboration with neuroscientists seems likely, because the brain has far more to tell us about what computing could do better, particularly around algorithms.

For example, Penn State's Sengupta is working on recreating the way glial cells, known as the brain's support cells, affect the phase synchronisation of neurones, with a view to applying it in neuromorphic computing. There is huge potential in unlocking other aspects of the brain that could inspire better systems, he argues. "Unlocking various other aspects of the brain, like individual components, or the underlying architecture for better algorithm design, I feel is also a very promising pathway going forward," he says.
