
Who needs a supercomputer? Your desktop PC and a GPU might be enough to solve some of the largest problems

A new method significantly reduces the amount of memory needed for brain simulations, freeing some AI models from the need for a supercomputer.
Written by Daphne Leprince-Ringuet, Contributor

James Knight and Thomas Nowotny both worked on developing the method behind procedural connectivity.

Image: The University of Sussex.

Armed with a single turbocharged GPU, a team of researchers have successfully simulated part of a monkey's brain – something that normally requires a powerful and expensive supercomputer, but which the scientists maintain can now be carried out on a desktop PC. 

The experiment, which involves simulating millions of neurons, as well as the billions of connections between those neurons, was performed by researchers from the University of Sussex using a single desktop PC fitted with a latest-generation graphics processing unit (GPU).

While GPUs have long been leveraged to accelerate computation processes for AI models, running a model of this size on hardware that could be found in most gamers' bedrooms is a first. Using a new method they developed, the researchers effectively created a model of a macaque's visual cortex, complete with billions of synapses, which previously could only be simulated on a supercomputer.  

Most brain simulations of this kind require the huge amounts of memory that are provided by supercomputer systems. But the scientists developed a more efficient technique called "procedural connectivity" to drastically reduce the amount of data that needs to be stored to carry out the simulation. The research was published in Nature Computational Science.


Modelling a brain typically requires a spiking neural network – a special class of AI systems that mimics the behavior of the brain, in that neuron models communicate by sequences of spikes.  
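To make that idea concrete, here is a minimal, purely illustrative sketch of a single spiking neuron in plain Python. The leaky integrate-and-fire model and its parameters are textbook assumptions for illustration, not the model used by the Sussex team: the neuron integrates its input until its voltage crosses a threshold, emits a spike, and resets.

```python
# Illustrative leaky integrate-and-fire neuron (made-up parameters,
# not the researchers' model). The neuron integrates its input and
# emits a discrete "spike" whenever its voltage crosses a threshold.
def simulate_lif(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    v = 0.0
    spike_times = []
    for step, current in enumerate(input_current):
        v += dt / tau * (current - v)      # leaky integration of the input
        if v >= v_thresh:                  # threshold crossed: emit a spike
            spike_times.append(step * dt)
            v = v_reset                    # reset the voltage after the spike
    return spike_times

# A constant drive produces a regular train of spike times.
print(simulate_lif([1.5] * 200))
```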

To accurately predict how the spikes will affect neurons, the information describing which neurons are connected by synapses – and how strongly – is usually generated and stored before the simulation is run. Since neurons only spike intermittently, however, it is highly inefficient to keep such vast amounts of data in memory at all times.
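A rough back-of-the-envelope calculation gives a sense of the scale; the figures below are assumptions chosen for illustration, not numbers from the paper.

```python
# Why storing every synapse up front needs supercomputer-scale memory
# (assumed, illustrative figures, not taken from the paper).
neurons  = 4_000_000          # "millions of neurons"
synapses = 10_000_000_000     # "billions of connections"
bytes_per_synapse = 8         # e.g. a 4-byte target index plus a 4-byte weight

print(f"~{synapses / neurons:.0f} outgoing synapses per neuron on average")
print(f"~{synapses * bytes_per_synapse / 1e9:.0f} GB for the connectivity table alone")
# Already far beyond a desktop PC's RAM, and models with tens of
# billions of synapses go much further still.
```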

Procedural connectivity, by contrast, enables the researchers to generate the data about neuron connectivity on the fly, and only when it is needed, instead of storing it in memory and retrieving it. This removes the need to store connectivity data in memory altogether.

"These experiments normally require that you generate all the connectivity data upfront and fill your memory with it, and our method is about avoiding that process," James Knight, research fellow in computer science at the University of Sussex, who co-authored the research, tells ZDNet. 

"Using our approach, details of a connection get re-generated each time a spike is emitted by a neuron," he continues. "We use the power of the GPU to re-compute the connection on the fly, whenever a spike is emitted." 

By drawing on the GPU's considerable computational power, the spiking neural network can therefore "procedurally" generate its connectivity data on the fly, as neuron spikes are triggered.
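A heavily simplified sketch of that idea, in plain Python with assumed parameters rather than the GPU code the researchers actually use: each neuron's outgoing connections are re-derived from a per-neuron seed whenever it spikes, so the same targets and weights come back every time and nothing has to be kept in memory between spikes.

```python
# Simplified sketch of procedural connectivity (assumed implementation;
# the real method runs on the GPU, but the principle is the same).
import numpy as np

N_POST  = 1_000   # number of possible target neurons (illustrative)
FAN_OUT = 100     # outgoing connections per neuron (illustrative)

def outgoing_connections(pre_id, master_seed=1234):
    # Seeding the generator with the spiking neuron's id makes the result
    # deterministic: calling this again for the same neuron reproduces
    # exactly the same connectivity, so no table is stored in memory.
    rng = np.random.default_rng(master_seed + pre_id)
    targets = rng.integers(0, N_POST, size=FAN_OUT)
    weights = rng.normal(0.1, 0.01, size=FAN_OUT)
    return targets, weights

def deliver_spike(pre_id, post_input):
    targets, weights = outgoing_connections(pre_id)  # rebuilt on the fly
    np.add.at(post_input, targets, weights)          # accumulate the input

post_input = np.zeros(N_POST)
deliver_spike(pre_id=42, post_input=post_input)      # one spike delivered
```

The trade-off is extra computation every time a spike fires, which is exactly the kind of highly parallel, repetitive work a modern GPU handles well.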

The method builds on research initially proposed by US researcher Eugene Izhikevich in 2006, but computers at the time were too slow for the idea to be widely applicable. Modern-day GPUs, however, can boast about 2,000 times the computing power available 15 years ago, and according to Knight, are a "perfect fit" for spiking neural networks. 

In fact, not only did the researchers' results match those obtained by advanced supercomputers, but they were reached faster. It took 8.4 minutes for the model to simulate each biological second in the resting state – up to 35% less time than previous supercomputer simulations, such as the one that was run on an IBM Blue Gene/Q supercomputer in 2018. 

As Knight explains, this is because IBM's device was made up of 1,000 compute nodes networked together in a room. "However sophisticated the system is, there is still some latency between the nodes," says the scientist. "The further you spread out your model, the slower it's going to be. Our model can be orders of magnitude faster." 


On top of speeding up experiments, the researchers hope that the new method will lead to more scientific discoveries by lowering the hardware barrier to entry for computing large problems. In the field of brain simulation especially, the size of models can quickly reach mind-boggling dimensions requiring terabytes of data, and yet access to supercomputers remains the privilege of only a few research teams.

Knight and his team's method could not only let neuroscience and AI researchers simulate brain circuits on their local workstations, but also enable people outside academia to turn their gaming PC into a computer capable of running large neural networks. "One of the Nature Computational Science reviewers was tasked with reproducing the work and trying it on their own computer," says Knight. "So, if you have a computer and a suitable GPU, you can check the paper for instructions on how to reproduce it." 

Procedural connectivity, of course, is particularly suited to the spiking neural networks that are used for brain simulation experiments, but Knight is confident that more AI applications will emerge as brain-inspired machine learning gathers pace. Whether to chart the behavior of mammals' brains, or develop better speech recognition tools, spiking neural networks are increasingly gaining the attention of scientists and businessmen alike – and now with the right GPU, those next-generation technologies could get started directly in the bedroom. 
