Parallels have been drawn between the human brain and the computer since technology's earliest days. One day, however, computing could be used to help brains damaged by traumatic events like a stroke to work once again.
Like a computer, the brain requires huge numbers of connections to work, allowing messages to be passed from one part of the brain to another, or from the brain to the body. If any of those connections are blocked or broken, the messages can't get through. In the case of spinal cord injury, messages from the brain to the muscles of a limb might be cut off, leaving the limb paralysed. In the case of stroke, if the language production center of the brain can't talk to the part that forms speech, the person can be left unable to talk.
The Center for Sensorimotor Neural Engineering (CSNE), based in the US and funded by the country's National Science Foundation, is developing a mixture of homegrown machine learning software and off-the-shelf hardware that could, in the future, be used to restore limb function to those with brain or spinal cord injury.
In the past, researchers often tackled the problem of limb paralysis by creating robotic hands or other prostheses that a patient could control using the electrical signals generated by their brain. The CSNE is instead hoping to use technology as a bridge between different parts of the nervous system that have become disconnected, enabling those parts that have lost function to become active once again.
"We're designing devices called brain-computer interfaces. These are implantable devices that can be used for reconnecting parts of the brain and nervous system that have become damaged or disconnected due to injury or other neurological disease... We're pursuing a non-traditional approach, which is directly building devices that could enhance rehabilitation and allow paralysed limbs to be reanimated," Rajesh Rao, director of the CSNE, told ZDNet.
The devices could either collect information from one part of the brain, process it, and convey it to another using a single integrated chip, or transmit the data wirelessly to an external device where an AI can process it before passing it on to the spinal cord, which in turn translates it into signals to the person's muscles. Alternatively, if the nature of the injury required a system with more computational power, the device could be housed elsewhere in the body -- in the chest cavity, for example, with wires running under the skin from electrodes in the brain to the device.
"The bottom line is, depending on the patient's requirements and needs, we would have different amounts of computation and algorithmic sophistication in the software and machine learning," Rao said.
To get disconnected parts of the brain or spinal cord talking once again, two things need to happen: first, the brain signals that indicate a particular thought -- the intention to pick up a particular object, say, as well as the instructions that tell the muscles to do just that -- need to be decoded and translated into signals that the hardware device can understand. Second, those signals need to be re-encoded from machine signals back into neural signals, and then transmitted to the right part of the body, be it another part of the brain, the spinal cord, or a muscle.
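The two-step pipeline described above -- decode intent, then re-encode it for the body -- can be sketched in a few lines. This is a hypothetical illustration, not the CSNE's software: the feature name, threshold, and stimulation patterns are all invented for the example.

```python
# Hypothetical sketch of a bidirectional bridge: (1) decode recorded neural
# features into an intended action, (2) re-encode that action as a
# stimulation pattern for the spinal cord. All names and values are assumed.

def decode_intent(features: dict) -> str:
    """Step 1: map neural features (e.g. band-power changes) to an intent."""
    # A real decoder would be a trained classifier; this is a stand-in rule.
    if features.get("motor_cortex_power", 0.0) > 0.5:
        return "grasp"
    return "rest"

STIM_PATTERNS = {
    # Step 2: each intent is re-encoded as per-electrode amplitudes (assumed units)
    "grasp": {"electrode_1": 1.2, "electrode_2": 0.8},
    "rest":  {"electrode_1": 0.0, "electrode_2": 0.0},
}

def bridge(features: dict) -> dict:
    """Full bridge: decode intent, then look up the stimulation to deliver."""
    return STIM_PATTERNS[decode_intent(features)]

print(bridge({"motor_cortex_power": 0.9}))  # the "grasp" pattern
```

A real system would replace the threshold rule with a trained model and the lookup table with a learned encoder, but the decode-then-re-encode structure is the same.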
The CSNE, which brings together researchers from the University of Washington, MIT, and San Diego State University, is working on both sides of the problem through its bidirectional brain-computer interface, with the ultimate aim of restoring function to paralysed arms and hands.
"We have students and researchers that are working on extracting movement signals from the brain's cortical region and, using those signals, we're able to then stimulate particular areas of spinal cord. We're looking at how we can use that connection from the brain to the spinal cord to allow a person to regain voluntary control of their hand that might have been paralysed," Rao said.
While it may be possible to do that by stimulating muscles in the limb directly, Rao said the approach can lead to the muscles becoming fatigued. By targeting the spinal cord and the brain, the center is hoping to avoid that fatigue and recreate the body's own neural pathways.
The process of learning to translate brain signals into computer signals begins with human patients concentrating on moving a cursor on a screen while thinking about making a certain movement. Both the human brain and machine learning systems will learn to adapt in order to better control the cursor.
The human brain notices the correlation between its own activity and how modulating that activity causes different results. "The brain gets better and better at generating the appropriate kind of signal. Every time you imagine moving your hand you might see an increase in a particular area of brain activity, so we'll use that increase to make the cursor go up. As the person notices that as they imagine moving their hand, the cursor moves up a little bit, the brain starts to amplify the activity -- eventually the person is close to 100 percent accurate at hitting the target on the computer screen," Rao said. And the brain activity in a paralysed individual who's thinking of moving their hand will eventually become stronger than that of a non-paralysed individual physically making the same movement.
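A toy sketch of the mapping Rao describes -- activity above a resting baseline drives the cursor upward -- might look like this. The baseline calibration, gain, and activity values are illustrative assumptions, not the center's actual decoder.

```python
# Toy cursor decoder: calibrate a resting baseline, then turn any increase
# in activity above that baseline into upward cursor velocity.

def calibrate_baseline(rest_samples):
    """Estimate resting activity so increases above it can be detected."""
    return sum(rest_samples) / len(rest_samples)

def cursor_velocity(activity, baseline, gain=2.0):
    """Cursor moves up in proportion to activity above the resting baseline."""
    return gain * max(0.0, activity - baseline)

baseline = calibrate_baseline([0.9, 1.0, 1.1])  # resting activity around 1.0
print(cursor_velocity(1.5, baseline))           # imagined movement -> cursor up
print(cursor_velocity(0.8, baseline))           # rest -> no motion
```

As the quote describes, the brain learns to push the measured activity further above baseline, which this mapping rewards with larger cursor movements.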
"The brain is extremely adaptive -- even if you're not perfect in decoding [in order] to determine the exact intention, the brain will learn, like you were learning any motor skill, like typing, or riding a bicycle. The brain can learn to control a physical device or its own stimulation," Rao added.
And so too can the center's algorithms, using a co-adaptive approach -- that is, as the brain makes changes to itself, the machine learning software does likewise.
"We have to be careful, we don't want the brain and machine or computer interface to be at odds with each other; they have to be working in sync and cooperating with each other," Rao said. For example, the center's system is arranged so that brain and machine learning adapt in turn, so they can take account of any changes in the other before they change themselves.
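That turn-taking arrangement can be illustrated with a minimal simulation: a decoder gain and a simulated "brain" signal adapt on alternating steps, so each side updates against a momentarily stable partner. Everything here -- the learning rate, the scalar "brain", the target -- is a made-up sketch, not the center's algorithm.

```python
# Minimal turn-taking co-adaptation: the decoder and a simulated "brain"
# never adapt on the same step, so each sees a stable partner while it
# updates. All quantities are illustrative scalars.

def coadapt(target=1.0, steps=20, lr=0.3):
    signal, gain = 0.2, 0.5           # brain's output and decoder's gain
    for step in range(steps):
        output = gain * signal        # cursor response both sides observe
        error = target - output
        if step % 2 == 0:
            gain += lr * error * signal    # machine's turn: brain held fixed
        else:
            signal += lr * error * gain    # brain's turn: decoder held fixed
    return gain * signal

print(coadapt())  # settles close to the target of 1.0
```

If both sides updated simultaneously, each would be chasing a moving target -- the "at odds with each other" failure mode Rao warns about; alternating the updates is one simple way to keep them in sync.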
The center is also working on brain-inspired techniques like reinforcement learning -- making sure the brain-computer interface is giving the patient the reward they want and expect. "The brain computer interface is monitoring whether what it's doing is helping the user achieve that user's goal, and then adapting only if it's in sync with the user. That's the co-adaptive brain computer interface we're trying to build," Rao added.
Once the information on what movement the person is trying to make has been gathered, it needs to be fed back to the neurons of the spinal cord in a way they can understand and pass on. Here, too, the center is inspired by neuroscience discoveries: by learning how a particular movement is encoded as signals in an uninjured brain and spinal cord, the center can learn to replicate those signals and deliver them back into the patient's nervous system in a way that mimics their own biology.
There are several different technologies that could potentially be used to collect and retransmit the signals: some researchers believe optogenetics -- stimulating the brain through light -- could be the way forward; others favour magnetic stimulation through magnetic nanoparticles or transcranial magnetic stimulation, or even through focused ultrasound.
The center, however, is backing a more familiar technology: the humble electrode. According to Rao, electrodes can both record signals and stimulate the brain, and two patients have already had strips of electrodes implanted in their brains to record signals that the center gathers and uses for research purposes. That work has already shown which brain activity correlates with which hand grips, such as those used to hold a pen, a glass, or a briefcase.
But while the electrode may be common, it's not commonly used in the human brain. CSNE researchers are looking at some adaptations to the electrode to allow it to be left in the human body for years.
One group at the center is experimenting with glassy carbon electrodes, another with making electrodes more flexible so they're more compatible with biological tissue, and a third is looking into reducing the electrode's footprint. "You might have a probe that is inserted into the nervous tissue and then, once the scar tissue forms, other tinier electrodes will pierce through and be able to record and not cause the formation of additional scar tissue," Rao said. (Unlike in muscle tissue, scar tissue in the brain will block new neurons growing through it, making it even harder to convey neural signals through damaged brain tissue.)
"There's some work [on electrodes], but it's not the main area of emphasis for the center. We're looking at the whole system, the software and using commercially available hardware. Our focus is understanding the principles of bi-directional brain-computer interfaces," Rao told ZDNet.
The system may not need to be left in the body permanently, however. Thanks to Hebb's theory of neuroplasticity, summarised neatly as 'neurons that fire together, wire together', the artificial connections made by the CSNE's device could eventually encourage the brain to grow its own biological connections. Ultimately, that could mean patients recover to the point where the device can one day be removed.
"Ideally we would want this device to be completely self-contained. If you're helping someone with stroke, this [device] could be recording from one location in the brain and stimulating another location in the brain. If it works successfully, we may even want to take that out if they've been rehabilitated to a large degree."
As with any technology that can give the human body or brain abilities it didn't have before, the question arises of whether it could also be used to augment the abilities of people without injuries.
"We're already doing that with our smartphones, we're already augmenting our brains -- and in the future could we access the internet directly through brain signals as opposed to using our hands? There are people that have thought of that, but we're very, very far away from that particular scenario, because we still don't really understand the brain to the degree that would allow us to interact with information in that abstract sense," Rao said.
The CSNE, however, is preparing for all eventualities: its neuroethics team has been set up to look at the implications the technology could have for those who have it implanted, and for society at large in areas such as privacy and identity. The work of the neuroethics team will be factored into the design of the device from its earliest stages, Rao said.
Still, the center has a long road ahead before it will have a fully-fledged system that can be implanted into patients. "I'm optimistic that in hopefully less than 10 years we can start doing some human trials, starting to integrate brain signals with stimulators within the brain for stroke or spinal cord injury patients. We're hoping we can start running bidirectional interfaces in human trials in the next 10 years," Rao concluded.