Elon Musk's Neuralink explained: Get ready to merge your mind with a computer

Elon Musk's secretive startup has promised to connect humans and machines. Here's a look at how it might work -- and the barriers it will need to overcome.
Written by Jo Best, Contributor

Neuralink is a two-year-old startup that has yet to release a single product, but which is promising to build the technology to connect human brains with computers.

If it were run by anyone other than Elon Musk, no one would give it a second look. But, with the entrepreneur behind Tesla and Hyperloop at its helm, could Neuralink really be about to deliver a way to merge humans and machines?

Neuralink's stated goal is to develop ultra-high bandwidth brain computer interfaces to connect humans and machines. The job listings -- which are pretty much the only thing you'll find on the company's sparse website -- give you a flavour of the challenge that Neuralink, which recently reportedly raised $39m in new funding, will face.

For example, the company is looking for an engineer to work on the development of materials and processing methods that "do not currently exist" as part of its plan to develop the high-density, reliable neural interfaces that form the foundation of any brain computer link.

"These materials directly interact with tissue to pick up and send brain signals, consequently, this work is a key component in the final product," says the job ad with a certain amount of understatement. Another of Neuralink's vacancies is for an optical engineer to develop custom optics and imaging systems used directly in its surgical robot (after all, what else would install a brain computer interface apart from a surgical robot?).

In truth, the idea of brain computer interfaces (BCIs) isn't new: in recent decades, computing power has grown while medical technologies like EEG and MRI have become more sophisticated and more widely used, increasingly revealing the physical workings of the thinking brain. As a result, the possibility of directly connecting a human brain to computers in the outside world became less sci-fi dream and more realistic tech prospect.

The quest to build a functioning brain computer interface has spawned numerous separate strands of research over the years, all seeking to find ways to translate human thoughts into computer commands without the need for a physical action.

So far, much of the work on systems that connect brains with electronics has been undertaken with a medical goal in mind. Some of the first brain computer interfaces were neuroprostheses -- systems designed to help those with neurological impairments regain lost function. The first neuroprostheses were designed to replace impaired senses, typically hearing and, more recently, elements of sight.

Other research efforts have focused on restoring lost physical capabilities: for example, creating a brain computer interface that would allow someone with a spinal injury to grasp an object by controlling a robot arm, or move a cursor using only their brain's electrical signals.

The early part of Neuralink's work may build on this. The company has suggested the first applications will be aimed at certain types of brain injuries, such as stroke, cancer, and congenital problems (while it has given no indication of what type of congenital lesions it has in mind, cerebral palsy might be a likely candidate). In all three, there are often areas of functional brain separated by islands of brain where the neural cabling has died, often due to a lack of blood supply.

One of the odd things about human nerves is that they aren't great at recovering after injury, so once a connection has been sufficiently severed, it's unlikely to regrow (this is one of the reasons why transplanting limbs has traditionally been a challenge for medicine). However, if an artificial implant could allow the signal from one functional area of the brain to reach another by leapfrogging the dead area, it could enable people with brain injuries to recover capabilities that have been lost.

If function can't be restored -- for example, in the case of quadriplegia -- BCIs can still be useful. BCIs could, for example, enable people to control robot avatars of themselves to carry out tasks they aren't able to, or effectively use their thoughts to control a web interface.

And those are not the only medical conditions brain computer interfaces could help with. Epilepsy, too, has been suggested as a potential target, and seems a good one given the condition is a disorder of electrical signalling within the brain; so has Parkinson's, where deep brain stimulation is already helping to tackle some of the symptoms of the disease.

It has also been suggested Neuralink's work could be used to help combat Alzheimer's dementia or the gradual fading of memory that comes with old age. Such conditions present a far more complex challenge for BCI companies: many of the conditions mentioned in connection with Neuralink are those where the problem is chiefly one of signalling (think of it as a condition where messages can't get from point A to point B because there's a break in the road). Alzheimer's dementia and age-related memory loss are due to far larger structural problems in the brain: brain matter itself atrophies, meaning the message can't get from point A to point B because there's just no road, or points, to start with. (Interestingly, the shrinkage that happens in old age is typically due to under-stimulation -- and what could be more stimulating than having a huge chunk of computing infrastructure permanently plugged into your grey matter?)

Medicine may have spurred the genesis of brain computer interfaces, but the field has already expanded outside of healthcare: brains have been used to pilot drones, for example, and could one day end up being the way we control our smart homes.

But building a brain computer interface that allows you to do many things, rather than one that's used for a single application, could prove difficult, thanks to the limitations of both human and machine hardware. One of the greatest challenges that BCIs face is achieving a bidirectional information flow for multiple applications: the human can talk to the machine, and the machine can talk back.

Ultimately, Neuralink is thinking much bigger, and going far beyond traditional medical technology. Longer term, the company wants to plumb human intelligence into artificial intelligence, offering upgrades to everyday brains as a way of "democratising" smartness.

Musk has said before that he believes AI is one of the greatest threats to humankind, and brain computer interfaces could be one way of tempering that threat, giving us a chance to keep up and even eventually become part of the super-intelligent AI. That's why a high bandwidth link is needed: to enable our minds to eventually communicate at the same pace as an AI.

Musk claimed late last year that Neuralink would unveil the first fruits of its work within months, and that the work would "be better than anyone thinks is possible" (though it may be worth highlighting that he made that statement while smoking marijuana on Joe Rogan's podcast).

And when earlier this year he was asked for an update, Musk simply tweeted 'Coming soon'.

Despite Musk's promise of products on the horizon, there will likely be many, many years of work ahead before you'll be telepathically linked to your computer.

Before any new medical tech can hit the market, particularly anything reaching the brain, there will need to be an extensive period of pre-market testing in the lab using animals.

Reports suggest that Neuralink had been looking to build a new HQ with room for rodent testing. Rodents are typically used in neuroscience research due to structural similarities between rodent and human brains -- the Human Brain Project, a research effort to model the human brain in computing, has recently completed a model of the mouse brain. While the plans for the new HQ were apparently later ditched, the fact that the company was seeking such a location suggests the work is at least mature enough to warrant testing. The company is also reportedly funding primate research at the University of California, though the scope of the work is being kept under wraps.

One of the major issues that Neuralink will have to tackle is that existing brain computer interfaces are unidirectional and single application: in the case of artificial ears, information from the outside world is carried to the brain for the task of hearing; in the case of the thought-controlled robot, the information goes the other way to enable movement. Enabling bidirectional interfaces would prove challenging for both human and machine.

"It's very challenging to do from the machine side and even from the human side. If I want to change a channel on the TV, I have to imagine myself changing the channel, and if I want to close the door, I have to imagine another action within my brain to close the door. Just changing that imagination is very difficult to do quickly and it's very tiring as well -- after 20 minutes you'll feel very fatigued because you're not used to doing that," Dr Marvin Andujar, assistant professor and director of the Neuro-Machine Interaction research lab at the University of South Florida, says.

To be able to control even a universal remote, say, would require humans to undertake a huge shift in thinking. "If we would like to control machines with our brains, it's very difficult for a human to do that right now because we're not trained like that to use our brain in that perspective -- we'd need a lot of training to be able to use that type of technology," he added. It would be particularly tricky for older people -- children's neural plasticity, natural creativity and curiosity mean they'd be likely to adapt to new interfaces far more easily.
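To make that concrete, here is a minimal, purely illustrative sketch of how this kind of "imagined action" decoding is often handled in research settings: EEG is band-pass filtered into the frequency band associated with motor imagery, reduced to log band-power features per channel, and fed to a simple classifier. Everything below -- the synthetic data, the sampling rate, the two imagined commands -- is an assumption for illustration, not a description of Neuralink's or Andujar's actual systems.

```python
# Illustrative sketch only: decode two imagined commands from (synthetic) EEG
# by band-pass filtering, taking log band power per channel, and training a
# simple classifier. Not based on any published Neuralink design.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import LogisticRegression

FS = 250                # assumed sampling rate (Hz)
N_CHANNELS = 8          # assumed number of EEG channels
N_TRIALS = 200
TRIAL_LEN = 2 * FS      # two-second imagination window per trial

def band_power(trials, low, high, fs=FS):
    """Log band power per channel for each trial (trials: [n, channels, samples])."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, trials, axis=-1)
    return np.log(np.mean(filtered ** 2, axis=-1))

# Synthetic stand-in data: class 1 trials get slightly stronger 10 Hz activity.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, N_TRIALS)   # 0 = "change channel", 1 = "close door"
t = np.arange(TRIAL_LEN) / FS
trials = rng.normal(size=(N_TRIALS, N_CHANNELS, TRIAL_LEN))
trials += labels[:, None, None] * 0.5 * np.sin(2 * np.pi * 10 * t)

features = band_power(trials, 8, 12)    # mu-band power per channel
clf = LogisticRegression().fit(features[:100], labels[:100])
print("held-out accuracy:", clf.score(features[100:], labels[100:]))
```

Even in this toy setup, the pipeline only distinguishes two commands after training on labelled examples from that user -- a hint at why Andujar describes real-world, many-command control as slow, tiring and heavily dependent on practice.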

While Musk hasn't been forthcoming about the nature of the product(s) that Neuralink is working on, he has discussed the need for a type of 'neural lace' -- a nod to the Iain M Banks Culture novels where brain computer interfaces are common. Neural lace could work as "a digital layer above the cortex... just as your cortex works symbiotically with your limbic system, a third digital layer could work symbiotically with the rest of you," Musk has said.

Musk suggested that neural lace wouldn't necessarily mean brain surgery, and that it could be conveyed to the brain via the body's arteries or veins. There are already drugs that are programmed to travel through the bloodstream, but only activate once they've crossed the blood-brain barrier: it's not so far-fetched that suitable small electronics could also only switch on once they reach the brain.

For now, however, there are two main schools of thought on how to get the brain part and the computer part of the BCI to talk to each other. Under the invasive approach, the skull is opened up and electrodes are implanted onto the surface of the brain; under the non-invasive approach, the electrodes sit on the skull, and there's no surgery required.

Musk may hope for a system delivered through the bloodstream, but it appears that Neuralink is looking towards a good old-fashioned invasive approach. Researchers associated with the company published a paper describing a system that would insert flexible polymer probes into the brain using a robotic insertion device described as a "sewing machine". The system has already been demonstrated on a rat, according to the paper, and used to record outputs from its brain.

However, the brain is a dense piece of matter, housing billions of neurons. For a multi-application brain computer interface to work, the interface would potentially need to access or interpret signals from all of them (certain neurons are thought to respond exclusively to specific individuals, including celebrities, for example). Accessing an individual neuron is a problem neuroscientists haven't cracked, let alone technology companies.

Pleasingly to our egos but problematically for programmers, each of our brains is unique in its electrical signals, meaning a BCI for one individual might not work for another. "For each person, we have our own unique brainwaves -- it's like our own biometrics. It's going to make it very difficult to make a universal machine," Andujar says.

One interesting and thorny problem about building brain computer interfaces is that the human brain is as much a mystery as the depths of the oceans: while some parts of it have been well mapped, others are still opaque to science. That doesn't necessarily present as great a problem to brain computer interfaces as it might, however: in fact, brain computer interfaces are shedding light on the workings of the black box that is the human brain.

"We're learning more about how to control tools with the brain and we're learning more about how the brain works itself," Dr Jason Connolly, an assistant professor in the Department of Psychology at Durham University working on brain computer interfaces, said. As researchers begin to harness and interpret signals from the prefrontal cortex -- one of the regions of the brain responsible for higher-level cognitions -- and how that translates into actions, "that's also going to back-propagate and teach us about how the prefrontal lobe itself works".

But it's not just a question of better understanding our own wiring; we'll also need more powerful electronics before BCIs can become commonplace. Ultimately, BCIs will need to be so simple they can be "controlled by a computer, iPad, or anything like that by an everyday person. That's the issue -- basically developing a deep neural network to analyse the data. Now you can do it on a laptop -- it's got to have a very good, very fast GPU, but you can do it. But if you want to make [a BCI] mobile, you want to make it down to the level of a phone or tablet to do the processing, at the moment we're not there yet. As computers move forward, I think that problem will be solved as well," Connolly said.
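As a rough illustration of the scale involved, here is a hedged sketch of the sort of small deep neural network Connolly is describing for analysing EEG on modest hardware: a tiny 1D convolutional classifier over multi-channel recordings. The architecture, layer sizes and data shapes are assumptions chosen for illustration; nothing here reflects a published design from Neuralink or the researchers quoted.

```python
# Illustrative sketch: a deliberately small 1D CNN over multi-channel EEG,
# the kind of model that could plausibly fit on phone-class hardware.
# Sizes and shapes are assumptions, not a published BCI architecture.
import torch
import torch.nn as nn

class TinyEEGNet(nn.Module):
    def __init__(self, n_channels=8, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=25, padding=12),  # temporal filters
            nn.BatchNorm1d(16),
            nn.ELU(),
            nn.AvgPool1d(4),                                        # downsample in time
            nn.Conv1d(16, 32, kernel_size=15, padding=7),
            nn.BatchNorm1d(32),
            nn.ELU(),
            nn.AdaptiveAvgPool1d(1),                                # global temporal pooling
        )
        self.classify = nn.Linear(32, n_classes)

    def forward(self, x):                  # x: [batch, channels, samples]
        return self.classify(self.features(x).squeeze(-1))

# One forward/backward pass on fake data, just to show the shapes and cost.
model = TinyEEGNet()
eeg = torch.randn(16, 8, 500)              # 16 two-second trials at 250 Hz
loss = nn.CrossEntropyLoss()(model(eeg), torch.randint(0, 2, (16,)))
loss.backward()
print("parameters:", sum(p.numel() for p in model.parameters()))
```

A network this small trains and runs comfortably on a laptop; the open question Connolly raises is squeezing the whole acquire-filter-classify loop onto phone- or tablet-grade silicon in real time.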

In the future, brain computer interfaces could ultimately allow the human brain to patch into artificial intelligences and other resources, giving the brain's computing power almost limitless upgrade potential. Would we use it to give ourselves da Vinci-style intelligence, or just upgrade to be the best Fortnite player the world has ever seen? Hopefully patching our brains into computers will not only bring us artificial intelligence, but artificial wisdom with it.
