
AI that knows you're sick before you do: IBM's five-year plan to remake healthcare

A mix of artificial intelligence and custom silicon could help people diagnose themselves with a range of conditions before they show symptoms.
Written by Jo Best, Contributor

A silicon wafer designed to sort particles found in bodily fluids for the purpose of early disease detection.

Image: IBM Research

A chip that can diagnose a potentially fatal condition faster than the best lab in the country, a camera that can see so deeply into a pill it can tell whether its molecular structure matches a genuine tablet or a counterfeit, and a system that can help identify whether a patient has a mental illness just from the words they use: IBM is betting that a mix of AI and new hardware can make all three possible within the coming years.

IBM's research labs are already working on turning these concepts into fully fledged healthcare tools, combining the company's existing machine learning and artificial intelligence systems with newer kit, including revamped silicon and millimetre-wave phased-array sensors.

The latter will be used in 'hyperimaging' systems -- tools that can capture not only the visible light that humans can see, but also parts of the electromagnetic spectrum that we can't.

By combining such information with high-powered cameras and other sensors, hyperimaging could allow clinicians to see into, say, a tablet at a molecular level, to determine whether it's a safe pharmaceutical or one of the medications that make up the multibillion-dollar counterfeit drugs market. "By using not just the traditional light spectrum but those other EMR radiation, we can collapse that into a much more rich image that will give us all kinds of clues we don't get today," Rashik Parmar, technical executive and distinguished engineer at IBM, told ZDNet.
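To give a loose sense of what that 'collapsing' might look like, the sketch below fuses several co-registered spectral bands of one scene into a single composite, then scores how far a sample deviates from a genuine reference. The band names, weights, and scoring rule are illustrative assumptions for this article, not IBM's actual algorithms.

```python
import numpy as np

# Illustrative sketch only: band names, weights, and the anomaly score are
# invented for this example; IBM's hyperimaging pipeline is not public.
def fuse_bands(bands: dict[str, np.ndarray], weights: dict[str, float]) -> np.ndarray:
    """Collapse co-registered spectral bands into one weighted composite."""
    fused = np.zeros_like(next(iter(bands.values())), dtype=float)
    for name, img in bands.items():
        norm = (img - img.min()) / (np.ptp(img) + 1e-9)  # scale each band to [0, 1]
        fused += weights.get(name, 0.0) * norm
    return fused

def anomaly_score(sample: np.ndarray, reference: np.ndarray) -> float:
    """Mean per-pixel deviation from a reference composite, e.g. a genuine tablet."""
    return float(np.abs(sample - reference).mean())

# Example: visible light plus two non-visible bands of a 64x64 scene.
rng = np.random.default_rng(0)
bands = {b: rng.random((64, 64)) for b in ("visible", "millimetre_wave", "infrared")}
weights = {"visible": 0.5, "millimetre_wave": 0.3, "infrared": 0.2}
composite = fuse_bands(bands, weights)
print(anomaly_score(composite, reference=np.full((64, 64), 0.5)))
```

In a real system the hard parts are exactly the ones Parmar describes: cheap, miniaturised sensors, and cognitive algorithms that can interpret and visualise the fused image.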

While the hardware needed to bring hyperimaging to market already exists, more work needs to be done before the systems are ready to roll out commercially. "Sensors for other parts of the EMR spectrum have been there for some time, the advances are simplifying and miniaturising that, and reducing the cost, then there's the cognitive algorithms that can interpret that and visualize that in a way that makes sense," Parmar added. "We've got some quite Heath Robinson devices at the moment, but we can see how to miniaturise that pretty quickly."

While hyperimaging technology is likely to make its way onto the market first in self-driving cars -- with AI crunching the data behind the scenes, allowing the vehicles to recognise whether objects nearby are risks to be avoided or just street furniture to be ignored -- it will eventually make its way into medical devices too. For example, hyperimaging could allow dentists to instantly detect whether a tooth is diseased, or augment the information provided by a standard X-ray.

While hyperimaging may be coming to a self-driving car near you soon -- well, in the next five years or so -- and to a pharmacist near you a couple of years after that, further out still there's no reason why our smartphones couldn't be turned to the same task. Eventually, should the necessary sensors and imaging kit be incorporated into phones, hyperimaging could help people with allergies and illnesses like coeliac disease scan their food for traces of the substances that trigger their condition.

Similarly, the next few years could also see IBM using artificial intelligence and new analytical techniques to produce a 'lab on a chip' -- a pocket-sized device that would be able to analyse a single drop of blood or other bodily fluid to find evidence of bacteria, viruses, or markers such as proteins that could be indicative of an illness.

"We've worked six or seven years ago on the notion of nanofibres which mimic smell so that you can use to make miniature cantilevers, and based on odours going past that, you can use that to create a virtual nose. If you combine that with other forms of sensor, you can use nanostructures to take a sample of any body fluid -- whether it's saliva, or blood, or a tissue sample from a biopsy -- and you can start to use that to analyse it for potential diseases... you start to bring together digital manufacturing and 3D printing type technologies which will allow us to put sensors in custom designed probes, which would effectively do the analysis and tell us what we're looking at," Parmar said.

Rather than having to wait days or weeks after a blood test for a virus to be cultured enough for it to be identified, these tiny labs on a chip could pick up the smallest traces of the organism.

Perhaps its greatest use, however, could be in letting people know about health conditions before any symptoms begin to show. Take Alzheimer's disease, for example: the neurobiological changes that cause signs of the disease will have done their work before any of those signs are evident in the patient. By checking a person's blood for biomarkers of the disease at regular intervals, patients could be warned of early indications of the condition, and start treatment or planning accordingly.
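As a toy illustration of that kind of longitudinal screening -- the biomarker, threshold, and window size below are invented for the example, not clinically validated values -- a monitoring routine might simply watch repeated blood-test readings for a sustained rise:

```python
from statistics import mean

# Illustrative sketch only: real screening would rest on clinically
# validated biomarkers and models, not a hand-picked threshold.
def flag_early_signal(readings: list[float], threshold: float, window: int = 3) -> bool:
    """Flag when the rolling mean of the last `window` readings exceeds a threshold."""
    if len(readings) < window:
        return False
    return mean(readings[-window:]) > threshold

# Quarterly blood-test values for a hypothetical 'marker_x'.
history = [0.8, 0.9, 1.1, 1.4, 1.7]
if flag_early_signal(history, threshold=1.2):
    print("Early indication detected: refer for follow-up testing.")
```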

While analysing the contents of a drop of blood at a nanoscale level will need huge AI processing power, the real challenge for IBM in bringing labs on a chip to market is in the silicon.

"These chips are now monitoring down at the 20nm level, that gives you quite a fine-grained view of things, so we can see at the virus level. Getting down to that smaller resolution requires a lot of work."

Mental health is another area where artificial intelligence will chew up vast quantities of data and turn it into useful information for clinicians. Over the next two years, IBM will be creating a prototype of a machine learning system that can help mental health professionals diagnose patients just from the content of their speech.

Speech is already one of the key signals that doctors and psychiatrists use to detect the onset of mental illness, checking for signs including the rate, volume, and choice of words. Now IBM is hoping that artificial intelligence can do the same, by analysing what a patient says or writes -- whether from consultations with a doctor or the content of their Twitter feed.

"We've had deep research for many years in understanding the links between words and the implications of using that word in that particular context, and that's helped us build psychometric profiles... the research agenda we're trying to work on from here is to say, the use of the words in that context, does it help to understand the mental insights of that individual?" Parmar said.

IBM already has form with such tools: one of the first commercial uses of Watson, Big Blue's cognitive computing system, was as a doctor's assistant for cancer care. Now the company is working with hospitals and other partners to build prototypes for other cognitive tools in healthcare. For instance, IBM said that Jupiter Medical Center, a regional medical facility in South Florida, will adopt Watson for Oncology. IBM also has cancer training partnerships with Memorial Sloan Kettering Cancer Center (MSK).

As well as mental illnesses like schizophrenia, bipolar disorder, or depression, the system could also help diagnose neurological conditions including Parkinson's, using textual analytics alongside data from other sources such as wearable fitness or medical devices.
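A minimal sketch of that kind of multimodal combination -- assuming invented field names and sensor readings rather than any real IBM schema -- might simply merge text-derived features with wearable data into one feature vector for a downstream model:

```python
# Illustrative sketch only: field names and values are invented; a real
# system would train a model on clinically labelled multimodal data.
def combined_features(text_feats: dict[str, float], wearable: dict[str, float]) -> dict[str, float]:
    """Merge text and wearable features into one namespaced vector."""
    merged = {f"text_{k}": v for k, v in text_feats.items()}
    merged.update({f"wearable_{k}": v for k, v in wearable.items()})
    return merged

sample = combined_features(
    {"words_per_minute": 92.0, "lexical_diversity": 0.48},
    {"tremor_amplitude": 0.31, "gait_variability": 0.12},  # e.g. from a wrist sensor
)
print(sample)
```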

While all those data sources are already available to medical professionals to help with diagnosis and treatment, IBM hopes that using machine learning will make the process faster and add a further layer of insight.

"There are various trials that are done in the US and Europe, and professionals share their learning through papers, and you hope that learning filters through, but no one sits back and pulls all of that data together and says 'is there some connection between them, and could we do something smarter by connecting them?'."

