
Google AI is very good at predicting when a patient is going to die

Google takes a 'gobble-it-all' approach to building predictive analytics for patient outcomes.
Written by Liam Tung, Contributing Writer


Having been trained on 46 billion data points drawn from patients' electronic health records, Google's AI is now showing promise at predicting health outcomes for patients.

Researchers from Google Brain and Stanford University recently published a paper in npj Digital Medicine, a Nature partner journal, detailing their work using big data and deep-learning methods to predict the fate of inpatients.

The researchers used the algorithms to predict important outcomes: in-hospital death; readmission, as a measure of quality of care; a patient's length of stay, as a measure of resource utilization; and a patient's discharge diagnoses, to see how well clinicians understood a patient's problems.

The team took a different approach to building predictive statistical models: rather than removing most of a patient's information before analysis, it fed the models a 'representation' of all of a patient's health records, including clinical notes.

As the researchers note, roughly 80 percent of the effort in creating an analytic model goes into cleaning the data, so this approach could provide a way to scale up predictive models, assuming the data is available to mine.
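A minimal sketch of what such a 'gobble-it-all' representation might look like: every event in a patient's record, structured fields and free-text notes alike, is kept and ordered in time rather than filtered down to a few hand-picked variables. The field names and patient data below are hypothetical, not Google's actual pipeline.

```python
# Illustrative sketch only: turn a patient's complete record into one
# chronological event sequence instead of discarding most of it.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Event:
    timestamp: datetime
    source: str    # e.g. "lab", "medication", "clinical_note"
    tokens: list   # tokenized content of the event

def record_to_sequence(raw_events):
    """Keep every event, order by time, and tokenize free text rather than dropping it."""
    sequence = []
    for ev in raw_events:
        text = str(ev["value"])
        sequence.append(Event(
            timestamp=ev["time"],
            source=ev["type"],
            tokens=text.lower().split(),
        ))
    return sorted(sequence, key=lambda e: e.timestamp)

# Hypothetical patient record mixing structured results and a clinical note.
patient = [
    {"time": datetime(2018, 5, 1, 9, 0), "type": "lab", "value": "creatinine 2.1 mg/dL"},
    {"time": datetime(2018, 5, 1, 11, 30), "type": "clinical_note",
     "value": "Patient short of breath, crackles at both bases."},
    {"time": datetime(2018, 5, 1, 12, 0), "type": "medication", "value": "furosemide 40 mg IV"},
]

for event in record_to_sequence(patient):
    print(event.timestamp, event.source, event.tokens)
```

Because nothing is hand-selected, the same sequence-building step can in principle be applied at any hospital without site-specific feature engineering.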

The researchers also developed a way to show clinicians exactly which data the model "looked at" when it predicted an outcome for a patient.

This technique would let clinicians check whether a prediction is based on credible facts, and it addresses concerns about so-called 'black-box' methods that don't explain why a prediction has been made.
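One common way to surface "what the model looked at" is to expose per-event relevance weights and show the highest-weighted events to a clinician. The sketch below is illustrative only: the scores are made up, whereas a real system would take them from the trained model.

```python
# Illustrative sketch only: rank the events in a patient's record by a
# softmax-normalized relevance score so a clinician can inspect the
# evidence behind a single prediction.
import numpy as np

events = [
    "lab: creatinine 2.1 mg/dL",
    "note: short of breath, crackles at both bases",
    "medication: furosemide 40 mg IV",
    "vitals: SpO2 88% on room air",
]

# Hypothetical raw relevance scores produced by the model for one prediction.
scores = np.array([1.2, 2.7, 0.4, 3.1])
attention = np.exp(scores) / np.exp(scores).sum()  # softmax over events

# Highest-weighted events first, so the clinician sees the main evidence at the top.
for weight, event in sorted(zip(attention, events), reverse=True):
    print(f"{weight:.2f}  {event}")
```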

Google started working on the project with UC San Francisco, the University of Chicago Medicine, and Stanford Medicine last year, gaining access to a vast trove of de-identified medical records with which to validate its deep-learning models.

In total, they had access to health records on 216,221 adult patients who were hospitalized for 24 hours or more, which produced over 46 billion data points.

"We demonstrate that deep-learning methods using this representation are capable of accurately predicting multiple medical events from multiple centers without site-specific data harmonization," the researchers note.

As Bloomberg reports, medical experts have been impressed by Google's ability to dig data out of PDFs and handwritten notes on old charts, which have previously been difficult to incorporate into predictive models; Google's system is reportedly both faster and more accurate than previous techniques.

The study has created excitement at Google because it may open a new door to the lucrative healthcare market, where it could one day sell AI-as-a-service to time-constrained clinicians.

The research showed that Google's models are better at predicting a range of outcomes and metrics for patients than traditional methods.

On inpatient mortality, for example, Google's model scored 0.95, where 1.0 is a perfect score, compared with 0.86 for traditional predictive methods.
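Scores of this kind in clinical prediction work are typically area under the ROC curve (AUROC), where 1.0 means the model ranks every patient who died above every patient who survived and 0.5 is chance. A minimal sketch of computing such a score, using scikit-learn and made-up labels and risk predictions (not data from the study):

```python
# Illustrative sketch only: compute an AUROC-style score from predicted risks.
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 0, 1, 1, 0, 1]                       # 1 = patient died in hospital
y_score = [0.05, 0.2, 0.9, 0.65, 0.7, 0.85, 0.1, 0.6]   # model's predicted risk of death

print(f"AUROC: {roc_auc_score(y_true, y_score):.2f}")
```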

In a blog post, Google played down the idea that its AI would replace human clinicians in diagnosing patients.

"We emphasize that the model is not diagnosing patients -- it picks up signals about the patient, their treatments and notes written by their clinicians, so the model is more like a good listener than a master diagnostician," the researchers note.

