Garbage in, garbage out: Data science, meet evidence-based medicine

Did you ever wonder how data is used in the medical industry? The picture that emerges from talking to the experts leaves a lot to be desired.

Data science and analytics -- in other words, the art and science of managing and using data to derive insights -- are probably something you have at least a passing acquaintance with. Their applications are transforming practically every domain of human activity, and they are core topics of our column.


Evidence-based medicine, on the other hand, is something most people have probably not heard of before. We had not either, until recently. For some of us, COVID-19 was a trigger to venture into previously unknown territory, such as epidemiology and medicine.

We have found that applying the faculty of critical thinking and the principles of data-driven decision making can go a long way towards a more informed assessment of the situation. The fact remains, however, that we are not experts in epidemiology or medicine.

In a series of two articles, we converse with the experts on everything from how medical data is generated and managed, to predictive modeling, transparency and bias, COVID-19, and the role of industry and the World Health Organization.

From data and evidence to facts

As the COVID-19 situation unfolds, the amount of information pertaining to it is exploding. And it's not just the amount of information that is exploding, but also the amount of attention it attracts. As a consequence, misinformation around COVID-19 has been spreading as well.

The World Health Organization has called this phenomenon an "infodemic." This, in turn, has mobilized fact-checkers and regulators, as well as funders supporting researchers who explore how best to counter the spread of COVID-19 misinformation.

Some, like Timothy Caulfield, are quite vocal and aggressive about it. Others, like Critica's founders, call for a less polemical approach. Critica is a small NGO started by Jack and Sara Gorman after they wrote their book Denying to the Grave: Why We Ignore the Facts That Will Save Us.

David Scales is Critica's chief medical officer and an assistant professor of Medicine at Weill Cornell Medical College. Scales specialized in internal medicine and has a Ph.D. in sociology, with a particular interest in the sociology of science.


Data and analytics are transforming every domain of human activity, and medicine is no exception.

Photo by Franki Chamaki on Unsplash

Scales defined Critica's mission as trying to help ensure that scientific consensus is what guides decision making -- public and policy decision making as well. He went on to add that scientific consensus is something that can be contested, but that is the type of contesting that should go on among experts: "Consensus should be revised, but that consensus is what should guide our behavior as non-experts and policymakers."

Case in point, the Cochrane controversy. This issue, little known beyond the confines of medical circles, does more than exemplify how data science relates to medicine via aspects of data governance, methodology, and bias. Cochrane was the touchpoint that connected us with Scales, who noted that the data science audience is of great interest to Critica.

Critica focuses on what it is that constitutes evidence, and how data and evidence become facts. These may sound like philosophical questions, but they have very real implications. So let's briefly review a couple of key concepts -- evidence-based medicine, and randomized controlled trials -- and see how they relate to data science and its practices.

Cochrane, the Google of evidence-based medicine

Scales described evidence-based medicine (EBM) as a social movement that coalesced in 1992 with the publication of an article recommending it. He went on to add that using the best evidence possible to make clinical decisions seems obvious, so it's important to know what evidence-based medicine was arguing for, and what it was arguing against.

Before the EBM movement, Scales said, many decisions were essentially made by deferring to the most eminent professor in the medical college.

"That person was probably a very wise physician, but was making decisions based on their experience based on their clinical judgment," Scaled said. "I'm not saying this was bad. It might have been fantastic, but the thing is, there was no evidence to support that their decisions were good. Or, who knew if they were even biased against particular sub-populations?"

This process was based on internalized knowledge, but without necessarily being transparent. That started the EBM movement, and today, the movement has gained a lot of momentum. Of course, EBM is not without its woes. As Scales put it, medical professionals who subscribe to EBM need to know what is the best evidence that can answer their questions.


Cochrane is the Google of evidence-based medicine.

"You know, it's a difficult thing to make every decision based on evidence. And if I have to make every decision and go to Medline and do a search to weed through all of the clinical data, to help me guide my decisions... there's just so many questions and so much evidence, so many papers out there. It's impossible," Scales said.

Scales added: "Cochrane is an organization, it's a consortium. It's changed over time, but you can think of it as a loose network of scientists and physicians who carry high standards for what constitutes quality evidence, who then do systematic reviews of various different questions in medicine, and publish those reviews as Cochrane Reviews."

Cochrane is the closest thing to a shortcut to finding answers -- like the Google of EBM. Scales described Cochrane Reviews as trying to pull in what is considered to be the highest quality evidence, and that's usually randomized controlled trials. That's what makes it into Cochrane Reviews, and those randomized controlled trials get assessed for their quality as well.

Randomized controlled trials

But what is a randomized controlled trial (RCT)? Scales explained it by breaking down the terms. Randomized refers to trying to compare a group that receives some treatment to a control group that either does not receive the treatment or receives a placebo:

"It's controlled in the sense that it has that control group, and people are randomized to one of those conditions or the other. The idea behind it is that randomization helps make the study more generalizable, by helping distribute equally and randomly any potential confounders.

If you want to test a medication, then it's a very powerful way to test if it works, because there's a lot of things that go with giving a medication. If you do a trial that does not have a control group with a placebo, for example, it's really easy to say that the medication works, when maybe people would have gotten better anyway.

So, you have to ask yourself: Does giving the medication actually help reduce the amount of time, reduce the severity? These are the types of questions that you can answer with a RCT. People often also talk about blinding -- randomized, double blind placebo controlled trials. This increases the quality, because even the people in the study can be biased.

So, if you blind them, if you shield the people in the study to the conditions, so that even the doctors giving the medications don't know if the patient is getting the treatment or the placebo, then that also helps reduce the confounding and makes it so that we can trust the results more."
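The balancing effect Scales describes can be seen in a few lines of code. The following is a minimal sketch, not from any real trial: all patient counts, ages, and variable names are invented for illustration. It randomly assigns simulated patients to two arms and shows that a confounder such as age ends up distributed roughly evenly between them, with no deliberate balancing.

```python
# Hypothetical sketch: why randomization distributes confounders evenly.
# All numbers and names are illustrative, not taken from any real study.
import random

random.seed(42)

# 1,000 simulated patients; "age" stands in for a potential confounder.
patients = [{"id": i, "age": random.randint(20, 80)} for i in range(1000)]

# Randomly assign each patient to the treatment arm or the control arm.
random.shuffle(patients)
treatment = patients[:500]
control = patients[500:]

def mean_age(group):
    return sum(p["age"] for p in group) / len(group)

# With enough patients, the two groups end up with similar average age,
# even though no one balanced them deliberately.
print(round(mean_age(treatment), 1), round(mean_age(control), 1))
```

With small groups the balance is much weaker, which is one reason trial size matters: randomization only washes out confounders on average.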


Randomized controlled trials are the most commonly used method in evidence-based medicine, and they are also used to measure interventions in population groups in other disciplines.

If you have a fair degree of familiarity with data science, you may have spotted the similarity. This is how data scientists work, too: first breaking down the parameters that influence an outcome, then varying each in isolation to see how the outcome changes. It also looks a lot like A/B testing.

That all sounds like a very solid and scientific approach to conducting medicine. And since Cochrane is built around these principles, you may be wondering: What could be controversial around that? Hint: The usual -- methodological issues and bias.

The Cochrane controversy: garbage in, garbage out

Scales has written an analysis of what he dubs the Cochrane controversy. Cochrane relies on what is widely seen as the highest-quality evidence: RCTs published in peer-reviewed journals. However, some people argue that RCT data are often biased, both in individual RCT instances and in the fact that most of them come from industry-funded sources.

As Scales explained, this is a long-standing internal debate at Cochrane. The name that's most associated with this debate is Peter Gøtzsche. Gøtzsche was a member of Cochrane for a long time, and Scales described him as one of the pioneers of EBM. Gøtzsche has argued in a number of his books that there can be a lot of bias in RCTs:

"As you can imagine, if you're a pharmaceutical company and you're trying to show that your drug works, there's a number of slight tricks that you can do that are within the rules of RCTs, but are handicapping your drug to try to make it look better. Sometimes, it's whether or not and how you do the blinding. Or other things, such as what control you pick," Scales said.

Scales added: "Since RCTs have become kind of the top of the pyramid of evidence, there are many interests that have a lot of money to gain by, I wouldn't say falsifying, because they're doing these studies honestly, but just putting a little bit of a finger on the scale to try to make sure that whatever it is that they're working on, comes up with positive results."


Peter Gøtzsche has harshly criticized Cochrane's integrity, resulting in his expulsion from Cochrane in 2018, after having been elected to the Governing Board in 2017.

This, Scales added, is the kind of thing that Gøtzsche has railed against. Gøtzsche noticed that the proportion of RCTs out there and the evidence supported by pharmaceutical or other kinds of private interests has been growing, to the point where he argues that much of the evidence that Cochrane ends up using is essentially coming from what could very easily be biased sources.

Cochrane ostensibly just takes the evidence that is out there, does reviews, and draws conclusions. But if the evidence is biased, Scales noted, then, we have a case of garbage in, garbage out. This is a well-known principle in data science -- insights can only be as good as the data used to derive them.

"People like Peter Gøtzsche are worried that if the best evidence we have is tainted by bias, then it's possibly garbage. And then Cochrane is possibly only taking that garbage and putting it through their machine of a non-biased systematic review, and therefore cleaning the garbage and making it look like it's perhaps better than it is," Scales said.

Cleaning the garbage

That sounds problematic. So, the multi-billion-dollar question is: Can anything be done about it? Scales said he doesn't think there's a single person at Cochrane who doesn't recognize this is a potential issue. It's more a question of what to do about it, and a consensus does not exist. Scales referred to data science experience to establish that weeding bias out of data is extremely difficult.

"You essentially have to pick which biases you want. Or at least try to be as transparent as possible about what biases might be there, or make the data and metadata as transparent as possible, so other people can look through it to decide what the biases are," Scales said.

There are several suggested solutions along these lines, Scales added. Some people suggest that more public money be put into RCTs, because they're essentially a public good. One way to reduce bias is to make sure that unbiased studies are set up in the first place, and using public money to fund them could help ensure that no one particular interest is overrepresented.

Others point out that RCTs can be enormous multi-year undertakings that get summarized in what's often an eight-page journal article, so many important details and potential biases are left out. Registries hosting all of the information from these trials would make it possible to dig into the weeds of the original raw data and decide whether any additional biases are present.


Garbage in, garbage out in data science means your insights can only be as good as your data. If the data is biased, insights will follow suit.

Last but not least, others are even suggesting revising what we consider as evidence. The answer might not necessarily be to double down on RCTs as much as to recognize when RCTs make sense, and when some other type of evidence gathering needs to be done. For example, in complex interventions where it's impossible to control for all of the potential confounders.

Medicine may have made strides towards becoming evidence-based, but even evidence-based medicine seems to carry the baggage that comes with being data-driven: Results are only as good as the raw data and the methodologies used to interpret them, and bias creeps in. Could more training in data management be beneficial for people working in medicine?

Scales thinks they should get:

"Enough training to know what we don't know. A lot of times what I see is, we don't know what we don't know. A lot of physicians are getting an education, a lot of scientists are getting an education, but there is an assumption that, oh, if we just take a couple of courses in statistics, then we can do our own statistics for these papers.

And we're often applying statistical tests that we shouldn't be applying in certain situations, not digging deeply enough into the data to be able to describe what biases are there. That's sometimes beyond our expertise, but we need to be collaborating with people who can help us do that. Because otherwise, garbage in, garbage out, and a lot of what ends up in medical journals can sometimes be of poor quality." 

In the second part of the article, we will discuss data provenance, metadata, and context for medical data, as well as predictive modeling, COVID-19 and commercial influence in health, and the role of the World Health Organization.