Part of a ZDNet Special Feature: Coronavirus: Business and technology in a pandemic

Data governance and context for evidence-based medicine: Transparency and bias in COVID-19 times

In the early 90s, evidence-based medicine emerged to make medicine more data-driven. Three decades later, we have more data, but not enough context, or transparency.



Bias, lack of transparency and context, one-size-fits-all approaches. These are some key issues that emerged as we examined the field of medicine with a data science lens, attempting to gain insights into the inner workings of the medical industry.

In the push towards a COVID-19 vaccine, understanding the process through which the medical industry works is paramount to establishing a more informed assessment of the situation. We continue the conversation with David Scales, Critica chief medical officer, assistant professor of medicine at Weill Cornell Medical College, and a PhD in sociology.

Critica is a small NGO aiming to revolutionize the role of science in making rational health decisions. The conversation with Scales touched upon evidence-based medicine (EBM) and randomized controlled trials (RCTs) as the main means via which medical research is conducted, and Cochrane as the main access point for data generated via this process.

Data provenance: Know thy data

A number of people, including Peter Gøtzsche, who was expelled from Cochrane, argue that RCTs can carry a lot of bias. This is largely because the vast majority of RCT data comes from pharmaceutical companies, creating a conflict of interest. If aggregators like Cochrane do not validate the raw data they offer access to, they may effectively be whitewashing it.

Case in point: Surgisphere. What was initially referred to as the most influential COVID-19-related research to date was called into question due to a lack of transparency regarding the origin and trustworthiness of its data. The research used data sourced from Surgisphere, a startup claiming to operate as a data broker providing access to data from hospitals worldwide.

However, it is not clear whether that data is genuine, or whether it was acquired transparently. As a result, the research findings were called into question, and related decisions made by the WHO were reversed. In Scales' opinion, researchers have a responsibility to verify the source of the data they use. He noted that this can be challenging, but due diligence is needed:


COVID-19 has pushed the limits in many ways, including shedding light on data practices in the medical industry.


"I don't think it's people can abdicate that responsibility by just saying that they're an aggregator of data. The quality of data is extremely important, knowing the data provenance is extremely important, and your responsibility as a researcher. 

Researchers have to sign a document saying that they have examined the quality of the work they are submitting to a journal. So it's hard for me to understand how people can sign that document when they're submitting to a journal without doing the necessary due diligence, given a lot of the controversies that we're seeing out there about data provenance."

Over-reliance on RCTs may be part of the problem. RCTs can be enormous multi-year undertakings, summarized in what's often an eight-page journal article. Many important details and potential biases are left out. A way to remedy that could be using registries hosting all the information and raw data from these trials.

Some people suggest that more public money should be put into RCTs because they're essentially a public good. One way to reduce bias is to ensure studies are set up without a vested interest in the outcome. Using public money to fund those studies could help ensure there is not one particular interest being represented.

Others are even suggesting revising what we consider as evidence. The answer might not necessarily be to double down on RCTs as much as to recognize when RCTs make sense, and when some other type of evidence gathering needs to be done.

Context is key

These suggestions sound interesting. We could not help noticing, however, that they seem to imply a radical departure from the status quo. Scales concurred, and went on to explain why it's important to go back to the beginning of the EBM movement:

"I'm thinking about where things stood in 1992, where there wasn't necessarily a good framework for thinking through how to make some of these decisions and what evidence to use. We now have evidence hierarchies. I think the problem is that there's been a lot of unintended consequences from setting up those evidence hierarchies, and putting so much weight on RCTs."

Scales noted that he often sees RCTs being used in situations that don't really lend themselves to an RCT. He cited hotspotting, a concept used in crime statistics, as an example. The idea is to use data to pinpoint locations where crime has been the highest. This has been applied to medicine in situations like trying to find places where healthcare spending is the highest.

A group that works in Camden, New Jersey, used hotspotting to put extra resources toward people considered to be the highest utilizers of healthcare. The idea was that putting more resources into helping these people could keep them healthier, and end up costing less.
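At its core, the hotspotting approach the Camden group applied amounts to ranking patients by spending and targeting the top slice. A minimal sketch of that idea, with invented patient IDs and figures, not the group's actual methodology:

```python
# Hypothetical sketch of "hotspotting": rank patients by healthcare
# spending and target extra resources at the highest utilizers.
# All names and figures are invented for illustration.

def top_utilizers(spending_by_patient, fraction=0.05):
    """Return the top `fraction` of patients by annual spending."""
    ranked = sorted(spending_by_patient.items(),
                    key=lambda item: item[1], reverse=True)
    cutoff = max(1, int(len(ranked) * fraction))
    return ranked[:cutoff]

spending = {"p01": 1200, "p02": 98000, "p03": 4300,
            "p04": 150, "p05": 76000, "p06": 900}

# Target roughly the top third of utilizers in this tiny example
print(top_utilizers(spending, fraction=0.34))
```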

A labor-intensive and expensive program of targeting these hotspots was created, and an RCT was set up. Some people would get an extra intervention that was trying to help coordinate them to extra social services, others would not. Results showed that there was no difference between the two groups:

"That's one of those things where it's easy to think that, Oh, well, I guess this intervention didn't work. We shouldn't put our money into it. But context is often key. The question we often need to be asking is whether or not a RCT really can control all of the variables.

When providing coordination to other services for patients with a lot of complex social needs, how well that program works depends on the other services those people are directed to. But there wasn't much in terms of extra services to coordinate them to. Using an RCT meant testing no intervention against an intervention that didn't have the firepower to help anybody."
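The headline analysis behind such a trial boils down to comparing average outcomes across the two arms. A minimal sketch, with invented readmission figures, of how a "no difference" result looks:

```python
# Minimal sketch of an RCT's headline comparison: mean outcomes in the
# intervention arm versus the control arm. The readmission indicators
# below are invented for illustration.

def mean(xs):
    return sum(xs) / len(xs)

intervention = [1, 0, 1, 1, 0, 1, 0, 1]  # readmitted within 180 days?
control      = [1, 1, 0, 1, 0, 1, 0, 1]

effect = mean(intervention) - mean(control)
print(round(effect, 3))  # near zero: "no difference between the two groups"
```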

Data governance: Adding metadata and context

Scales went on to add that some people in economics who advocate for RCTs in complex social situations have been called "Randomistas," suggesting that this is a political ideology they cling to, despite the fact that there are a lot of confounding variables that can't be controlled for. So people are starting to talk about the "tyranny of the RCT."

The research methodology should fit the question, argued Scales, citing Trisha Greenhalgh at Oxford as someone who "is on the right track, because she talks about other instances where different types of empirical studies are warranted."

In a recent article, Greenhalgh examined public health measures related to COVID-19, asking whether masks, hand washing, social distancing, or wearing eyewear work. Scales thinks it would make a lot of sense to test those in an RCT, but this can't be done while trying to control the spread of a pandemic:


Testing whether preventive measures work during a pandemic is not always feasible, so other methods may be called for.

"In this situation, time is of essence. We might be limited in what we can do. Sometimes we need to draw in other types of evidence. We need to bring in some narrative evidence. In a case like this, I think modeling is very important, because that is sometimes the closest approximation we could get to a some sort of trial within the timeframe we would need to be able to implement a lot of these public health measures.

I do a lot of qualitative work. I often talk about how quantitative data is important and can provide a lot of insights and raise a lot of questions, and qualitative data can be used to help extract the context. I think the combination of quantitative and qualitative data is extremely important."
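As a rough illustration of the kind of modeling Scales refers to, a toy SIR epidemic model can compare an outbreak's course with and without a transmission-reducing measure. Everything here, parameters included, is invented for illustration, not any model actually used in the pandemic:

```python
# Toy SIR model: simulate an epidemic's peak with and without a measure
# (e.g. masks) that reduces the transmission rate beta. All parameters
# are invented for illustration.

def sir_peak_infected(beta, gamma=0.1, population=1_000_000, days=365):
    """Run a discrete-time SIR simulation and return the peak number infected."""
    s, i, r = population - 1.0, 1.0, 0.0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s, i, r = s - new_infections, i + new_infections - new_recoveries, r + new_recoveries
        peak = max(peak, i)
    return peak

baseline = sir_peak_infected(beta=0.3)        # no intervention
with_measure = sir_peak_infected(beta=0.18)   # transmission reduced by 40%
print(f"peak without measure: {baseline:,.0f}")
print(f"peak with measure:    {with_measure:,.0f}")
```

Even this crude sketch shows why modeling can stand in for an infeasible trial: the relative effect of an intervention can be estimated in seconds rather than months.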

Again, that cross-checks with best practices in data science, or perhaps more precisely in this case, data governance. In data governance parlance, we would call that adding metadata and context to datasets.
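In code, "adding metadata and context" could look like refusing to let a dataset travel without a provenance record. The field names below are invented for illustration, not any particular metadata standard:

```python
# Sketch of dataset-level metadata: every dataset carries a provenance
# record describing who collected it, how, and what its known biases are.
# Field names are hypothetical, not a specific standard.

from dataclasses import dataclass, field

@dataclass
class Provenance:
    source: str                 # who collected the data
    collected: str              # when it was collected
    method: str                 # how (e.g. "RCT", "EHR extract", "survey")
    known_biases: list = field(default_factory=list)

@dataclass
class Dataset:
    records: list
    provenance: Provenance     # a Dataset cannot exist without one

ds = Dataset(
    records=[{"patient": "p01", "outcome": 1}],
    provenance=Provenance(
        source="Example Hospital Network",
        collected="2020-05",
        method="EHR extract",
        known_biases=["single region", "hospitalized patients only"],
    ),
)
print(ds.provenance.method)
```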

Scales agreed that the two need to work hand in hand. Unfortunately, there is currently so much emphasis on the quantitative that what we're getting is an overabundance of quantitative data without sufficient context, making it hard to see the biases, challenges, and problems that many of these RCTs might have.

Predictive models and research parasites

Predictive models resurfaced the issue of transparency. For example, the model on which many decisions were based earlier in the pandemic was the one created by Imperial College's Neil Ferguson. This model was recently scrutinized on dimensions such as software quality, maintainability, explainability, and transparency, and it scored pretty low on all of them.

People have suggested that since RCTs concern public health, they should be publicly funded and belong in the public domain. Couldn't that line of reasoning be extended to predictive models? If decisions affecting public health are made based on models, shouldn't those models be open source, transparent, and open to review?

The incentive for most researchers is to keep things such as a model or an RCT private, said Scales, because they see that as advancing their career. But people familiar with open source have seen how it makes things better for everybody. The more transparency there is, the more robust science becomes.


Professor Neil Ferguson of Imperial College. His COVID-19 predictive model has been sharply criticized, leading to a discussion on whether such models should be in the public domain.

This is exemplified by a 2018 research paper called "Many analysts, one data set." The authors took one dataset and asked 61 different teams to analyze it. What they got back was 61 different ways of analyzing the data, and a wide range of results.
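The effect the paper documents can be illustrated in miniature: the same dataset, analyzed with different but defensible choices, yields a spread of answers. The numbers below are invented:

```python
# One dataset, several defensible analysis choices, a spread of results.
# The data, including the outlier, is invented for illustration.

data = [2.0, 3.5, 100.0, 2.5, 3.0, 2.8]  # one dataset, with an outlier

def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    s = sorted(xs)
    n = len(s)
    return (s[n // 2 - 1] + s[n // 2]) / 2 if n % 2 == 0 else s[n // 2]

def trimmed_mean(xs):
    s = sorted(xs)
    return mean(s[1:-1])  # drop the minimum and maximum

analyses = {"mean": mean(data), "median": median(data),
            "trimmed mean": trimmed_mean(data)}
for name, result in analyses.items():
    print(f"{name}: {result:.2f}")
```

Each choice is reasonable in isolation, yet the "mean" analyst and the "median" analyst would report very different headline numbers from the same data.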

But for that to become the norm, a cultural shift is needed. Researchers would need to get credit for secondary analysis. People who create datasets would need to get more credit for them than for the papers that come with them. There was a debate about this in the New England Journal of Medicine, where the editor in chief called a group of people "research parasites."

His point was that, if someone did a clinical trial, and made the data available publicly immediately, there would be others that would "steal" the data and do analyses before the team that published the data got their just reward by publishing their research. This view did not go down well with the audience, so perhaps something is changing.

COVID-19 and commercial influence in health: From transparency to independence

Critica is not the only one to suggest that a change of paradigm is needed. The BMJ is one of the most respected peer-reviewed medical journals, and a self-proclaimed champion for patient-centered, evidence-based, and independent medicine. The BMJ just published a special issue titled "Commercial influence in health: from transparency to independence."

The issue includes Editorial, Analysis, Research, and Opinion articles by a number of scientists. One article, titled "Commercial influence and COVID-19," is co-authored by BMJ's research editor and focuses on Remdesivir, an antiviral drug made by US company Gilead.

The article elaborates on how Remdesivir went from being unapproved at the start of the pandemic to being touted as the "standard of care" for COVID-19. As the article details, published results on Remdesivir were problematic in several ways, including being heavily biased by Gilead's interference.

Jack Gorman MD, Critica president and co-founder, noted that pharmaceutical companies have some of the best scientists in the world, who, if left to their own devices, would tilt toward performing objective science -- but of course they aren't.


Despite taking a turn towards more data-driven practices in the early 90s, transparency in medicine is not a given.

Gorman described how influence starts with the drug company dictating the kinds of drugs they are looking for and making decisions about which new molecules to pursue based on their likely commercial viability. Then there is the problem of a regulatory agency, the FDA, that is overwhelmed because it lacks adequate scientific staff of its own.

Gorman noted many articles have been written lately showing that in general the standards the FDA uses to decide on a new drug approval have slipped in recent years. He also pointed out the issue of press releases and marketing and how the media handles that, and preprints and how they are touted and received:

"Remdesivir is indeed a great example of some of this. I don't know its early history (i.e. from the time it was discovered in the laboratory), but while it does seem to be an advance in anti-viral therapeutics for COVID-19, it may not be the "cure" we thought it was initially based on early reports.

Independent scientists and the public should be more involved in setting priorities for drug development and discovery, drugs should cost less, and journalists should be taught how to avoid hyping up stories about potential new therapeutics based on press releases and preprints."

Who can you trust?

Re-posing the multi-billion-dollar question, then: how can we move forward? Where could this discussion be had, and who could move this agenda forward? Perhaps that could be a task for the World Health Organization. Unfortunately, the WHO has its own issues, too. As Scales pointed out, a lot is going on at the WHO beyond simply what evidence it uses:

"The best way that I can describe a lot of the biases that the WHO runs into is..You know, my PhD dissertation was actually looking pretty closely at the WHO. One of the key things that I found interviewing one of my informants there was, he said, "Our clients are our member states."

The WHO functions not necessarily to improve the health of individual people around the world, but to serve its clients, which are its member states. And so a lot of what the WHO does, and a lot of how it reacts is not necessarily based on the best evidence.


The World Health Organization has its own controversies.

It is a highly rational organization, but that rationality is often based on the clout of different member states and what those different member states want. And so the industry has made its way into the WHO, through governments such as the US that promote a lot of collaborations with industry. But this has also created a lot of consternation, which you might have seen.

There have been a number of mechanisms where this has become a dividing line. One of the best examples was in 2005, and for a few years after that: Indonesia refused to share influenza viruses, because the influenza viruses that Indonesia shared with a big global network became patented by Australian pharmaceutical companies. Needless to say, Indonesia was reluctant to share things that it might then not be able to afford.

And so they stopped sharing, and it created several dividing lines that essentially come down to what the role of industry is in the work that the WHO does. So it's not just the trials; it's everything from how your influenza vaccine gets made to how much sugar the WHO recommends should be in an average person's diet."