
Is the scientific method seriously flawed?

Written by Larry Dignan, Contributor

The scientific method, the use of experiments to observe and test hypotheses, may be under fire due to the decline effect. The decline effect describes how findings used to establish the truth often lose their luster and become harder to replicate over time.

A New Yorker report outlines the conundrum:

The test of replicability, as it’s known, is the foundation of modern research. It’s a safeguard for the creep of subjectivity. But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts are losing their truth. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology.

In medicine, the effectiveness of antipsychotic meds is being called into question, as is that of cardiac stents and Vitamin E. Facts are eroding quickly. One analysis shows that the efficacy of antidepressants has declined as much as threefold in recent decades.

According to the New Yorker report---read in full via Amazon's Kindle---numerous fields are suffering from the decline effect. The New Yorker highlights the following issues with the scientific method.

  • Replicating an experiment and getting the exact same findings is difficult. Why? Regression to the mean. As an experiment is repeated, early statistical flukes get averaged out, so outsized initial results shrink (the simulation sketched after this list illustrates the pattern).
  • The peer review process is flawed. Peer review is ultimately tilted toward positive results.
  • Publication bias. Journals and scientists prize statistically significant findings, which pushes everyone toward positive results. Nobody wants to see a null result. Researchers end up "significance chasing," or interpreting data so it passes the statistical test of significance.
  • Money. For instance, pharmaceutical companies have little interest in publishing results that aren't favorable. Validating a hypothesis is all the more gratifying if there's financial gain to be made.
  • Selective reporting. The New Yorker notes that selective reporting isn't fraud; researchers simply make subtle omissions and misperceptions as they try to explain their results. One example cited was the testing of acupuncture. In the West, where acupuncture's effectiveness is doubted, studies unsurprisingly find it isn't all that effective. In the East, studies find its effectiveness to be higher. Scientists look for ways to confirm their preferred hypothesis.
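
The mechanism behind several of these points is easy to see in a toy simulation. The following Python sketch is purely illustrative and not from the New Yorker piece; the true effect size, sample size, and number of labs are invented numbers. It shows how publishing only the statistically significant initial studies inflates the apparent effect, which then regresses toward the true mean when the same experiment is replicated without that selection.

```python
# Toy simulation of the "decline effect": if only statistically significant
# initial studies get published, the published effect sizes overestimate the
# true effect, and faithful replications then look like the effect is "declining."
# All numbers here (true effect, sample size, number of labs) are made up for illustration.
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2      # the real underlying effect (in standard-deviation units)
SAMPLE_SIZE = 30       # participants per study
N_LABS = 2000          # independent labs running the same experiment


def run_study(true_effect: float, n: int) -> float:
    """Return the observed mean effect from one noisy study."""
    observations = [random.gauss(true_effect, 1.0) for _ in range(n)]
    return statistics.mean(observations)


def is_significant(observed: float, n: int) -> bool:
    """Crude one-sided z-test at roughly p < 0.05 (critical z of about 1.645)."""
    standard_error = 1.0 / (n ** 0.5)
    return observed / standard_error > 1.645


# Each lab runs an initial study; only the "significant" ones get published.
published_initial = []
replications = []
for _ in range(N_LABS):
    first = run_study(TRUE_EFFECT, SAMPLE_SIZE)
    if is_significant(first, SAMPLE_SIZE):
        published_initial.append(first)
        # A faithful replication of the same experiment, free of selection.
        replications.append(run_study(TRUE_EFFECT, SAMPLE_SIZE))

print(f"true effect:                   {TRUE_EFFECT:.2f}")
print(f"mean published initial effect: {statistics.mean(published_initial):.2f}")
print(f"mean replication effect:       {statistics.mean(replications):.2f}")
```

Run as-is, the published initial results average well above the true effect, while the replications cluster around it: the "decline" isn't the effect wearing off so much as the first, selected reports overstating it.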

Add it up and researchers are seeing what they want to see. The New Yorker's take makes sense: humans hate being wrong.

So what's the fix? A few scientists quoted in the New Yorker argue that more rigorous data collection would help, since experiments are often poorly designed. In addition, an open-source database where researchers detail in advance what data they are collecting and what their goals are could head off the decline effect, at least a bit.

This post was originally published on Smartplanet.com
