In 1984, the New England Journal of Medicine (NEJM) began requiring authors of research papers to disclose financial relationships with the pharmaceutical or device industry. The policy was controversial then, and even a decade later still faced criticism, with noted scholars charging that it “thwarts the principle that a work should be judged solely on its merits.”

Though every respectable scientific journal has now adopted such a policy, this critical view has received new attention in a three-part series on conflicts of interest appearing in NEJM, again asking whether disclosures “foster an ad hominem approach to evaluating science.” Three former NEJM editors fired back in the pages of The BMJ, calling this a “seriously flawed and inflammatory attack” on a longstanding consensus about conflicts of interest. Who is right?

It is tempting to answer with heated rhetoric about “industry greed” and the “taint” of industry money, or with a romantic account of the “life-saving innovations” funded by industry. But both arguments miss the point. The question should be simply whether disclosures will support physician decision-making and ultimately enhance patient health outcomes. And more fundamentally, whether there are better models for funding science.

The Power Of Disclosure

Fortunately, in the intervening decades, we have come to understand two key facts that inform this debate. First, we have accumulated evidence about whether and how disclosures work to inform decision-makers.

My own research with Aaron Kesselheim, Susannah Rose, and colleagues has shown that when physicians see industry funding disclosures on biomedical journal abstracts, they take the research with a grain of salt and are hesitant to prescribe on the basis of such studies, even if the studies otherwise appear methodologically rigorous. Disclosures may be most powerful when it is clear that they are mandatory—as in a journal policy—rather than voluntarily provided as a measure of goodwill.

Second, systematic reviews have found that industry funding of science may make it less reliable. In particular, a 2012 Cochrane review of 48 papers found that industry-sponsored studies more often had favorable efficacy and harms results, compared to independently funded research. While there may be benign theories for that disparity, another explanation is harder to dismiss: the conclusions of industry-sponsored studies more often failed to agree with the quantitative results reported in those very same studies.

The Cochrane review suggested that “industry sponsored studies are biased in favor of the sponsor’s products.” If this is true, then physician-readers may be quite reasonable to harbor skepticism about industry-funded research. This is not to say that industry-funded research is junk; only that it appears to be biased in the aggregate and should be evaluated accordingly.

Still, this notion is deeply subversive, because so much of what we think we know about drugs and devices is based on industry-funded research. More than a decade ago, it could be said that “70 percent of the money for clinical drug trials in the United States comes from industry.” Since then, as National Institutes of Health (NIH) funding has failed to keep pace with inflation, industry-funded research necessarily fills the pages of all the major biomedical journals, and thereby shapes the standard of care for medicine.

Indeed, when we published the randomized study showing that physicians discount industry-funded science, NEJM published an editorial highlighting the study, but with the title “Believe the Data.” The editor, Jeffrey Drazen, argued that physicians were wrong to discount industry-funded research; they could rely on the high editorial standards of the journal and the peer review process to quash any bias.

Without peer review—and other reforms such as trial registration and open data policies—the problem would be much worse. But there is no reason to believe that we can rest on those laurels. After all, the Cochrane analysis covered peer-reviewed articles, and found bias nonetheless.

The Mechanisms For Bias

The peer review process begins only after a study is complete. Yet, long before reviewers see a paper, scientists have already made innumerable discretionary choices including:

  • what population is recruited and how;
  • who is excluded or allowed to drop out;
  • what dose is tested;
  • how and by whom it is administered;
  • what outcomes will be tested;
  • when, how, and by whom they are measured;
  • what other data is collected;
  • whether and how blinding is applied to patients, treaters, raters, and analysts;
  • whether the success of blinding is evaluated and, if so, reported;
  • which and how many statistical tests and models are used;
  • which findings are reported in the body, in the supplemental materials, or not at all;
  • and ultimately whether, when, and where to submit the paper for publication.

In each of these discretionary choices, the scientist has a rational incentive to select the option that is most likely to produce favorable results for the sponsor and thereby lay the foundation for subsequent financial support. Even if a peer reviewer notices one such choice and imagines a better alternative that could have been made instead, only rarely will that observation make the work unpublishable in the peer-reviewed literature.

And if the paper is on an important ground-breaking topic, it may still be quickly published in a top journal. On the basis of these and other problems, John Ioannidis has provocatively argued that, “most published research findings are false.”

Admittedly, aside from industry funding, there are other vectors of bias. Psychologically, we tend to do things in ways that confirm our prior beliefs. Scientists also have a rational interest in producing papers that will make a big splash. Some may even have ideological commitments that influence the design of research. Still, changing the subject to these other biases does not solve the problem of commercial bias.

What’s Next?

To address this problem, we should take two important steps:

Strengthen Disclosure Policies

First, we need to strengthen, not weaken, disclosure policies. One problem is that the physicians who rely upon biomedical science abstracts to inform their practices may never see the long detailed disclosures that appear in fine print at the end of the article.

To have the needed effect on physician decision-making, these disclosures need to be summarized and incorporated into the abstracts themselves, as a few journals have begun doing. If other journals fail to rise to this challenge, physician-readers may have to find technological solutions or information aggregators that can make such disclosures more easily accessible. I am working with a team of Harvard and Massachusetts Institute of Technology (MIT) software developers to do exactly that.

Explore Alternative Funding Mechanisms

We also need to solve the underlying bias problem, to restore trust in science, and thereby ensure that truly valuable innovations can gain the proof and quick uptake they deserve. For this purpose, we need to explore alternative funding mechanisms for biomedical science. When one steps back from our current practices, it should appear rather odd that we rely on companies to test the safety and efficacy of their own products. It would be as if a litigant were allowed to choose and fund its own judge, or an athlete to hire her own referee.

Somebody has to pay for the science, and those costs will ultimately be borne by patients and taxpayers, regardless of how the money is routed in the short run. This recognition creates some space for innovation, and there is a range of proposals, helpfully summarized by Marc Rodwin. The most profound reform would be one where a federal agency (whether NIH, the Food and Drug Administration (FDA), or a new one) actually performs the clinical trials, or contracts with independent scientists to do so.

The funding model could be based on general federal revenues (like the NIH’s current budget), or a special tax on the industry (as in the Affordable Care Act), or a “user fee” model tied to sales or FDA submissions (as in the Prescription Drug User Fee Act (PDUFA), which has used industry fees to accelerate FDA review times).

Under the easiest reform, companies would still pay for individual scientific studies to test specified hypotheses about their own products, but the investigators would be selected by an intermediary and allowed to design and conduct the most rigorous study.

Responding to the demands of their physician readers, the premier biomedical journals could make such a change immediately, especially if they acted collectively. Alternatively, to protect patients, the FDA could begin requiring such independent science, to distinguish it from outright marketing.

Any of these models would help ensure that the discretionary scientific decisions are made in ways that serve health. If it turns out that a product really is safe and effective, then the science will incidentally also serve company profits.

The relationship between money and health is contingent. Rather than backtracking on the regulation of that relationship, we can move ahead wisely to align those goals more reliably.