Long before Congress created the Health Information Technology for Economic and Clinical Health (HITECH) Act, handing health care providers $32 billion to pass along to electronic health record (EHR) vendors, plans for that windfall had been laid by health information technology (HIT) vendors, HIT enthusiasts, and friendly politicians (like Newt Gingrich).

The plans included an enormous lobbying campaign, and Congress responded obediently. Most commentators focus on the HITECH Act’s $32 billion in incentives and subsidies. But that was only seed money. The real dollars are the trillions providers have spent, and will spend, on the technology and its implementation.

Basing Policy On Weak Research

Much of the economic justification for the spending on HIT rested on a now-debunked RAND study that promised up to $100 billion in annual savings. Recently, however, in a remarkable act of ethics and honesty, RAND disclosed the earlier study’s problems: dubious data, a weak research design, and the fact that the research was subsidized by two of the larger HIT vendors (Cerner and GE).

Interestingly, in contrast, the Congressional Budget Office (CBO) and the Office of the National Coordinator for Health Information Technology (ONC), both of which touted the first RAND study, have not issued reassessments of their happy predictions; they continue to promote HIT’s cost savings and improved patient safety. While HIT should be, and absolutely is, far better than paper records, more than 30,000 studies had already failed to support such bold assertions of powerful improvements in health and efficiency. Moreover, the research designs of all but a tiny proportion of those studies were too weak to yield trustworthy conclusions, and the best of them showed few if any benefits. This brings us to the heart of our concern: the use of weak research to support less-than-effective health policies and medical treatments.

Implementation of other federal policies with questionable economic incentives and penalties has also failed to live up to expectations. These policies include paying physicians extra income for things they were already doing (e.g., taking blood pressure), setting up as-yet-unproven Accountable Care Organizations to incentivize cost savings, and charging patients with high cholesterol thousands of dollars more in health insurance premiums through Affordable Care Act–sanctioned wellness programs that do not improve chronic illness.

The common denominator here? Absent or untrustworthy evidence of treatment and policy benefits, ignorance of failures, and the possibility of patient harm. In addition, the crude application of economic incentives to change doctor and patient behaviors can backfire (e.g., changing diagnostic codes to maximize revenue or avoiding care for sick, expensive patients).

Given the renewed climb in the nation’s massive health care costs, we, as a society, deserve better. In fact, most studies of a broad array of health policies and treatments do not support cause-and-effect relationships because they suffer from faulty research designs. The result is “flip-flopping” research findings: initial studies suggest dramatic health benefits (e.g., hormone replacement therapy) that are later disproven as better studies are conducted.

Identifying Flawed Study Designs

In a recent US Centers for Disease Control and Prevention (CDC) Preventing Chronic Disease article, one of us explains how five common biases and flawed study designs are often employed to support (or defeat) research on important health policies and interventions. Each case illustrates a weak study design that cannot control for bias and contrasts it with subsequent, stronger studies that debunk the dramatic but unreliable findings.

Flawed studies have dictated treatment protocols, backed unneeded or wrong medications, halted useful medications, overstated the health benefits and cost savings of electronic health records, and grossly exaggerated the reductions in deaths from hospital safety programs. These misguided interventions have resulted in trillions of dollars spent with few demonstrated health benefits.

The CDC article is intended to help the public, policymakers, news media, and research trainees distinguish between dubious and credible findings in health care studies. It also provides a simple hierarchy of research design strength based on the ability to control for common distortions and biases.

Examples Of Strong And Weak Research Designs

The most engaging aspect of the article is that it presents research findings that at first seem reasonable but are then shown to be artifacts of faulty designs and uncontrolled common biases. For example, “healthy user bias” occurs when investigators don’t account for the fact that healthier individuals are often more health conscious and more likely to seek treatments than those who are less healthy.

This difference can make it appear that the flu vaccine reduced mortality in the elderly when it is simply the healthy user who deserves the credit. The proof? Several replications of that study method during the summer (when there is no flu, and hence no effect of flu on deaths!) found the same “effects” because vaccine recipients are already healthier and less likely to die. Yet these weak studies drove national flu vaccination policies based on the erroneous findings.
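To see how healthy user bias can manufacture an effect out of nothing, consider the minimal simulation below. It is our illustrative sketch, not the design of any study discussed in the CDC article, and every rate in it is invented: the simulated vaccine has zero effect on mortality, yet because healthier people vaccinate more often, the vaccinated group still appears protected.

```python
import random

# Illustrative sketch of healthy user bias (all rates invented).
# The "vaccine" here has NO effect on mortality, but healthier people
# are more likely to get vaccinated, so a naive comparison of death
# rates still shows a spurious "protective effect."
random.seed(0)

vaccinated_n = unvaccinated_n = 0
vaccinated_deaths = unvaccinated_deaths = 0

for _ in range(100_000):
    healthy = random.random() < 0.5              # half the cohort is healthier
    base_mortality = 0.02 if healthy else 0.08   # assumed baseline death rates
    p_vaccinate = 0.70 if healthy else 0.30      # healthier people vaccinate more
    vaccinated = random.random() < p_vaccinate
    died = random.random() < base_mortality      # vaccination never enters here
    if vaccinated:
        vaccinated_n += 1
        vaccinated_deaths += died
    else:
        unvaccinated_n += 1
        unvaccinated_deaths += died

print(f"death rate, vaccinated:   {vaccinated_deaths / vaccinated_n:.3f}")
print(f"death rate, unvaccinated: {unvaccinated_deaths / unvaccinated_n:.3f}")
# Expected result: roughly 0.038 vs. 0.062 -- the vaccinated appear about
# 40 percent less likely to die even though the simulated vaccine does
# nothing. This is healthy user bias in miniature, and it persists whether
# or not flu season is underway, just as the summer replications showed.
```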

Another common example of biased and weak research design in the CDC paper is the claim by the Institute for Healthcare Improvement (IHI) that its national hospital safety program, the “100,000 Lives Campaign,” saved over 120,000 lives. This claim was based on trends in mortality already occurring before the campaign started; that is, it rested on a weak design that could not control for prior events such as the increasing use of life-saving drugs.

We debunked the exaggerated finding by tracking 12 years of hospital mortality data before the campaign started and found no change in the already declining mortality trend. Yet widespread policy and media reports led several European countries to adopt this “successful” and expensive model of patient safety.
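The logic of that debunking can be sketched in a few lines. The numbers below are invented for illustration (not the actual 12-year data): fit the mortality trend from the pre-campaign years, extrapolate it forward, and check whether post-campaign mortality falls below what the pre-existing decline already predicts.

```python
# Illustrative pre-trend check (hypothetical numbers, not the real data):
# hospital mortality per 100 admissions was already declining steadily
# before a campaign began in 2005.
years_pre = list(range(1994, 2005))
mortality_pre = [9.0 - 0.15 * i for i in range(len(years_pre))]

# Ordinary least-squares slope and intercept computed by hand
# (no external libraries needed).
n = len(years_pre)
mean_x = sum(years_pre) / n
mean_y = sum(mortality_pre) / n
slope = sum((x - mean_x) * (y - mean_y)
            for x, y in zip(years_pre, mortality_pre)) \
        / sum((x - mean_x) ** 2 for x in years_pre)
intercept = mean_y - slope * mean_x

# Post-campaign observations that simply continue the old trend.
years_post = [2005, 2006]
mortality_post = [7.35 - 0.15 * i for i in range(len(years_post))]

for year, observed in zip(years_post, mortality_post):
    predicted = intercept + slope * year
    print(f"{year}: observed {observed:.2f}, pre-trend predicts {predicted:.2f}")
# If observed mortality matches the extrapolated pre-trend, the campaign
# gets no credit: the decline was already underway. A simple before/after
# comparison that ignores the trend would wrongly attribute the entire
# drop to the intervention.
```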

We spend a lot of money on health care and we depend on medical research for our well-being. While no research is flawless, everyone should understand the strengths and weaknesses of the studies on which we base our policies, our economy, and our lives.