States, patients, and voters are wrestling with the pros and cons of dramatic changes in public health insurance coverage, including extending, maintaining, or rolling back Medicaid expansion under the Affordable Care Act (Obamacare) — an often emotional topic of debate. The stories that are told about the effectiveness—or lack thereof—of coverage in improving health and health care usually relate compelling personal experiences, putting a human face on an otherwise abstract argument.

Policies are not enacted in the abstract; they affect real people’s lives, and we should all be concerned with how policy changes help or harm them. Unfortunately, as moving as those stories can be, they can just as easily lead us in the wrong direction as the right one. What we need is evidence, not anecdote.

Medicaid Coverage and Care Use

A key question in the Medicaid debate is whether expanded coverage reduces the use of the Emergency Department (ED) — getting people into the doctor’s office earlier, improving health, and reducing health care spending. Solid evidence is very hard to come by, but we had an opportunity to evaluate the impact of expanding Medicaid using scientifically rigorous methods rarely available in answering public policy questions.

In 2008, Oregon used a lottery to allocate a limited number of Medicaid slots — generating, in essence, a randomized controlled trial of Medicaid. This let us gauge the effects of the program itself, isolated from the usual confounding factors, and allowed us to collect thousands of stories—otherwise known as data!—about people’s experiences on and off of Medicaid.

We found that, contrary to many people’s expectations, Medicaid increased use of the ED by 40 percent. New research tells us that this increase persisted for at least two years, and that Medicaid did not make patients more likely to substitute a visit to the doctor for one to the ED.

In addition to this evidence, gleaned from a randomized evaluation of the experiences of tens of thousands of uninsured and newly insured Oregonians, we also conducted hundreds of interviews to learn how people felt that having Medicaid—or not—affected their lives. These individual narratives were invaluable for deepening our understanding of the experiences of those in the study. But they also underscored how easy it is, in the absence of solid evidence, to find an anecdote to match any “answer.”

One newly insured patient told us, for example, “Without coverage I wouldn’t have gone to ER those nights I was in crisis because I was already in crisis, and the idea of… the bills I would have had… just would have been too much for me to take on mentally or financially,” a story consistent with our overall finding. But an uninsured patient told us, “When I was uninsured, unless something happened where I had to go to the hospital, then I’d just go the emergency room and deal with it. Emergency rooms, from what I understand, they can never turn you away,” painting a vivid picture that, while true for this patient, does not happen to be representative of most people’s experiences.

Conflicting Anecdotes About Medicaid

This is an all-too-common situation. The Oregon experiment has produced a wealth of data and rigorous evidence on the impact of Medicaid on people’s lives. We found that Medicaid increases health care use, improves financial security, improves self-reported health, and reduces rates of depression. For nearly every outcome of interest, we heard stories of experiences that matched the average effect of the expansion on the newly covered population, as well as compelling stories that did not.

For example, we found that overall, Medicaid increased doctor’s office visits by about 50 percent — but not that every single person was able to see the doctor. One newly insured patient told us, “[Medicaid has] no doctors that are actually taking new patients,” while another told us that Medicaid gave him new access to care, when previously he “would not go to see the doctor because of the cost.” These stories are both true — but it’s impossible to tell which is more typical without more systematic data. While it’s important to realize that some patients on Medicaid may still have had trouble finding a doctor, it’s crucial to know that, overall, the program dramatically increased access to physicians.

Indeed, without that systematic information, policymakers and the public might be justifiably unclear about the effects of the program on access, with very different opinions based on their news sources. For example, in 2013, CNN ran a story about how life-saving Medicaid could be. It featured Bettina Cox, a woman in Texas whose health was declining dramatically and who didn’t have insurance. She said that because she was uninsured, she had to let a tumor go unchecked for months and it grew to the size of a grapefruit. Once she was finally diagnosed with cervical cancer, she qualified for Medicaid, which, she said, saved her life. She described how if she had had Medicaid before the diagnosis, she would have seen a doctor much sooner.

But another woman, interviewed in 2011 by The New York Times, told a very different story about Medicaid. Nicole Dardeau, a middle-aged woman from Louisiana, described how she couldn’t work because of three herniated discs in her neck. She had Medicaid but couldn’t get treatment for her neck because she couldn’t find a surgeon who would accept Medicaid. She described Medicaid as a “useless piece of plastic.”

Anecdotes Cannot Substitute for Rigorous Research

It’s tempting to think that we can recognize which anecdotes are most representative of the “real story” when we hear them — but we really can’t. We might be more likely to believe the story that is more poignant. Or maybe the one that lines up with our prior beliefs. If you’d like to try your hand, we have compiled pairs of individual narratives, each of which tells a very different story. These are all true stories from Oregonians who participated in the Medicaid lottery, describing their experience of how Medicaid affected their health care, financial security, and health — but only half are consistent with the prevailing experience of most people.

This is why, wherever possible, we need to rely on evidence from rigorous research—rather than compelling anecdotes—to get an accurate assessment of a policy’s effects. In medical research, randomized controlled trials have long been the standard, but such rigorous methods are too rarely used to answer major health policy questions. Of course, the Oregon study reflects the experiences of Medicaid expansion in only one state and only one program, so additional rigorous studies would be highly valuable.

The Oregon example highlights that it is possible to use randomized evaluations to investigate important health policy questions. As researchers, we need to do a better job of providing the public with that evidence. Policymakers need to be receptive both to partnerships in building the evidence and to using the evidence to make better-informed policy decisions. And the media needs to resist the urge to allow unsubstantiated anecdotes to stand in for real evidence — despite the fact that readers may be drawn in by anecdotes.

Personal narratives can yield vital insight into how policies affect people’s lives, humanizing the stories behind the numbers and suggesting important areas for further research. Dismissing these compelling stories as “mere anecdotes” in favor of more rigorous—but impersonal—data analysis can seem heartless. But making policy based on unrepresentative anecdotes can inflict much greater harm on many more people. We hope that use of rigorous evidence will become the norm rather than the exception in health policy.