Editor’s note: This post is part of a series stemming from the Third Annual Health Law Year in P/Review event held at Harvard Law School on Friday, January 30, 2015. The conference brought together leading experts to review major developments in health law over the previous year, and preview what is to come. A full agenda and links to video recordings of the panels are here.

Increasingly, health systems are studying their own practices in order to improve the quality of care they deliver. But many organizations do not know whether the data collection they undertake at the point of care constitutes research, and if so, whether it requires informed consent. Further, many investigators report that institutional review boards (IRBs) place unreasonable burdens on learning activities, impeding the systematic inquiry that is needed to enhance care.

As a result, some commentators have argued that our regulatory framework for protecting human research participants needs a dramatic overhaul. Yet it is not the regulations that must change.

Instead, IRBs should educate themselves about quality improvement and comparative effectiveness research, exempt studies that qualify for exemption, and grant waivers of informed consent when appropriate. At the Department of Health and Human Services, the Office for Human Research Protections (OHRP) must clarify the regulations that bear on this type of research and provide better guidance on how IRBs should oversee it, including illustrative case studies.

A Learning Orientation

It has been eight years since the Institute of Medicine called on health care systems to become “learning health systems,” capable of collecting information at the point of care in order to learn how to enhance quality of care, safety, and health outcomes. Learning health systems carry out many kinds of activities: collection of administrative data about both patients and providers, chart reviews, observational studies, quality improvement activities, clinical trials of many kinds, and, increasingly, comparative clinical effectiveness studies, in which two proven medicines, or two customary care delivery interventions, are compared to determine whether one yields better outcomes, or is less risky or less expensive, for patients as a whole and for special subpopulations.

The airline and automobile industries adopted a “learning orientation” decades ago, but the U.S. health care system has been late to the game. Recently, however, a number of factors have generated interest in learning health systems. The formation of accountable care organizations and the payment reforms initiated by the Affordable Care Act have reshaped some incentives to privilege value over volume, which, in turn, has prompted more organizational self-study and a keener interest in realizing efficiencies that reduce cost without sacrificing quality. Likewise, these new financial arrangements, based on capitated payment for a given population of patients, can stimulate health systems’ sense of responsibility for population health, an obligation that brings with it the need to collect data on an ongoing basis.

There has also been a financial boon to the development of learning health systems. Both the Center for Medicare and Medicaid Innovation (CMMI) within the Centers for Medicare and Medicaid Services (CMS) and the Patient-Centered Outcomes Research Institute (PCORI) are providing a financial stimulus through grants aimed at comparative effectiveness studies, quality improvement, and other methods for learning how to improve care. There is growing infrastructure for collecting, maintaining, and mining big datasets, including large databases of aggregated health care claims. Finally, major national initiatives, such as the Research on Care Community, spearheaded by the Association of American Medical Colleges, are providing support and knowledge-sharing among more than 300 health systems eager to study their own practices.

Is There A Distinction Between Research And Treatment?

Despite this progress, there are many challenges. One hurdle is great uncertainty about the kind of ethical oversight required for the sorts of learning activities and data collection that learning health systems undertake. Is a given quality improvement or data collection activity “research”? When is an activity exempt from IRB oversight? If it is not exempt, is informed consent required?

This uncertainty has persisted for quite some time with respect to quality improvement (QI) activities. In 2006, bioethicists at The Hastings Center recommended that IRBs create sub-committees with specialized training to oversee QI activities. And the U.S. Department of Veterans Affairs (VA) created a separate oversight system for what it calls nonresearch health care operations activities. Under both the VA and Hastings models, quality improvement activities require oversight, but they are not considered research and do not go to IRBs.

In 2013, a team at Johns Hopkins University argued that we need a whole new way of thinking about oversight of learning activities that collect patient data at the point of care. Specifically, they called into question the bedrock conception upon which our national framework for protecting human research participants is based, namely, that it is possible to distinguish research from treatment. This distinction was first articulated in the Belmont Report and then codified in the Common Rule, part of the Code of Federal Regulations, which has been adopted by 17 federal agencies and has guided our approach to human research participant protection for nearly 40 years.

A key concept in both Belmont and the Common Rule is that there is a reliable distinction between treatment and research. But the Hopkins team argued that the distinction is no longer tenable in the context of learning health systems, where research is being done on treatment itself.

Further, they asserted that the “faulty research-practice distinction” leads to under-protection of patients from the arguably greater risks sometimes inherent in clinical practice, and to over-protection of persons in low-risk comparative effectiveness studies and other low-risk learning activities. The over-use of IRBs for such activities wastes time and money, creates confusion, and unnecessarily burdens IRBs and investigators. Instead, they called for a new framework based on the level of risk and burden posed by a given data collection activity, not on whether an activity is seen as treatment or research.

The Hopkins analysis is cogent and far-sighted: it will be increasingly difficult to reliably distinguish research from treatment, and many comparative effectiveness studies and other learning activities will be low risk. Further, as the Hopkins team points out, the emphasis should always be on appraisal of the level of risk, not on whether something is characterized as “research” or “treatment.”

But do we need a full-fledged overhaul of our regulatory framework? My answer is no. The Common Rule gives us the basic framework needed for these kinds of activities, and it can work, if other players—such as IRBs—step up.

A Case Study: SUPPORT

Let’s take a closer look at how the question of ethical oversight recently played out in one very high-profile comparative effectiveness research study that collected data at the point of patient care. The Surfactant, Positive Pressure, and Oxygenation Randomized Trial (SUPPORT) was a National Institutes of Health (NIH)-funded, multi-site clinical trial in which premature neonates were randomly assigned to receive either a higher or lower level of oxygen saturation, in order to determine whether vision impairment, a known risk of prematurity, could be minimized at the lower oxygenation levels. Importantly, both the high and low levels of oxygen fell within a range that is widely accepted in neonatal practice.

In 2013, OHRP undertook a “for cause” compliance oversight evaluation, which means that a third party had contacted OHRP alleging that the SUPPORT study was out of compliance with the regulations governing research oversight.

As a result of its evaluation, OHRP faulted the study’s consent documents because they did not mention the increased risk of vision impairment at the high oxygen setting or the potential for an increased risk of death in the lower oxygen group. These were, as OHRP put it, “foreseeable risks” of being randomized to one or the other end of the oxygen saturation continuum, and they should have been disclosed in the consent documents.

OHRP’s determination that the consent documents were faulty ignited a firestorm and split the bioethics community down the middle. Defenders of the SUPPORT study insisted that the investigators did not introduce any additional risks beyond those faced by all premature newborns requiring oxygen, and that participating newborns received oxygen settings consonant with customary care. On this view, it was not the study that put the newborns at risk, but their prematurity. An equal number of bioethicists published an opposing editorial, claiming that there were foreseeable risks that should have been disclosed.

More than a year later, views had become further polarized, with one pair of commentators claiming that the SUPPORT trial offered no prospect of benefit and therefore should have gone to the Department of Health and Human Services for review, because pediatric trials with no prospect of benefit to participants are beyond the purview of local IRBs. Others found that claim patently wrong but did believe that OHRP was correct in finding the consent documents deficient.

The SUPPORT study raises an important question: Should randomization always require consent? Reasonable people disagree on this point, but my own view is that we should begin from the premise that consent should be sought, not necessarily because randomization introduces risks above and beyond what patients outside the study would experience, but because patients (and their families) may have preferences and beliefs that would lead them to care about which arm they are randomized into. Further, even absent specific patient preferences, investigators’ starting premise should be to seek consent as a means of respecting the personhood of prospective participants.

However, existing federal regulations state that consent can be waived when there is minimal risk in both arms of the study, when seeking consent would make the research impracticable, and when the differences between the arms are not likely to be meaningful to patients. For example, there would be a strong argument for waiving consent for a study that randomly assigned patients to receive medication adherence instructions either by mail or by phone.

Waivers of consent are particularly appropriate in cluster-randomized designs, in which whole units or organizations are randomly assigned to provide one kind of care delivery intervention versus another. In such studies, seeking consent is far more difficult and could render many worthy studies impossible to conduct. Seeking consent for cluster-randomized designs would also seem to be premised on the false assumption that patients can often choose their treatments, when such a choice is frequently unavailable and depends on where one’s care is provided. If two customary approaches with low inherent risk in both arms are being compared in a cluster-randomized design, IRBs should carefully consider, and often grant, consent waivers.

When it came to the need for consent in the SUPPORT trial, there was no disagreement: the investigators and IRBs believed consent was necessary and created consent documents. The disagreement was over whether it was appropriate for those documents to mention only the benefits of one arm of the study while omitting the risks in each arm.

Wherever you come out on this question, the SUPPORT controversy should not be interpreted as evidence that we need a full-scale regulatory overhaul. Whether you believe, like the study’s supporters, that there were no additional risks inherent in the study, or, like its critics, that there were, the disagreement is over what constitutes a foreseeable risk, not over the regulatory framework’s insistence that foreseeable risks be disclosed.

OHRP’s letter of determination made a modest demand of the investigators. They were asked simply to state what changes they would put in place in the future to ensure more complete disclosure of risks and benefits in informed consent documents. That such a modest request provoked such extreme and varied reactions among the nation’s bioethicists says a great deal about the need to get this right.

A Delicate Balance

If we are to have more light and less heat, IRBs must educate themselves about comparative clinical effectiveness studies and quality improvement research. They must recognize—and use—their authority both to exempt some activities outright when risks are very low, and in other cases, to waive the need for consent.

IRBs should also help investigators develop consent forms that are easier to read yet disclose risks and benefits more fully. Some health systems may wish to create special IRBs or IRB sub-groups with particular expertise in comparative effectiveness and quality improvement research, or to develop wholly separate oversight mechanisms for quality improvement, as the VA has done.

Patients should be apprised of a system’s commitment to collecting data and afforded ample opportunities to understand that such systematic learning is a hallmark of excellence. For its part, OHRP should issue easy-to-understand guidance. IRBs and investigators need more than the narrative explanation OHRP has provided in draft form; it would also be helpful if OHRP developed illustrative case examples covering a range of different kinds of protocols, along with what OHRP considers a responsible approach to each.

We are at a moment in time when there are many opportunities for learning how to improve the delivery of health care in the United States. But to fully succeed, we must design and carry out ethical oversight of learning health systems that is neither too burdensome nor too laissez-faire. We can do so within existing federal regulations, but only if IRBs step up.