There is a growing consensus that the regulatory system for research is in need of reform. Established 21 years ago by the Common Rule, it has provided a rigorous environment for managing risk to research subjects and maintaining transparency.
The trigger for these regulations is a definition of research as a “systematic investigation…designed to develop or contribute to generalizable knowledge.” When this definition is satisfied, an intensive set of requirements ensues, including review, approval, and continued oversight by an Institutional Review Board (IRB); reporting requirements; the necessity for informed consent (often highly complex); and other administrative components. If projects are not “generalizable” (e.g., local hospital programmatic or quality review), they fall strictly under healthcare system purview rather than under Common Rule regulatory oversight.
The current system has a strong moral imperative and has been critical to mitigating risk for research subjects and providing transparency. However, it is burdensome and fails to take into account the considerable progress made in both the research and clinical enterprises over the last few decades: in research, technological advances in generating data on routine care, and in healthcare, much more stringent oversight.
These concerns, as well as others, have prompted numerous experts to call for change and the federal government to undertake a rulemaking process. As the director of a large research funding program, I have observed firsthand the unintended consequences of the present system and find myself in general agreement with others looking to improve it. And while reforming the research regulatory system is challenging, the means to do so already exist within our clinical care and regulatory system.
The Evolving Landscape
Much has changed since adoption of the Common Rule. First, a series of advances — the burgeoning of the Electronic Health Record, large administrative and clinically rich databases, numerous information technology advances, and a variety of inventive methodologies (e.g., site randomization, natural experiments and, within the Department of Veterans Affairs (VA), point-of-care research) — have made it possible to collect and utilize research data from ordinary care in new, efficient ways and to address large evidence gaps.
With these tools, researchers and clinicians can now address large evidence gaps in ordinary care and systems, and have made such research, including comparative effectiveness research (CER), a much larger component of the research enterprise. A major result is the development and nurturing of the Learning Healthcare System (LHS), in which both research and non-research data derived from clinical care are utilized for evidence-based improvement. The problem, however, is that these lower-risk research studies are required to undergo a review process designed for higher-risk research — one that unnecessarily delays the work, adds to expense, and may altogether preclude the discovery or implementation of important findings related to ordinary care.
The current “escape valves” for this system are “expedited” review (the IRB chair or a designated member decides, and the IRB still oversees) and “exempt” review (i.e., exempt from Common Rule requirements) for certain types of data or educational studies. However, expedited review can still be time-consuming and detail-intensive, with cost estimates about the same as full review. Expedited or exempt reviews account for a high proportion of new IRB submissions (56 percent expedited and 23 percent exempt at the University of Michigan in a 2009 survey; 41 percent expedited according to an earlier national survey), clogging the review process for higher-risk research. Moreover, standards for defining these reviews vary greatly, and these categories are not available for most CER studies and a great deal of other research on ordinary care.
Anomalies occur as a result of these burdens. When local programmatic or quality review projects on ordinary care do not use research methodology, they are spared the necessity of meeting intense regulatory requirements. Thus, the more rigorous the approach to quality assessment (and therefore the greater the likelihood of data validity), the greater the regulatory burden, with the result that good research is discouraged. Also, what is “generalizability”? The term is not defined by the Common Rule, and its meaning varies across IRBs. Is a study assessing a hospital’s approach to vaccination merely of value to that particular institution, or might it have broader (generalizable) value? If the latter, many hospital program assessments might be considered “research.” Further, should the definition of “generalizable” be based on intent to generalize, as is often done?
While publication is not a criterion for generalizability, does it at least suggest intent to generalize? And then, one may question the “generalizability” of the randomized clinical trial given its conduct in a separate environment, narrow entry criteria, etc.
Calls For Change
As already noted, many voices are now calling for progress in this area:
- Rulemaking within the Department of Health and Human Services is recommending an enlarged expedited category that allows the investigator to decide upon and start projects without prior administrative review, and provides for broader consent and definitive privacy and security standards.
- Experts convened by the IOM viewing the Common Rule from the perspective of the LHS advocated a risk-based framework for oversight in which the level of risk should determine the degree of oversight. Routine clinical assessments of quality improvement that are not separate from routine clinical care would fall under clinical and not human-research oversight and would not require IRB approval. Also, continuous-improvement and minimal-risk studies (even if randomized) should be exempt from the Common Rule. In the view of these experts, patient consent would not routinely be required in this situation.
- A recent Special Report of the Hastings Center Report offered a profound ethical evaluation of the current research regulatory system, based on the concept of the “common good” derived from philosopher John Rawls and also from the point of view of the LHS. The authors assert a moral imperative for learning and improvement of the healthcare system, point to a faulty distinction between clinical care and learning, and argue that “generalizability” is not a serviceable definition. They point out that systematic investigation (a part of the Common Rule definition of research) and the collection of data are now ubiquitous in clinical practice. The authors promulgate seven moral obligations based on creating just, high-quality healthcare and economic well-being, relating to the effects of the high cost of healthcare. (One of these is a new affirmative obligation of patients to contribute to learning, based on reciprocity for the benefit patients receive from learning activities.)
A Proposed Framework
My own synthesis is in general agreement with these proposals to improve the present system. We already have, in our clinical care and regulatory system, both the means and the rationale to change the research regulatory system. Rather than rely upon an artificial definition of generalizability as the triggering point for regulation, our focus should be on what is being studied. Is it low-risk ordinary care, or higher-risk new interventions? If the former, why not assign regulatory oversight to a pre-existing, legitimized system precisely designed to deal with this level of risk: the clinical oversight system? That system is designed for care and for programmatic evaluation of patient-care issues. Such an approach would eliminate undue burdens on low-risk research.
Using this framework, studies of usual care (whether randomized CER, database analysis, or quality improvement exercises) would fall under the “clinical roof.” Consider a comparative study of two ordinary (FDA-approved), commonly used treatments. Under the current system, such a study requires special research oversight. But why not assign it to the ordinary clinical system? That would certainly be the case if each treatment were administered separately.
Studies about new interventions (where there is an implicit risk because the interventions have not been tried before) or higher-risk research would fall under the research regulatory roof. Approved care should remain where it is now — on the clinical side — whether administered in an individual doctor’s office or within a protocol that studies how a particular intervention or test stands up against another usual care item. “Borderline projects” could be evaluated by IRBs as they are now. This approach offers a consistent regulatory approach based on risk.
Further, this approach has the advantage of being adaptable to the present clinical oversight system, which has become far more rigorous. This rigor reflects the development of much more stringent hospital and regulatory oversight of ordinary care over the past few decades, often in response to accreditation mandates. Among these changes (with some variability) are: much more developed administrative and committee oversight of care (including by hospital governance boards); clear sanctions for malfeasance; strict accreditation and internal hospital standards concerning reporting of adverse events and other quality and safety issues (analogous to such disclosures to IRBs); and more specific privileging for physician competencies. As a whole, these oversight structures better equip clinical settings to deal with the risk level of ordinary care, including risk to privacy and HIPAA adherence.
One question is whether the current clinical oversight system has the requisite expertise in research. In general it does, but if a particular setting does not have this expertise, it should obtain it if it wishes to do research. Medical centers should not undertake evaluations with research methods unless they have the expertise.
Like all research, this new approach requires the application of certain key imperatives. Patients must have the right of consent and must be assured of transparency in every aspect of healthcare (research or otherwise): they are in a dependent position vis-à-vis providers and bear the risk. In my view, this circumstance overrides one objection to requiring consent — selection bias due to the possible loss of certain subjects from studies; in any case, that issue could be addressed statistically. Patient consent to ordinary care studies could be obtained as part of general clinical consent.
Explanations regarding research in these consents could be used analogously to consent language regarding teaching. (Activities not now requiring consent should be excluded.) Additional consents could be requested for certain studies as they are for clinical procedures.
Finally, all methods of study must be sufficiently rigorous to ensure the production of valid data. Further, there must be an implicit expectation that the findings can be translated into care. Otherwise, the information is of no use and any inconvenience incurred by the patient is inappropriate.
Our current research regulatory system is not only fraught with unintended consequences; it is an impediment to a well-functioning healthcare system and the provision of evidence-based patient care. As government efforts continue to modify regulations for human research protection, we must continue examining the underlying principles of our current system, including the very definition of research. Relying upon our established clinical oversight system as a means to ensure human subject protection for ordinary care, or low-risk research, offers a solid step toward these goals. And while there are many details to address, the benefits of doing so far outweigh the difficulties. Clearly, the need to generate timely, evidence-based research has emerged as a moral imperative in its own right.
Disclaimer: The views presented here are solely those of the author and do not necessarily represent the views of the US Department of Veterans Affairs.