Revisiting the Common Rule and Continuous Improvement in Health Care: A Learning Health System Perspective

By Richard Platt, Christopher Dezii, Barbara Evans, Jonathan Finkelstein, Don Goldmann, Susan Huang, Gregg Meyer, Heather Pierce, Veronique Roger, Lucy Savitz, and Harry Selker
December 30, 2015 | Discussion Paper

Introduction

In 2011, participants in the National Academy of Medicine’s (NAM, then the Institute of Medicine) Clinical Effectiveness Research Innovation Collaborative authored a discussion paper on the Common Rule from the perspective of continuous improvement and the learning health system. The paper highlighted the basic tenets of the NAM’s vision of a continuous learning health care system—in which “science, informatics, incentives, and culture are aligned for continuous improvement and innovation, with best practices seamlessly embedded in the delivery process and new knowledge captured as an integral by-product of the delivery experience.” (The Clinical Effectiveness Research Innovation Collaborative is part of the National Academy of Medicine’s Leadership Consortium for Value and Science-Driven Health Care. Available at: http://iom.nationalacademies.org/Activities/Quality/~/media/Files/Activity%20Files/Quality/VSRT/Core%20Documents/LearningHealthSystem.pdf (accessed December 23, 2015).) It also provided an overview of the current regulatory environment, which includes institutional review board (IRB) oversight based on the Common Rule and health information security and privacy protections based on the Health Insurance Portability and Accountability Act (HIPAA). (Guidance for scientific and ethical review of projects is currently based on the “Common Rule,” while the protection of health information security and privacy is based on HIPAA; see Health Insurance Portability and Accountability Act of 1996, 42 U.S.C. § 1320d-9. The Common Rule, the “Federal Policy for the Protection of Human Research Subjects,” is codified for HHS at 45 CFR part 46, Subpart A, with identical language adopted in the regulations of 15 federal departments and agencies, and it includes requirements for the creation and conduct of IRBs.) A central tenet of that paper was that “patients should be given the benefit of continuous improvement with assurances of the protection of their information and that they will not be subjected to more risk than that associated with usual care without their consent” (IOM, 2011). The paper’s authors, several of whom contributed to this current paper in the series, additionally proposed a framework for oversight and regulation that ensures protections for patients and study participants while also addressing the uncertainty that continues to hamper quality improvement (QI) and clinical assessment (Figure 1).

Figure 1 | Proposed framework for oversight of continuous improvement and research activities (see Selker et al., 2011).

Now, as federal agencies revisit the regulatory environment associated with the conduct of clinical research in light of the transformative changes to the health care landscape since 2011 (a Notice of Proposed Rulemaking for the Federal Policy for the Protection of Human Subjects, which was promulgated as the Common Rule in 1991, was published in the Federal Register on September 8, 2015), there is again an opportunity to explore and discuss the roles of continuous learning, improvement, and research activities at the point of care.

Development of this paper was proposed during discussions of the NAM Clinical Effectiveness Research Innovation Collaborative, in which several of the authors are participants. Because of this affiliation, we write from a learning health system perspective, especially as it relates to clinical effectiveness research and practice. This paper focuses on a subset of key principles foundational to continuous learning. We revisit the distinctions between activities that we believe should require formal oversight by an IRB and those that should not. We provide more in-depth discussion of two aspects of the learning health system: (1) the oversight of QI activities, and (2) the use of cluster randomization as a tool to advance learning, both for QI and for other operations activities. We focus especially on establishing an IRB oversight mechanism that protects individuals while encouraging health care systems to learn as much as possible from their ongoing activities. Many other important regulatory issues regarding research involving human participants are beyond the scope of this paper.

 

Changing Landscape

Since our original discussion of these topics, the health care environment has experienced advances in information technology infrastructure, improvements in the design and analysis of clinical research, and increases in systems-based research. We have seen the emergence of new research networks, such as the National Patient-Centered Clinical Research Network (PCORnet) and the National Institutes of Health (NIH) Health Care Systems Research Collaboratory, that encourage data sharing across institutions and the use of common data models with centralized governance. Funding mechanisms, including the American Recovery and Reinvestment Act and the Patient-Centered Outcomes Research Trust Fund, have contributed to the development of standardized infrastructure for comparative effectiveness and patient-centered outcomes research. This infrastructure also allows for the use of data derived through the process of care for continuous improvement within and across health delivery systems. Additionally, prompted in particular by the work of the Patient-Centered Outcomes Research Institute (PCORI), we have witnessed a shift in how patients interact with the process of evidence generation, with the proliferation of patient co-investigators, patient-led research networks, patient-generated data, and patient-initiated data sharing and research initiatives. The increased use of health information technology, patient engagement, and interconnected networks holds much promise for efforts in precision medicine and interoperability. These developments lead us closer to achieving a national learning health system designed to generate and apply the best evidence for the collaborative health choices of each patient and provider, to drive the process of discovery as a natural outgrowth of patient care, and to ensure innovation, quality, safety, and value in health care (IOM, 2007).

In parallel, we have also seen the increased use of multi-institutional QI and research activities. All of these developments raise new questions about which activities require IRB oversight, the definitions of “human subjects” and “minimal risk,” and the role of consent. As we learn to navigate the new health and health care environment, it is important that regulatory policies reflect this changing landscape, providing enough protection to adequately safeguard patients and their interests as well as enough flexibility to support continuous evaluation, generation of evidence, learning, and application of that knowledge for improvement.

 

Regulatory Environment

In September 2015, the U.S. Department of Health and Human Services and 15 other federal departments and agencies announced a Notice of Proposed Rulemaking to revise the Common Rule. We appreciate the work of the Common Rule writing team in developing a policy that addresses the rapidly changing health care environment and also considers the many public comments provided to the 2011 Advance Notice of Proposed Rulemaking. Additionally, we were pleased to find many of the recommendations from the 2011 NAM discussion paper incorporated into the revised proposal, especially those related to reduced restrictions on the use of routinely collected health care information and efforts toward streamlining informed consent requirements. We agree with the addition of authorization for secondary use of routinely collected data for future unspecified research, as long as patients are notified of this possibility before the data are collected. We believe that IRBs should be encouraged to waive the requirement for individual informed consent for comparable uses of legacy data, assuming appropriate data security protections are in place.

Additionally, the authors strongly support the proposed change that a single IRB have responsibility for review of most multisite trials. Currently, securing IRB approvals for what may be more than a dozen sites, after the initial institution’s IRB review, can delay a study’s initiation by more than a year. Moreover, the changes that one IRB makes sometimes require additional reviews by other IRBs, yet rarely, if ever, do these changes have important consequences for the protection of human participants. We believe the IRB should be selected by the principal investigator, with the approval of the funder. This change is true to what we understand to be the goal of these revisions: to take unnecessary inefficiencies out of the oversight process without compromising protections for human participants.

 

Generating Knowledge Through Continuous Improvement

For greater progress toward continuous learning, there is a need for additional clarification of when and whether QI activities ought to be considered research requiring human participant protections, as opposed to QI that is part of normal health care operations. The NAM’s vision of a learning health system involves a host of activities, including measurement, comparison, evaluation, systematic introduction of accepted therapies and commonly accepted low-risk treatments with an operational focus, and QI activities designed to bring care closer to accepted standards, as well as the sharing of experience and information from these activities. We believe that these activities do not add risk to patients beyond the preexisting risks of care, and that coordination of these activities among organizations either is, or should become, the norm.

From the authors’ perspective, it is critical to define which QI, research, or other continuous learning activities require federal oversight and which do not. Unnecessarily subjecting QI activities to IRB oversight impedes learning, slows progress toward closing gaps in the quality and safety of health care, discourages scientifically rigorous systems improvement and evaluation, adds no additional patient protections, and does not align with hospitals’ current authority to change system-level care (Casarett et al., 2000; Platt et al., 2014). Indeed, health systems across the United States are mandated to engage in QI; our goal is for this mandate to be met at the highest standards. We are concerned that the prospect of multiple institutions learning from such efforts, by disseminating what is found to other institutions, might trigger a flag for oversight even when participating patients face no negative impact beyond that of ordinary care. Overregulation can hamper efforts to improve care by comparing commonly used interventions, and the subsequent dissemination of lessons learned among health care organizations, efforts that in so many instances change the risk or burden profile for patients little or not at all.

Well-done improvement efforts should use the best available methods, resources, and patient information as part of an organization’s core operations. They should pose minimal risks and burdens to patients; should focus on bringing care in line with evidence-based standards and improving care beyond that currently delivered, as determined by leaders responsible for clinical care; should be led and sponsored from within the health system; should be subject to the oversight of usual clinical care as opposed to oversight by regulations governing research; and should measure processes and outcomes related to implementation. The Notice of Proposed Rulemaking has raised several issues that should be clearly addressed in any final rulemaking:

  • QI programs involving implementation of commonly accepted care practices, with the goal of increasing adherence to such practices, should not be subject to IRB review whether or not they also measure the outcomes of that practice, assuming that they do not predictably increase risks or burdens. The intention to assess the outcomes related to a QI activity by itself should not make the activity subject to IRB review. A basic tenet of the learning health system is the expectation of continuous learning from routine care. This is consistent with the need for institutions to conduct meaningful QI and with regulations imposed by oversight bodies (including those of local, state, and federal governments).
  • Evaluation of competing QI strategies for implementation of accepted practices should not be subject to IRB review, so long as the burden on patients is minimal (consistent with typical standards for QI projects, including anticipation of potential unintended consequences), the burden on staff does not detract from the overall quality and safety of patient care, all compared strategies are considered “standard,” and the program is sanctioned by the organization as a QI activity.
  • Evaluation of competing low-risk interventions that would typically be implemented in a QI framework without further research should also not be subject to IRB review. These typically are not direct biomedical treatments but ancillary aspects of care. Designs that introduce a practice in a subset selected at random or sequentially in an entire organization should not, by themselves, constitute research requiring IRB oversight.
  • Evaluation of minimal-risk activities that seek to improve care practices related to a local operational need should also not be subject to IRB review. Such activities may ensure that accepted practices are in place, but they may also include improvement processes that enhance the quality of patient care or the patient experience. The recognition that QI activities can improve care beyond accepted practices, or where accepted practices are unknown or poorly defined, is a key aspect of a learning health system.
  • Most QI seeks to intervene on processes of care and to measure results (processes and outcomes) over time, often using methods such as statistical process control. However, the use of other analytic assessment methods, such as interrupted time series analysis or randomization of clusters (rather than individual patients) between alternatives that all reflect accepted medical practice, including stepped-wedge randomization (i.e., systematic variation of care rolled out over time; a minimal schedule of this kind is sketched after this list), should not make the activity subject to IRB review.
  • Dissemination of QI results, or the intention to disseminate results, including by publication, should not by itself make the activity subject to IRB review. (HHS has taken the position, in guidance, that “the intent to publish is an insufficient criterion for determining whether a quality improvement activity involves research” and that “Planning to publish an account of a quality improvement project does not necessarily mean that the project fits the definition of research; people seek to publish descriptions of nonresearch activities for a variety of reasons, if they believe others may be interested in learning about those activities.” See FAQ guidance at: http://www.hhs.gov/ohrp/policy/faq/quality-improvement-activities/intent-to-publish.html (accessed December 23, 2015).)
  • Multi-institution collaborations on otherwise routine QI activities should not be subject to IRB review. Efficient learning across systems is critical to improving care nationally, particularly when events are so rare that learning within a single institution may take many years. HIPAA’s protections on the sharing of individual-level data afford sufficient protection for privacy.
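To make the stepped-wedge design referenced above concrete, the following is a minimal sketch, assuming hypothetical clinic names and a three-step rollout, of how such a schedule might be generated: every cluster ultimately adopts the practice, and only the timing of adoption is randomized.

```python
# Minimal sketch of a stepped-wedge rollout schedule. Every cluster starts under
# usual practice and crosses over to the new practice in a randomly assigned
# order, one step at a time. Cluster names and the number of steps are hypothetical.
import random

def stepped_wedge_schedule(clusters, n_steps, seed=0):
    """Return a {cluster: crossover_step} map with clusters randomized to steps."""
    rng = random.Random(seed)
    order = clusters[:]            # copy so the caller's list is untouched
    rng.shuffle(order)             # randomize the order of crossover
    schedule = {}
    for i, cluster in enumerate(order):
        # spread clusters as evenly as possible across the available steps
        schedule[cluster] = 1 + (i * n_steps) // len(order)
    return schedule

if __name__ == "__main__":
    units = ["Clinic A", "Clinic B", "Clinic C", "Clinic D", "Clinic E", "Clinic F"]
    plan = stepped_wedge_schedule(units, n_steps=3)
    for clinic, step in sorted(plan.items(), key=lambda kv: (kv[1], kv[0])):
        print(f"{clinic}: adopts the new practice at step {step}")
```

The design choice this illustrates is that the randomized element is when a cluster adopts a practice the organization has already decided to implement, not whether any cluster receives it.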

 

Additionally, many routine health care operations decisions that are not QI activities are made without strong evidence and are based on a host of factors, including cost. Examples include the selection of one or another default thiazide antihypertensive drug for a health plan’s formulary, the time of day medications are administered, approaches to improving adherence to clinical care pathways, and the specific composition of a rapid response team. We believe that decisions like these should be examined systematically and rigorously, generating evidence to inform care both locally and more widely, and that the learning health system should enable and encourage innovation and improvement, including through rigorous evaluation of alternative strategies. Oversight should encourage systematic learning from these activities, as discussed above for QI activities.

Clarification of these points was the impetus for the oversight framework developed for the original paper (Figure 1).

 

Cluster Randomization

Another issue that needs clarification is when informed consent should be sought from patients in research studies that use cluster randomization, that is, studies in which entire practices, institutions, health plans, insurers, workplaces, or whole communities are randomized. We focus here on studies that evaluate policies, processes, and default treatments that are typically decided at an institutional level.

The current regulatory environment does not provide sufficient guidance for oversight of cluster-randomized trials. We expect cluster randomization to be used increasingly to address operations questions because such trials often provide the clearest answer to how a policy or practice works under conditions of actual use, because they are operationally efficient, and because they are ideally suited to multisite research collaborations that can answer questions quickly. The oversight regime should facilitate the use of this research method.
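As a simplified illustration of the design at issue, the sketch below, which assumes hypothetical site names and policy labels, assigns whole sites rather than individual patients at random to one of two commonly accepted policies; an actual trial would typically also stratify or match on site characteristics.

```python
# Minimal sketch of cluster randomization: whole sites, rather than individual
# patients, are assigned at random to one of two commonly accepted policies.
# Site names and policy labels are hypothetical.
import random

def randomize_clusters(sites, arms=("Policy A", "Policy B"), seed=0):
    """Assign each site to an arm, keeping arm sizes as balanced as possible."""
    rng = random.Random(seed)
    shuffled = sites[:]
    rng.shuffle(shuffled)
    # alternate assignment down the shuffled list so the arms stay balanced
    return {site: arms[i % len(arms)] for i, site in enumerate(shuffled)}

if __name__ == "__main__":
    hospitals = ["Site 1", "Site 2", "Site 3", "Site 4", "Site 5", "Site 6"]
    assignment = randomize_clusters(hospitals)
    for site, arm in sorted(assignment.items()):
        print(f"{site} -> {arm}")
```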

Questions regarding cluster randomization arise in both QI and research. (A search of PubMed for the term “cluster randomization” indicates that these methods are increasingly used: the number of research articles identified rose from 16 in 2000 to more than 100 in 2015.) For QI with randomization, whether it is a reasonable (approvable) approach and what notification of patients should be required should be determined by whatever body oversees the institution’s QI program. As QI, this should not be within the jurisdiction of the Common Rule at all. If a QI program does not have a formal mechanism for oversight of such “nonroutine” QI projects, then an institution can designate an IRB to do this (Finkelstein et al., 2015).

For research, IRBs should take into account the normal processes for choosing the policies or practices affecting the cluster. In particular, when decisions are normally made by institutional leaders rather than by providers and patients, the determination of whether randomization is appropriate should be led by IRB members, or advisors to the IRB, who ordinarily make such decisions. Consent from members of clusters is often not feasible without substantially perturbing the care process or defeating the purpose of the study. Consent should be waived when this is the case.

 

 

Priorities for Action

Action Needed Regarding Quality Improvement Exclusions

The 2011 NAM discussion paper identified a need for clear guidance to IRBs about the distinctions between QI and research. To address this need, the authors developed a risk-based framework in which oversight is commensurate with the level of risk imposed by the study as well as with whether the assessment is primarily of operational value (as determined by the institution). We still maintain that a practical framework such as this should be used to help navigate the uncertainty of oversight.

As highlighted above, the Office for Human Research Protections (OHRP) has issued guidance at various times about the distinction between QI and research. (U.S. Department of Health & Human Services. Quality Improvement Activities FAQs. Retrieved from: http://www.hhs.gov/ohrp/policy/faq/quality-improvement-activities/ (accessed December 23, 2015).) However, we propose that additional direction and clarity be provided, perhaps through the development of an online tool (for projects near the border of QI and research) to allow QI programs and investigators to determine whether a given project is an excluded activity that does not require IRB oversight. A number of principles and frameworks have been suggested to differentiate QI from research (Finkelstein et al., 2015; Casarett et al., 2000; Ogrinc et al., 2013; Taylor et al., 2010; Hagen et al., 2007).

Attributes to guide decision making that an activity is QI might include the following:

  • The primary purpose of the project, as determined by leaders responsible for clinical care, is to improve care practices related to a local operational need. This includes bringing care in line with accepted standards, even when little evidence exists to define such standards.
  • If the project includes a novel method for implementation, the clinical practice implemented should be within accepted standards and the project should be consistent with the other attributes of QI.
  • The project’s risks and burdens to patients are consistent with practices in common use in QI.
  • There is meaningful leadership of local activities from within the health system. That is, decision makers related to the project are leaders of the clinical program or institution (or their delegates), and decisions regarding changes in processes of care are made at a programmatic level, rather than as individual clinical decisions at the provider/patient level.
  • If an intervention is applied differentially among groups (clusters), it is to test strategies for implementation of an accepted standard or practice, not principally to test the effectiveness of the practice itself.

 

Implementation approaches may affect providers of care differentially, but if done in the context of QI this does not generally constitute research. Introducing a practice in a subset of an organization selected at random, or sequentially across an entire organization using, for example, a stepped-wedge method to determine the order, does not, by itself, mean that the practice is research. None of the foregoing activities should require review by an IRB, though institutions should develop other processes for review of QI activities that are transparent and balance patient burdens with system improvement. QI activities with more complex designs (e.g., large-scale, stepped-wedge, or cluster-randomized designs) should be reviewed by an institutional oversight group with expertise in QI.

When a project with some QI attributes is determined to be research and warrants IRB oversight, we propose that facilities ensure that the IRB has specific competence in the QI domain.

Furthermore, timeliness of action is a very important consideration when dealing with an operational issue that needs to be addressed to improve patient safety. Notably, there is often no guidance for many of the issues that arise at a facility. Lack of national guidance should not impede a facility from addressing a patient safety issue using reasonable approaches, and this should not require IRB oversight.

In contrast to the characteristics of QI activities noted above, we believe the following attributes are characteristic of research requiring IRB oversight:

  • The study is not considered to be organizational operations by clinical leadership.
  • The study introduces policies, practices, and/or treatments not in common use.
  • The burden of data collection on participants (patients) is greater than that of normal health care operations.
  • The study randomizes participants to arms with very different profiles in terms of likely patient preferences (e.g., surgery versus medical care).

 

Action Needed for Oversight of Cluster-Randomized Trials and Stepped-Wedge Introduction of Practices

There is also a need for clear guidance to IRBs regarding oversight of research using cluster randomization, including stepped-wedge designs, when these study designs are intended to evaluate institutional policies and practices. This guidance should recognize that institutions frequently lack compelling evidence in favor of one or another policy and that there is affirmative value in developing evidence about the relative performance of measures they would otherwise introduce. This form of randomization typically provides much stronger evidence than do observational methods, such as interrupted time series analysis.

Our focus here is on trials of interventions that are made at an institutional or organizational level. Decision-makers might be clinical leaders, formulary committees, purchasing departments, or others. In the case of community-level interventions, there may be many decision-makers, but they are typically not members of the cluster. The Common Rule should build on this normal decision-making process for these kinds of interventions. Oversight should incorporate the perspectives of these decision-makers and this decision-making process.

For non-QI cluster-randomized studies comparing commonly accepted treatments, policies, or procedures, for which the decision-maker typically operates at the level of the clinical program or institution rather than the provider/patient dyad, IRBs should require that consent for randomization be obtained from the usual decision-makers for the intervention being evaluated. Additionally, IRBs should include members who are representative of similar decision-makers in assessing the merits of the proposed randomized study. We do not believe consent of the individuals who are members of the cluster should be required. We do support transparency regarding cluster-randomized trials to ensure that individuals and the broader community are aware of these activities.

 

Conclusion

Health care is changing rapidly. With the growth of learning networks, multi-institutional data sharing, and emphasis on implementation and dissemination of continuous improvement and evidence generation, we will continue to see more interest in the conduct of voluntary collaborative improvement activities.

We want to better ensure that these activities are undertaken with a strong scientific foundation and with the use of best methods. The regulatory environment should allow enough flexibility to support effective learning activities while reducing regulatory burden. Policies should provide concrete examples, as we have done here, of when it is appropriate to allow health care organizations and others to provide oversight for their own improvement efforts and when it is appropriate to seek IRB review for clinical research. Organizations have much to gain from comparing commonly used treatments that pose minimal risk, within their own settings and in collaboration with other systems. These efforts should take place in a regulatory environment that promotes innovation, generalizable knowledge, and improvement as part of common operating practice.

The revisions to the Common Rule, the first in almost 25 years, represent a remarkable opportunity to codify the essential attributes of a learning health system and provide clarity on those activities that should be routinely conducted to improve health and clinical care, but need not be independently reviewed by an IRB as research.

 


References

  1. Casarett, D., J. T. Karlawish, and J. Sugarman. 2000. Determining when quality improvement initiatives should be considered research: Proposed criteria and potential implications. Journal of the American Medical Association 283(17):2275-2280. https://doi.org/10.1001/jama.283.17.2275.
  2. Finkelstein, J. A., A. L. Brickman, A. Capron, D. E. Ford, A. Gombosev, S. M. Greene, R. P. Iafrate, L. Kolaczkowski, S. C. Pallin, M. J. Pletcher, K. L. Staman, M. A. Vazquez, and J. Sugarman. 2015. Oversight on the borderline: Quality improvement and pragmatic research. Clinical Trials 12(5):457-466. https://doi.org/10.1177/1740774515597682
  3. Hagen, B., M. O’Beirne, S. Desai, M. Stingl, C. A. Pachnowski, and S. Hayward. 2007. Innovations in the ethical review of health-related quality improvement and research: The Alberta Research Ethics Community Consensus Initiative (ARECCI). Healthcare Policy 2:e164-e177. Available at: https://pubmed.ncbi.nlm.nih.gov/19305726/ (accessed July 17, 2020).
  4. Institute of Medicine. 2007. The Learning Healthcare System: Workshop Summary. Washington, DC: The National Academies Press. https://doi.org/10.17226/11903
  5. Selker, H., C. Grossmann, A. Adams, D. Goldmann, C. Dezii, G. Meyer, V. Roger, L. Savitz, and R. Platt. 2011. The Common Rule and Continuous Improvement in Health Care: A Learning Health System Perspective. NAM Perspectives. Discussion Paper, National Academy of Medicine, Washington, DC. https://doi.org/10.31478/201110a
  6. Ogrinc, G., W. A. Nelson, S. M. Adams, and A. E. O’Hara. 2013. An instrument to differentiate between clinical research and quality improvement. IRB: Ethics & Human Research 35:1-8. Available at: https://pubmed.ncbi.nlm.nih.gov/24350502/ (accessed July 17, 2020).
  7. Platt, R., N. E. Kass, and D. McGraw. 2014. Ethics, regulation, and comparative effectiveness research: Time for a change. JAMA 311(15):1497-1498. https://doi.org/10.1001/jama.2014.2144.
  8. Taylor, H. A., P. J. Pronovost, and J. Sugarman. 2010. Ethics, oversight and quality improvement initiatives. Quality and Safety in Health Care 19:271-274. Available at: https://jhu.pure.elsevier.com/en/publications/ethics-oversight-and-quality-improvement-initiatives-3 (accessed July 17, 2020).

 

DOI

https://doi.org/10.31478/201512d

Suggested Citation

Platt, R., C. Dezii, B. Evans, J. Finkelstein, D. Goldmann, S. Huang, G. Meyer, H. Pierce, V. Roger, L. Savitz, and H. Selker. 2015. Revisiting the Common Rule and Continuous Improvement in Health Care: A Learning Health System Perspective. NAM Perspectives. Discussion Paper, National Academy of Medicine, Washington, DC. https://doi.org/10.31478/201512d

Acknowledgments

The authors would like to thank Nancy Kass, Johns Hopkins University, for her expert guidance; and Marianne Hamilton Lopez, National Academy of Medicine, for her valuable assistance in facilitating the development of the paper. Thanks also to Rosheen Birdie, National Academy of Medicine, for assistance in the preparation of the paper.

Disclaimer

The views expressed in this Perspective are those of the authors and not necessarily of the authors’ organizations or of the National Academy of Medicine. The Perspective is intended to help inform and stimulate discussion. It has not been subjected to the review procedures of, nor is it a report of, the NAM or the National Academies of Sciences, Engineering, and Medicine. Copyright by the National Academy of Sciences. All rights reserved.

