Editor’s note: A full list of coauthors and their affiliated institutions appears at the end of this post.

The work to prevent central line-associated bloodstream infections in intensive care units is one of the few national efforts to use empirical data to document a decrease in patient harm across the United States. A notable contributor was the On the CUSP: Stop BSI national initiative, which built upon the work at the Johns Hopkins Hospital and in the state of Michigan. This initiative spread to 1,100 hospitals in 44 states, the District of Columbia, and Puerto Rico. The infection rate fell to 1 infection per 1,000 line-days in the majority of hospitals, a rate deemed impossible just a few years ago.

In this post, we summarize the fractal infrastructure and overall results of the initiative, explore lessons, and offer policy recommendations for other national efforts to reduce preventable patient harm.


Many efforts contributed to the national decline of central line-associated bloodstream infections (CLABSI) in intensive care units (ICU). (See here, here, here, here, here, and here.) Instrumental in these efforts were the Centers for Disease Control and Prevention's (CDC) guidelines and infection control strategies, as well as hospital epidemiology and infection control programs. This impressive change offers an example of how the collaborative efforts of many, informed by evidence, research, and valid measurement, can achieve widespread reduction in preventable harm.

One contributor to the decline in CLABSI has been an unprecedented national commitment to reduce these infections. The knowledge that these infections could be prevented grew from a pilot intervention in two ICUs at the Johns Hopkins Hospital in 1998 to state-level collaborative projects, based on the Hopkins experience, in over 100 Michigan ICUs in 2003 and in 23 Rhode Island ICUs in 2004. In addition, a phased, cluster-randomized trial in two Adventist health systems in 2007 demonstrated a causal link between the pilot intervention and decreased CLABSIs; rates fell by 81 percent.

Based on this mounting evidence, the Agency for Healthcare Research and Quality (AHRQ), the Price Family Foundation, and the Sandler Foundation for the Jewish Community Endowment Fund funded On the CUSP: Stop BSI from 2008 to 2012. The initiative was led by teams from the Johns Hopkins Medicine Armstrong Institute for Patient Safety and Quality (Armstrong Institute), the MHA Keystone Center, and the Health Research and Educational Trust (HRET), which was the prime contractor.

We have described our efforts to pull external levers, in the form of social, economic, and regulatory pressures, to influence change. This post describes the fractal infrastructure and overall results of the On the CUSP: Stop BSI initiative, and explores lessons that could inform other national efforts to reduce preventable patient harm.

On the CUSP: Stop BSI National Initiative

Goals and interventions. The national initiative sought to eliminate CLABSIs at the unit level, to achieve a mean rate of ≤1 infection per 1,000 catheter-days, and to build capacity at the unit and state levels to support this initiative and future quality improvement efforts.

Unit-level teams implemented three interventions. The first intervention was the Comprehensive Unit-based Safety Program (CUSP) to improve safety culture and teamwork (program details have been published). The second intervention, to prevent CLABSI, included checklists of evidence-based practices for catheter insertion, maintenance, and removal; tools to identify local barriers to implementing these practices; and implementation guidance to ensure patients consistently received these practices. (Detailed description of this intervention has been published here, here, here, and here.) The third intervention was the measurement and feedback of CLABSI data to improvement teams and senior leaders.

Organization. The national initiative team, composed of HRET, the Armstrong Institute, and the MHA Keystone Center, created a fractal infrastructure for the program at the national, state, and hospital levels (Figure 1). This infrastructure provided clear vertical links between levels and horizontal networks within levels. It supplied program support at each level, supported a model for horizontal spread, and allowed for independence and interdependence, innovation and accountability, and scalability and durability.

Figure 1.  Fractal Infrastructure of National Initiative

National level. The national initiative was integrated as a major component of the Department of Health and Human Services’ (HHS) National Action Plan to Eliminate Healthcare-Associated Infections, launched in 2009.  The national team aligned efforts with a variety of agencies, including the CDC, the Centers for Medicare & Medicaid Services (CMS), and the Office of the Assistant Secretary of Health and Human Services; with several organizations, including the American Hospital Association and The Joint Commission; and with numerous professional societies and patient advocacy groups. This broad and diverse group of collaborators brought awareness and support to the initiative and synergized with other efforts to reduce CLABSI, including enhanced social pressure through public reporting of CLABSI rates and economic incentives for hospitals to reduce infection rates.

The initiative assigned each state a project coordinator from HRET, a data expert from MHA, and a research coordinator and faculty quality improvement researcher from the Armstrong Institute to coach and support hospital teams and state leads.

State level. Each hospital association recruited and worked with participating hospitals and communicated with the national team, and many partnered with their state health department and quality improvement organization. The hospital association coordinated the teleconference training, data collection, and collaboration among participating hospitals, advised the national team on local contextual barriers and facilitators, and built the infrastructure for sustainability. The program leads for each hospital association set the tone and led the initiative in their states.

Hospital level. All hospitals in participating states were eligible and encouraged to participate in the initiative. The hospital associations asked hospital chief executive officers (CEOs) to sign letters of commitment, promising to support staff time to participate and collect the required data, and to review their CLABSI rates. Hospitals were not compensated for participation. We asked hospitals to select one or more units to implement the intervention. This model of spread was used because we found that when unit-level clinicians and senior leaders are engaged and committed, the intervention is successfully implemented.

Each participating unit created a CUSP team to champion the project and submit data. The national team recommended that hospital teams include unit-level physician and nurse leaders; front-line medical, nursing and ancillary staff; an infection preventionist; hospital quality and safety leaders; and a senior executive. Hospitals with multiple units assigned a leader to coordinate internal efforts and communicate with the hospital association.

Timeline and rollout. In October 2008, HRET began recruiting hospital associations for the initiative, and participating hospital associations shortly thereafter recruited hospitals. States participated in one of six cohorts. Spacing cohorts over time allowed national team resources to be managed more efficiently, gave states that needed it more time to recruit facilities, and enabled lessons learned in earlier cohorts to be applied in later ones. In some states, additional hospitals joined after their state cohort started; the national team provided these hospitals with additional training.

Data collection and analysis. Trained infection preventionists used standardized definitions and surveillance methods from the CDC National Healthcare Safety Network (NHSN) to identify and report CLABSIs. Each month, CUSP teams entered data into the MHA Keystone central database, or data were uploaded by participating hospitals that submitted directly to the NHSN. The data included the number of CLABSIs and the number of catheter-days. We did not collect process measures because line insertions occurred unpredictably, it was not feasible to deploy observers to monitor compliance, self-reported compliance is notoriously inaccurate, and the prior cluster-randomized trial had demonstrated a causal relationship between the intervention and reductions in bloodstream infections.

Using the monthly data, we generated a quarterly rate and assigned each rate to a period relative to when the intervention was implemented: baseline (up to 12 months before the implementation period) or quarterly for the 18-month post-implementation period. The national team analyzed available data for the first five cohorts. We reported mean CLABSI rates over time. The Johns Hopkins University School of Medicine Institutional Review Board approved this research and waived consent for staff and patients (IRB NA_00022024).
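To illustrate the rate arithmetic described above, the sketch below pools monthly numerators (infections) and denominators (catheter-days) into a quarterly rate per 1,000 catheter-days, rather than averaging monthly rates. This is a hypothetical example for clarity, not the initiative's actual analysis code, and the monthly figures shown are invented.

```python
# Hypothetical monthly data for one unit: (CLABSIs identified, catheter-days).
monthly = [(1, 450), (0, 520), (2, 480)]  # three months = one quarter

# Pool infections and catheter-days across the quarter, then
# express the rate per 1,000 catheter-days (line-days).
infections = sum(i for i, _ in monthly)
catheter_days = sum(d for _, d in monthly)
quarterly_rate = 1000 * infections / catheter_days

print(round(quarterly_rate, 2))  # → 2.07 (3 infections over 1,450 line-days)
```

Pooling the denominators this way is what reduces the random error that makes single-month rates so unstable for low-frequency events like CLABSI.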

Results. Forty-four states, the District of Columbia, and Puerto Rico participated in the national program. Twenty-three states started the program in 2009, twelve states plus the District of Columbia started in 2010, and nine states plus Puerto Rico started in 2011. Collectively, over 1,100 hospitals and 1,800 CUSP teams participated in the initiative. Figure 2 illustrates the distribution of participating hospitals across the United States.

Figure 2.  Distribution of Hospital Participation

The percentage of hospitals participating within a state varied widely (range, 9 percent to 93 percent) and was largely driven by state hospital association recruitment efforts. The majority of participating units were adult ICUs (71 percent); the remainder were adult acute care units (24 percent) and pediatric units (5 percent). Hospital and unit characteristics are described in an AHRQ technical report. At the hospital level, 13 percent of data were missing at baseline and 11 percent at quarter 6. At the state level, 0 percent of data were missing at baseline and 4 percent at quarter 6.

The CLABSI rate across ICUs decreased from 1.9 infections per 1,000 line-days at baseline to 1.1 infections at 18-months post-implementation, a relative reduction of 41 percent (Figure 3). The percentage of ICUs and non-ICUs with zero infections for one quarter or more increased from 30 percent (baseline) to 68 percent (18-months post-implementation). A small percentage of ICUs continued to have CLABSI rates above 3 infections per 1,000 catheter-days.

Figure 3.  CLABSI Rate Decrease in ICUs Participating in National Initiative


The percentage of hospitals with a CLABSI rate of <1.4 infections per 1,000 line-days increased from 48 percent at baseline to 67 percent at 18-months post-implementation (Figure 4, panel A).  The percentage of states with a CLABSI rate of <1.4 infections per 1,000 line-days increased from 35 percent at baseline to 76 percent at 18-months (quarter 6) post-implementation (Figure 4, panel B).

Figure 4.  Percent of Hospitals and States by CLABSI Rate


Lessons Learned And Policy Implications

The On the CUSP: Stop BSI initiative was associated with reduced CLABSI rates among 1,100 hospitals in 44 states to levels once deemed unattainable. While we cannot establish a causal relationship between this initiative and reduced CLABSI rates, the consistency with prior results, including the cluster-randomized trial, provides support for a causal relationship. Still, many prior and contemporaneous activities supported these results, including public investments in research to measure and reduce CLABSI; hospital investments in infection control and epidemiology infrastructure and in ICU physician staffing; public reporting of infection rates; The Joint Commission's National Patient Safety Goals; and other hospital efforts to reduce CLABSI.

New technology likely also played a role in reducing infections. For example, most hospitals added chlorhexidine to central line kits as part of the Michigan project, and many hospitals across the US are using chlorhexidine-impregnated dressings and coated catheters. Whether pay-for-performance helped reduce these infections remains unclear and needs further study. Since this is one of the rare examples of a measurable reduction in preventable patient harm across the US, it is worth reflecting on the lessons learned from this initiative and how they could inform future federal programs and policy initiatives.

Lesson 1: A national program should be sufficiently ripe before a national roll out. A ripe program will have the following elements: interventions supported by robust evidence demonstrating that they reduce harm; a valid, standard measure of harm; a clear theory describing how the interventions reduce harm; and effectiveness trials demonstrating generalizability. Many national programs lack one or more of these essential elements, most often a valid, standard measure. Policy makers and funders should ensure that a program meets these criteria before a national roll out.

Lesson 2: A national program should have a clear chain of accountability, with a sufficient infrastructure at each level to support the work. We found that an effective structure for national efforts was a fractal infrastructure (Figure 1) that included teams at the national, state, health system, hospital, unit, and individual clinician levels. At each level, there should be goals and measures, a quality management infrastructure of leadership support and skilled staff with protected time and resources to conduct the work, accountability for achieving results, and opportunities to network horizontally to learn from each other. Efforts to improve quality in outpatient settings could use a similar infrastructure. Program leaders should define the vertical and horizontal links prior to a national effort.

A major barrier when implementing hospital quality improvement efforts is the lack of a quality management infrastructure with designated roles and resources. Ironically, the unit level (where care is delivered) most often has few or no resources in the form of dedicated staff time and training to improve care. Thus, sponsors must assess the local resources required and allocate them to hospitals and teams. Many programs have clear breaks in this chain. For instance, hospital leaders may commit to a program, perhaps assign a point person without determining the time commitment, and then retreat to focus on other hospital business. Conversely, clinicians may enroll in an improvement program but not seek or garner senior executive support. Every stakeholder must take responsibility for the work, from the CEO to housekeeping services.

Lesson 3: A national program should align the work of all stakeholders around a common standard measure. A program must be unified by a common, standard measure that is feasible, is used by stakeholders at all levels of the program, and that clinicians believe is valid and useful in tracking performance. Hospitals participating in the initiative had real-time access to their data. We provided monthly reports of the number of months a unit had gone without an infection because this was a concise gauge of the unit's performance. We reported quarterly CLABSI rates because there was substantial random error in monthly estimates.

Many national programs use one measure to inform local teams and another measure to evaluate the national impact, generally using administrative data. This can result in biased data both locally and nationally, limiting the ability to make inferences about whether the program worked.  National programs should collect data that can be aggregated from units to higher levels, ultimately making national estimates of their impact.

Lesson 4:  A national program should summarize the evidence and encourage local clinicians and administrators to modify the intervention to fit their culture and needs. The national CLABSI initiative worked partly because there was a centralized body to provide support at every level, and partly because it was organized by states into a clinical community led by local clinicians who adapted the intervention to local needs.

Interventions must be developed with clinicians rather than imposed on them. When interventions are imposed on professionals rather than developed with them, they are resisted, often not implemented, and sometimes ineffective. Too often, patient safety interventions are rigid and dictated to clinicians by managers or external groups.

In the national initiative, each state and hospital used local wisdom to implement the program. The data collected, CDC definitions used, and evidence-based CLABSI prevention practices were standardized, but implementation of the practices was locally modified. For example, every hospital developed a checklist of practices to reduce CLABSI. Although each hospital included the five evidence-based practices on its checklist, each hospital modified somewhat how it designed the checklist, how it implemented the checklist, and what interventions it used to ensure patients received the checklist items, ensuring the checklist was accepted and effective locally.

Researchers often find local modification challenging, anticipating that without a completely standardized intervention it will be hard to publish the results. Yet without local modification and acceptance, the intervention will likely fail. Therefore, we viewed dissemination as an adaptive rather than a dictated process, in which participating teams co-created the intervention.

Lesson 5: A national program needs an equal focus on technical and adaptive work. Technical work involves the more quantitative components of a project, such as the science of how to measure and reduce CLABSI. Adaptive work requires changing people's values, attitudes, beliefs, and behaviors. Improvement projects often focus most of their effort on the technical work, yet projects frequently fail from adaptive challenges, such as clinicians who do not support the project. In this initiative, the CUSP intervention addressed adaptive issues, such as safety climate, clinician engagement, transdisciplinary interactions, and a sense of community. Teams were encouraged to improve teamwork and to identify and mitigate hazards. They often identified CLABSI as a significant patient safety hazard, further reinforcing that CLABSI was an internal rather than an external concern. In addition, the national team trained local improvement leaders to lead adaptive change.

Lesson 6: A national program should start with the goal and work backwards, pulling as many levers as possible. The national team started with the goal of eliminating CLABSIs within hospitals and worked backwards, designing a multifaceted intervention that pulled levers at the national, state, and local levels. This approach traded the ability to evaluate the impact of any single component for the ability to quickly reduce CLABSI rates to levels as low as possible. Yet the most important lever is to encourage intrinsic motivation, so that clinicians are inspired to believe they can reduce patient harm.

In addition to the national partnerships described earlier, the national team used other tactics to increase the momentum to eliminate CLABSIs. They developed additional checklists to provide CEOs, hospital boards, and infection preventionists with tasks to support zero infections. The national team sent state hospital association executives a list of hospitals with CLABSI rates above 3 per 1,000 catheter-days and encouraged them to contact those hospitals' executives. Several hospitals joined the national program after their high CLABSI rates were publicly reported, and some joined because their boards required that they reduce hospital infection rates.

Lesson 7: Clinicians must believe the harm is an important problem capable of being improved. No clinician wants patients to suffer harm. Clinicians generally have a profound sense of individual accountability, especially when the causal link between their actions and the harm is short, direct, and unambiguous. When the causal link between the clinician's actions and the harm is obscure, the sense of accountability is substantially reduced.

At the start of the national initiative, clinicians generally felt that CLABSIs were still inevitable rather than preventable. Based on the Keystone work, we hypothesized that hospitals would generally start to reduce their CLABSI rates when physicians saw CLABSI as an important social problem that was capable of being improved.

To convince clinicians that CLABSIs were preventable, teams from the Keystone project shared their experiences with the evidence-based intervention and their success in reducing CLABSIs. Face-to-face meetings brought all state-based teams together annually. These collegial sessions were designed to maximize intrinsic motivation: they were part inspiration and part education; they included problem solving and community building.

Lesson 8: Data should facilitate learning rather than blaming. Data must be reported back to front-line staff to facilitate learning and provide feedback on the success of the initiative. When performance data are not offered, or are viewed as the responsibility of infection control personnel, front-line staff typically become indifferent to the work, and the project loses momentum and fails. Data can be used to motivate change when rates are high, but overemphasis on failures may encourage gaming or alienate hospitals that are participating voluntarily. Generally, program leaders should use the least coercive tactics possible, leaving stronger tactics to regulatory bodies. Moreover, these tactics are likely effective only when clinicians believe the data are valid, when trust is high, and when evidence is strong that improvement is possible.

Future Directions

Although the national initiative can offer lessons for other programs, further research is needed to better understand how to develop, implement, evaluate, and spread national improvement programs to reduce other types of harm. The Partnership for Patients initiative seeks to reduce nine types of harm plus readmissions. Although many patients are at risk for all of these harms, most hospitals will focus on a shorter harm list largely because current efforts to reduce harm require heroic efforts by clinicians with little consideration given to designing safe systems. In the future, technology and the integration of disparate technologies should play a larger role. Health care needs more efficient and effective methods to reduce all types of harms.


The nationwide reduction in CLABSIs involved many groups working collaboratively toward a common, measurable goal. The national initiative described here represented a novel collaboration among a variety of national, state, and local partners. The methods used and the lessons learned could inform other national efforts to improve patient safety.

This initiative offers hope that larger and durable improvements in patient safety are possible.  Nevertheless, patient safety efforts need to migrate from relying on clinician heroism to reduce one type of harm to designing systems that work to prevent all harms, partnering with patients, clinicians, engineers, and researchers.

Editor’s note: The full list of authors and their affiliated institutions is as follows:

Authors: Peter J. Pronovost, MD, PhD (1-3,5); Jill A. Marsteller, PhD, MPP (1,2,5); Kristina Weeks, MHS (1,2); Sam R. Watson, MSA, CPPS (6); Sean M. Berenholtz, MD, MHS (1-3,5); Christine A. Goeschel, ScD, MPA, MPS, RN (1,2,5,7); Julius Cuong Pham, MD, PhD (1,2,4); Bradford D. Winters, MD, PhD (1,2); Lisa H. Lubomski, PhD (1,2); David A Thompson, DNSc, MS, RN (1,2,7); Christine George, RN, MS (6); Rhonda M. Wyskiel, RN, BSN (1,2,8); Molly Federowicz (1,2); James B. Battles, PhD (9); Stephen C. Hines, PhD (10); Melinda D. Sawyer, MSN, RN, CNS-BC (1,2); John R. Combes, MD (10)

Affiliations: Armstrong Institute for Patient Safety and Quality, Johns Hopkins Medicine (1); Department of Anesthesiology and Critical Care Medicine (2); Department of Surgery (3); Department of Emergency Medicine (4); Johns Hopkins University School of Medicine, Baltimore, Maryland, Department of Health Policy and Management, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland (5); Keystone Center for Patient Safety & Quality, Michigan Health & Hospital Association, Lansing, Michigan (6); The Johns Hopkins University School of Nursing, Baltimore, Maryland (7); Weinberg Intensive Care Unit, The Johns Hopkins Hospital, Baltimore, Maryland (8); US Agency for Health Research and Quality, Rockville, Maryland (9); Health Research and Educational Trust, American Hospital Association, Chicago, Illinois (10)