Editor’s Note: Most health policy analysts believe that better evidence about quality and value, obtained through comparative effectiveness research (CER), can drive better clinical decision making and could potentially slow the rate of growth in health care spending.  But the success of any national CER initiatives will depend on how evidence is developed, whether it is trusted, and how it is used by patients, providers, and payers.

Last fall, the Health Industry Forum of Brandeis University hosted a roundtable in Washington, DC, bringing together policymakers from HHS, AHRQ, NIH, and CMS with representatives from the Health Industry Forum’s Advisory Board to discuss how public and private investments in CER can be focused to achieve maximum value. Chaired by Professor Stuart Altman, the Forum conducts independent, objective policy analysis and brings together public policy experts and senior executives from leading healthcare organizations to address challenging health policy issues.

Below is a transcript of the September 16, 2009, Roundtable discussion, edited by Robert Mechanic, Director and Senior Fellow at the Forum, and Darren Zinner, a Senior Policy and Research Analyst there. A full list of participants and their affiliations appears at the end of this post.

MR. MECHANIC:  Thank you all for coming. Today’s meeting is the first in a series of Senior Policy Roundtables hosted by the Health Industry Forum, examining key healthcare reform issues. The Forum has been working on comparative effectiveness research (CER) since 2005, and has held half a dozen meetings examining options for the structure, function, and funding of a national CER effort, as well as physician and consumer perspectives, research methodology, and priority setting. Today, we have assembled representatives of the major constituents involved in comparative effectiveness research. This includes the primary recipients of the $1.1 billion stimulus funding: the Department of Health and Human Services (HHS), the Agency for Healthcare Research and Quality (AHRQ), and the National Institutes of Health (NIH).  We have also invited users of comparative effectiveness information, including the Centers for Medicare and Medicaid Services (CMS), major insurers and delivery systems, as well as several biotechnology and pharmaceutical manufacturers whose products might be affected by CER findings.  Lastly, we have invited several academic experts.  Our agenda will focus on three overarching questions:

1.   What research areas and infrastructure should we invest in first?

2.   What is the right process/structure for coordination and oversight of a robust national CER enterprise?

3.   How can we ensure that results from CER are relevant, valuable and actionable for patients, physicians, and health care organizations?

I would like to ask Dr. Hal Sox to kick off the discussion with a brief review of the IOM [Institute of Medicine] Committee on Priorities for Comparative Effectiveness Research that he co-chaired.


DR. SOX: If health reform legislation becomes law, a national program for CER will become a reality.  One of the most important tasks for the new entity will be to set a research agenda.  Therefore, the experience of the IOM committee to set initial national priorities for CER will be important in two ways. First, it will establish an initial agenda for CER.  Second, the priority-setting process used by the IOM committee will be a point of departure for future priority-setting by the CER program.             

First, I want to discuss our definition of CER, a synthesis of the common elements of four or five other definitions.  There are three key elements of this definition:

1.   CER involves comparing new things to the best available therapies in direct head-to-head comparisons. 

2.   The focus is on improving doctors’ and patients’ decision making together.

3.   The study population should be representative of clinical practice.

The sum of these key elements should focus researchers’ attention on trying to identify the clinical factors that predict a favorable response to one treatment or the other.

Our goal was to create a national priority-setting process that drew upon the needs of stakeholders as much as possible, given our limited timeframe of about 19 weeks until the report was due. The major form of stakeholder input was a Web-based instrument that was open for three weeks. It asked people to nominate condition-intervention pairs (e.g., several treatments for a specific condition).  We had a remarkable response: over 2,600 nominations from a total of 1,758 unique respondents. We eliminated obvious duplicates and nominations that were unresponsive to our definition of CER.  In the first round of voting, we had nearly 1,300 topics, which we split up among five subgroups of the committee, so that, for example, one subgroup would do all the cardiovascular disease and pulmonary disease nominations, and another would do all the neurology and pediatrics nominations. This strategy ensured that we would have a balanced portfolio that did not leave out major areas of medicine or major sub-populations.  We then spent two days discussing and re-writing each of 125 unique topics.  A final round of voting led us to 100 priority topics, which we grouped by quartile according to their summary priority score but left unranked within quartiles.  Surprisingly, the category that received the highest rating, by a substantial margin, was how to translate CER results into practice, a remarkable result given that each committee member assigned priority scores independently, without any group discussion of each topic’s importance.

In addition, the committee made three broader recommendations.  First, priority setting should be an ongoing process, so that as problems are solved, new ones move onto the priority list.  Second, a broadly representative oversight committee should be formed to look after the public’s interest in ensuring that funds allocated for CER were, in fact, used for research that met a commonly held definition of CER.  Third, the public should have a substantive role in all aspects of the CER program.  For example, the public should be involved in helping to decide the content of requests for applications.  Researchers should engage the public to help them design their research.  The public should have a role in the study sections, and journal editors should engage the public in helping to evaluate articles that are based on CER.  These ideas have ample precedent.  Both AHRQ in the US and NICE in the UK have engaged the public in their programs for CER.  Moreover, our committee had four excellent advocates representing the public.

We recommended that a national program should support large scale clinical and administrative data sets, which can contain a record of care as it occurs in the community – a key element of the definition of CER.  The committee recommended further research on dissemination of CER findings and their translation into practice.  Also, the committee recognized that CER will pose many methodological challenges for statistical and research practices, particularly in improving the analysis of observational data.  Therefore, it also recommended supporting research and innovation in the methods of CER.  And finally, the committee was concerned that the amount of research on the IOM’s agenda for CER and the amount of money allocated to CER could exceed the capacity of the current CER workforce. Therefore, a national program should allocate some of its support to expand that workforce and to support young investigators.  I hope we get a chance to discuss some of these topics further.


MR. MECHANIC:  Thanks, Hal. I’d like to start the discussion by asking representatives from other federal agencies how they are using the IOM report. 

DR. LAUER:  When Francis Collins gave his inaugural address at NIH, he listed five priorities, and one of them was CER.  At the NIH, we’ve taken the IOM Committee’s report very seriously.  One of our first NIH-wide exercises was to go through each of the 100 priorities and prepare a portfolio analysis against our existing activities. Do we have any current or planned projects in those areas, and if so, how adequately do they address the priorities? It turns out that for many of the priorities, we have ongoing projects or imminent plans.  As an example, the first priority listed alphabetically was an atrial fibrillation comparison. We actually have a trial, scheduled to start in about a year and enroll 3,000 patients, that addresses exactly that priority.

Within my own institute, the NHLBI [National Heart, Lung, and Blood Institute], we charged our staff to go through each priority related to cardiovascular disease, and compare it with our portfolio.  Where we didn’t have particular IOM priorities well-addressed, we are planning to convene experts in the field into working groups to start planning projects. So we have spent an enormous amount of time on the report, both NIH-wide and also within the individual institutes. 

But just because we have a project in a particular priority area doesn’t mean we should scratch it off the list.  Each priority in the IOM report could easily encompass 10 to 15 solid projects.  So for example, there are multiple ways of comparing strategies for managing patients with atrial fibrillation. 

MS. SLUTSKY:  From AHRQ’s perspective, we support research in the area where care is delivered.  I think it’s safe to say that the priority list wasn’t completely surprising in that it contained many items that have been of interest to health care decision makers for some time.  It has been good to have these issues so carefully laid out. It also provided us a good foundation for verifying that our operating plans for our work in comparative effectiveness under ARRA [the American Recovery and Reinvestment Act of 2009] were consistent with the IOM panel’s recommendations.

As one of our major activities, we are funding ongoing horizon scanning for CER topics.  We feel very strongly that critical appraisal is a wonderful tool not only for telling us what we know from the current scientific literature, but also for helping to identify research gaps.  Recently, we’ve broken out identification of research gaps into a separate activity with its own standalone process as a follow-on to the systematic review.  We plan to engage funders, patients, and other stakeholders in identifying and prioritizing the research gaps. We will also be creating a “citizens forum,” a framework for formally engaging the public in all aspects of CER.  In the end, CER must be usable and relevant to people who are making health care decisions every day.

DR. CONWAY: I was asked in the spring to be executive director of the Federal Coordinating Council [FCC] for CER, where we also had about three months to produce a report to Congress and the President.  Our group also focused on strategies to prevent, diagnose, treat, and monitor health conditions to improve health outcomes by developing and disseminating information.  We specifically prioritized comparative effectiveness in conditions based on prevalence, uncertainty, and potential impact, as well as the ability to evaluate across diverse patient populations and patient subgroups.  In the end, CER data infrastructure was the primary recommended investment for the Office of the Secretary dollars.  Secondary investments were dissemination, translation, priority populations, and types of interventions.  Believing that investments are most powerful when they have a multiplicative effect, we gave priority to, for example, data infrastructure development that also addresses priority populations or priority conditions.

I think both of these reports came together well to inform the overall portfolio. Our report fed into the Secretary’s office along with the IOM priorities and recommendations.  Like the NIH and AHRQ, we are walking through the process with every proposal for the Office of the Secretary dollars in ARRA, naming which IOM priority it addresses and which FCC criteria it meets.

MS. SYREK-JENSEN:  CMS has been mostly on the sidelines for this particular issue.  We’re obviously interested in the list of 100, and have gone through it, but I think Congress is going to tell us what our role is going to be in CER.  Right now, we’re in a wait-and-see mode, to find out what type of data comes out of these projects, and how CMS can use that data.

DR. JACQUES:  Clearly, there is concern about what CMS or other payers will do with this research, should the CER engine actually start to go full blast. I think one of the things that people don’t realize is that this would, presumably, produce more evidence than CMS could possibly digest in a year through its NCD [National Coverage Determination] process. Many of the issues that CER will address are not, in fact, part of fee-for-service Medicare; Part D is one example. My personal opinion is that for CER to truly have an impact, it needs to produce change at the physician level.  Rather than CMS shaping practice by coverage, which we really don’t want to do, physicians would realize that they should be doing A instead of B.


DR. PEARSON:  As fantastic as the work around funding has been over the last several months, I don’t think anybody would want to do it that way again, having to figure out how to allocate so much money so quickly.  How should we proceed going forward?  Let’s stay with the atrial fibrillation example.  How do you envision AHRQ, NIH, and perhaps some other federal entity trying to figure out how all of these pieces go together?  If NIH is funding this huge trial, what should AHRQ be funding or not funding, and how does that communication work?  Is there some kind of atrial fibrillation task force, or is it more broadly constituted?

MS. SLUTSKY:  One of the things that may not be completely transparent to everyone is that shortly after people began talking about ARRA support for CER, AHRQ and NIH started meeting.  The NIH has a long history of funding complex clinical trials and AHRQ has a long history of supporting systematic review and identifying research gaps.  AHRQ also funds new studies that are more pragmatic in nature.  This gave us an opportunity to explore how we could actually use the additional dollars to fund more complex studies involving patients who might traditionally have been underrepresented.  AHRQ and NIH quickly started getting together and making sure that we had a home for those research gaps, and that each organization knew what the other was doing.  And so, when AHRQ designed its operating plan, we wanted it to be useful to NIH and other funders of CER.

DR. CONWAY: Similarly, the FCC wanted to identify research gaps unlikely to be addressed by other organizations.  Since our report speaks primarily to the Office of the Secretary dollars, we were looking for opportunities that may not be funded in the private sector or by other federal entities like AHRQ or NIH.

In my personal opinion, coordination of how these initiatives come together is incredibly important.  If we think about cancer registries, as an example: CDC funds cancer registries, NCI funds cancer registries, and by the way, there are numerous cancer registries funded in the private sector.  We need to be thinking about how all of those come together, which is difficult.  It is incredibly important for us to think strategically about how to maximize these investments.  Investments in patient registries could be very powerful for CER, but these registries could also be used for quality improvement, quality measurement and reporting, or re-used as part of meaningful use of health IT [information technology], if that’s determined as one of the mechanisms.  So a recurring theme of our group was the ability to invest in things that actually have an effect across the spectrum, and then lay a foundation for future research and other activities that we think are important for transforming the healthcare system. 

MS. SLUTSKY: This also brings up the important issue of duplication versus replication.  As scientists we must make sure to communicate that replication or expansion of a study population is not the same as duplication.  You can’t always roll every patient population or every single aspect of care into the same study. 


MR. MECHANIC:  The ARRA gave us one funding model with specific appropriations to individual agencies and an overall coordinating structure.  Now we have bills in the House and the Senate that would set up a center for comparative effectiveness, either as an independent entity or within AHRQ with an oversight board.  Is there a way to organize CER funding that makes the most sense, and would lead to better results five years from now?

MS. SLUTSKY:  I doubt that any of us know what ultimately might happen, but I like to think that form follows function.  The same priority activities need to happen regardless of location, and they require a large degree of collaboration, cooperation, and understanding of the unique roles that different types of research and researchers play.

DR. PEARSON:  Then what are the pieces necessary to make it work together, regardless of where it lives? How do you harness all of those capacities, both inside and outside government, in a way to maximize collaboration and minimize turf battles?

MS. SLUTSKY:  We need to engage the research community in ways that we’ve not engaged them before.  The research community and the users of research need to become closer together so that research questions are relevant to health care decision makers, and those decision makers help researchers understand their contextual needs.

DR. MCNEIL:  That’s a very good point.  It’s easy to sit in a room like this and say, “Patients with atrial fibrillation need study X to compare two treatments.  Let’s get some investigators to do study X.”  But such a study may not be feasible in a reasonable time frame.  And what is feasible may not provide the data that your agency would be proud to provide for improved patient or physician decision making.  Take proton beam therapy as a popular example for a potential comparative effectiveness study.  It’s a technology that is not widespread, and there are various reasons it is located in one site versus another.  Do we believe that registry data from these different sites would lead to conclusions that would be widely applicable?  Similarly, do we believe that those sites that have already spent millions of dollars on the equipment will do an RCT [randomized clinical trial]?  I don’t think so.

Another issue deals with the new types of infrastructure that are needed to optimize collaboration.  Clearly, new sites for patient accrual are needed so that conclusions will be more generalizable.  But these community and field sites seldom have experienced investigators or support personnel to make the system work.  It will be challenging to develop an expanded infrastructure in a timely fashion.

DR. LAUER:  When I was at Cleveland Clinic, one of the most common complaints that I heard from young investigators was, “Well, this is all very well, and I find this work incredibly interesting, but nobody’s getting funded.  Why should I do this?”  Now, if developing a CER infrastructure or CER enterprise means that there’s going to be a greater devotion of federal funding to clinical research, that’s got to be a good thing, and money will make things happen.  It will lead to more young investigators going into this field.  It will lead to the building of new resources.  It will get investigators engaged.  So I think one critical element is just the fact that the money is being put out there, for clinically relevant research. 

The other key component is communication and collaboration.  There is no greater stimulus to make that happen than money.  So when we put out RFAs or RFPs in which we say, “We have set aside a certain amount of money, and we will pay if you can bring together folks who normally don’t talk to each other to do a certain kind of project,” that will make it happen.

So I think that there are two critical themes.  Number one is the actual financial commitment.  The second is encouraging maximum communication because science is inherently messy, because replication is critical, and because the best science happens when you bring together people from different disciplines and backgrounds.

MS. SLUTSKY:  We can’t ignore the workforce and infrastructure needed to do CER on a large scale. Crosscutting throughout our activities are investments in institutional training, career development, and methods research.  Additionally, the importance of the needs of decision makers and their continued input to the process has been emphasized. 

DR. LAUER:  Another part of the vision is a much greater degree of public engagement in the research enterprise.  Relatively few patients have ever actually participated in research studies.  There have been some exceptions.  In pediatric oncology over the last 40 or 50 years, the participation rate in clinical trials has been enormously high because parents and the medical community have been engaged.  The results speak for themselves.  Death rates from pediatric cancers have dramatically declined over the last 40 to 50 years.  There is still a lot of work to do, but that’s an example. 

There is a hospital in Munich, the Munich Heart Center, where I have been told by one of their staff that 90 percent of the patients admitted to the hospital are enrolled in a clinical research study.  They do trials and have an excellent patient registry.  There have been two consequences.  First, they do great research, and some of their results have directly affected clinical practice.  Second, the quality of care that’s provided is amazing, because if every patient is in a protocol, you’re going to make sure that everything is done right.  From the patient’s point of view, not only do you get to be part of the clinical research enterprise and advance knowledge, but you’re probably getting much better clinical care than you would be getting anyplace else.  That’s not the situation for most care in this country.  It’s given completely outside of the research enterprise.  One nice consequence would be a much greater degree of engagement among patients and physicians to get a much higher level of participation in research. 


MS. PAYNE:  What’s the role for the providers?  I work for Ascension Health, which is the largest not-for-profit health system in the country.  When the stimulus bill was passed, there was a great deal of interest from our providers about how we could contribute.  We have lots of data.  But a lot of the funding seemed to be directed toward narrow projects that were specific to certain priorities that you had established.  There really wasn’t any infrastructure funding to figure out how to create data sets.  In comparison to Europe or Britain, where all the hospitals are contributing, maybe we should take advantage of this opportunity, especially among providers that are interested in doing this.  What’s the vision for using data that’s out there already but not in a usable form, or that’s usable but nobody is using it?

DR. LAUER:  We plan to fund a number of projects that involve large data sets and large data registries.  But you raise a great issue, and it dovetails with what I was saying before, that if monies are made available that are specifically dedicated to this kind of research, it would be great to bring together a health system like yours with researchers and information gurus who analyze your data. 

DR. CONWAY:  I think that is certainly a possibility, whether it’s through the Recovery Act or longer off.  There may be opportunities for big delivery systems to contribute some common data infrastructure elements.  For the dissemination and translation piece, delivery systems could receive grants or contracts to implement programs and then measure results.

DR. MCNEIL:  But I assume these are data from an EMR [electronic medical record] or something similar.  Can these be used to definitively answer our questions? We need to be clear about the potential uses of observational data.  We have talked a lot about EMRs, but as of now we do not yet have a series of rich experiences with multiple different EMRs with different data elements.  Thus, beyond safety we have little information about how the EMR of today (or the slightly improved EMR of tomorrow) will lead to definitive answers, particularly when considering comparisons of tests and treatments.  I think it is more likely that these existing data systems will be more useful for identifying the effect of changes in the process of care.

MS. SLUTSKY:  I think it is very important to consider the role of hypothesis generation.  Large data sets can be very useful for framing contextually what issues one might want to invest in. But you can’t [often utilize these data in their current form], and we need to be constantly aware of the tradeoffs between retrospective and prospective analyses, both in terms of causality and external versus internal validity.

DR. LAUER:  Two of the strongest areas for these very large data sets are in hypothesis generation and finding safety signals.  This is what the FDA [Food and Drug Administration] Sentinel Program is all about. Evaluating changes in systems is more difficult unless it’s done in a systematic way — no pun intended.  Let’s say you decided you’re going to implement a quality improvement program.  Some investigators have successfully worked with large health plans to put together either a randomized cluster trial or a time series trial where you roll it out sequentially.  And you randomize the rollout so that Site A will be randomized to get it before Site B, which will be randomized to get it before Site C.  That kind of a study is actually quite robust, and you can learn a lot from it. 

The “Technology A versus B” study is probably one of the more popular and most problematic because of the issue of confounding by indication.  Without randomization, you’re never sure if the population who got A was substantially different than [the population who got] B, and whether you have selection biases. So the “Technology A versus B” study is helpful if you’re generating hypotheses.  It’s also useful for extending the results of clinical trials to underrepresented patient groups.

DR. MCNEIL:  When I talk to groups that are not as educated or experienced in comparative effectiveness as this group, they generally believe that CER studies are restricted in scope: they say, “Aha! It’s Test A versus Test B. Or Treatment A versus Treatment B. That’s going to give us the answer.”

MS. SLUTSKY:  That is one of the myths that is important to dispel.  In actuality, most care is not delivered with decisions that center around drug A versus drug B or device C versus device D.  Most care decisions are more complex and often revolve around making decisions between pharmaceutical treatments versus surgery versus watchful waiting, for example.

DR. PERFETTO:  We’re trying within the industry, as well, to dispel that myth.  I like to think about it as managing expectations around CER.  We have some people not too far down the street here in Washington DC who think that the magic study is going to come out of NIH and AHRQ, and it’s going to give us the exact answer we need.   There are people who think that one study, not 10 or 15 studies, is going to give us the answer.  I think we have to manage these expectations because this is looked upon as though we’re going to solve the healthcare crisis by just doing a few studies.  They think we’ll push the F7 button and have these magic databases that will help us do all of this.  These people naively think one study will fix this, and they panic at what that one study will be.  I know that my industry colleagues who are here and I spend a lot of our time talking our coworkers down off the ledge every day over this.


DR. GILLESPIE:  As a recipient of knowledge and not a creator of it, I want to speak to what I think is the other side of the obligation here: cycle time and speed of implementation.  As a practicing physician, 80 percent of the decisions I make every day are based on my experience and expertise, rather than on evidence.  Because I’m impacting individuals every day, the speed to market for this effort is quite important.  We need to step up towards that need and not just say, “Well, we’ve got to have Class A evidence in order for us to feel right as a research community.”  I think we need evidence that is good enough to be better than what I’m currently using to direct patient care.  The research community needs to push itself on what’s good enough from an evidence basis to get faster cycle times.  I think that will create more support outside of government for the kinds of work we’re talking about here. 

If we can get some quick hits, some things out of the priority list that are more easily concluded than others, should we not get those to the top of the list? Let’s get them done so we have a track record of success with a few of these things to demonstrate the validity of what we’re trying to do. I’m running down the field, ready to catch the pass.  When is the ball coming so I can actually do something better?

DR. LAUER:  This is a very interesting and murky question.  There are situations where treatments and technologies are adopted way too fast, and they wind up doing an enormous amount of harm.  When I was a cardiology fellow, I was taught that hormone replacement therapy should be standard cardiovascular treatment.  Lots of little studies had been done and none of them were really particularly good.  They were based on surrogates.  They were observational and they all seemed to suggest that there was a benefit. But in the end, how many women were hurt because of this?  Another example: bone marrow transplantation for breast cancer was so widely accepted that there were lawsuits because some health plans were refusing to cover it.  Once the proper studies were done, it turned out that it didn’t work.  Today, a more controversial topic is PSA [prostate-specific antigen] testing.  I think 75 percent of men over the age of 50 get a PSA, and now clinical trials have been completed, suggesting that’s probably not all that effective. 

Now, sometimes it’s done right.  CT [computerized tomography] scanning for lung cancer screening has not been widely adopted, which I think is a good thing.  The National Lung Screening Trial is now underway, in which 50,000 patients have been randomized to get either a CT or a chest X-ray.  They are now in the follow-up phase, and we’ll have an answer in a year or two.  If it turns out that the patients who got a CT had a better outcome, then that practice can be implemented and lots of people can be helped.  If it turns out that they have a worse outcome, no better outcome, or are subjected to a lot of unnecessary procedures, then we will have saved a lot of trouble.

Some people write that this is not just a policy issue or a scientific issue, but an ethical issue.  If we’re pushing for a woman to get hormone replacement therapy or to get bone marrow transplants for metastatic cancer without having proper evidence that these treatments really work, we’re making them guinea pigs, but without a proper experimental design, and without proper consent.  I think the question you are asking is right on.  It’s a very, very important question, but often, the assumed answer is that we need to do things fast, and we need to get things adopted, even if we don’t have the best evidence available.  That’s a big mistake. 

DR. CONWAY:  I support what you’re saying, but, I think there’s a tendency to be too conservative.  The examples you’ve provided are centered on harm that was done by not having enough evidence.  But we don’t really scrutinize where we could move a little faster and deliver something to the practicing community.  When we think about implementing evidence, we can tell practitioners that more evidence is on the way.  I think practitioners get this.  But we still need to think about the level of evidence and the question. 

DR. SOX:  Imperfect evidence doesn't provide certainty; it gives you a step in the direction of certainty.  Quantifying the level of certainty that a research finding is true is a challenge that experts in Bayesian statistics are eager to take up.  In a Bayesian approach, each new research result updates the probability that the finding is true.  Having perfect certainty may not really be necessary for some decisions.  There may be a threshold probability above which we decide the level of evidence is high enough to make a policy decision, such as the content of a practice guideline.
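[Editor's note: Dr. Sox's Bayesian framing can be illustrated with a small numerical sketch.  The prior probability, likelihood ratios, and policy threshold below are entirely hypothetical, chosen only to show the mechanics of updating certainty as studies accumulate.]

```python
# Hypothetical illustration of the Bayesian updating Dr. Sox describes.
# All numbers are invented for demonstration; none come from any real study.

def update(prior: float, likelihood_ratio: float) -> float:
    """Update the probability that a finding is true, in odds form:
    posterior odds = prior odds * likelihood ratio of the new evidence."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Start skeptical: a 20% prior probability that the treatment effect is real.
p = 0.20

# Each successive (hypothetical) study supplies supportive evidence,
# expressed as a likelihood ratio greater than 1.
for lr in [3.0, 2.0, 4.0]:
    p = update(p, lr)
    print(f"after study (LR={lr}): P = {p:.2f}")

# A policy threshold, as Dr. Sox suggests: act once certainty is high enough
# to support, say, a practice-guideline recommendation.
THRESHOLD = 0.80
print("evidence sufficient for a guideline" if p >= THRESHOLD
      else "more evidence needed")
```

Each study moves the probability stepwise toward (but never to) certainty, which is the point of the threshold: the decision rule fires on "high enough," not "perfect."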


DR. MCNEIL:  The priority to deliver better healthcare by translating CER into practice could be interpreted in one of two ways.  One is an increased need for evidence about treatments for, say, atrial fibrillation.  The other is the need for delivery sites to make sure that whatever they deliver is done well.  This dichotomy is important to discuss with regard to the CER mandate.  Are we talking about new evidence or translation of evidence?  If there is $1.1 billion available, should we be spending 80 percent of it on collecting new data and only 20 percent on figuring out how to implement those findings?  Or should it be the other way around?

MS. SLUTSKY:  We have a lot of work in the pipeline from both NIH and AHRQ that could easily be translated now, so I don’t think we’re taking from one area – new evidence – to fund another, translation.  It is important to make sure that we provide an incentive for innovation in how we translate and implement evidence.  Many studies and systematic reviews have shown there is no one right way to do this, but a variety of activities.  Translation and implementation really have to be tailored to the setting and population that you’re working with. 

DR. LAUER:  Both are important, but there is clearly a need for more evidence generation.  As an example, in cardiology we like to think of ourselves as being rather evidence-based.  Sid Smith and Rob Califf did a review of the current recommendations and guidelines put out by the American College of Cardiology and the American Heart Association.  What they found is that only 11 percent are based on Class A evidence — that's multiple randomized trials — and 50 percent are based on opinion: consensus opinions and case reports.[2]  Here we are in a field that prides itself on being more evidence-based than others, and yet when you look at those data, much of what's going on in practice is not based on real evidence.

DR. CONWAY: I can’t give an exact figure, but you could imagine an HHS investment in dissemination and translation of about 15 to 25 percent of our stimulus funding, given that we named it as a secondary investment. 

MR. MECHANIC:  It’s an old cliché, but everybody talks about the 17 years it takes for a finding to move from discovery into practice.  How are people thinking about accelerating that? 

MS. SLUTSKY:  It is important to foster innovation in approaches to dissemination and implementation, as well as to try to reach populations that have traditionally been outside this information stream.  Science is and should be dynamic; it's rare that a single study is enough to change practice.  This is where critical appraisal and systematic review can look across a body of literature and translate both the certainty and the uncertainty to address decision makers' needs.  Also, it's sometimes just as valuable to know that there is no evidence. 

We're also looking at innovation in translation activities and in what we want to disseminate.  Right now, a lot of what we disseminate comes at the beginning and end of the pipeline: research gaps on one end and systematic reviews on the other.  Hypothesis-generating studies are important to communicate to the research community, and to the funding agencies, saying "this lays out unanswered questions that you really need to think about."


DR. PEARSON:  I see this question as centered on how we can ensure that CER results are relevant, valuable, and actionable.  That means we have to make sure that the results do not just diffuse, but that they actually affect behavior.  The fear, on the other side, is that they will be overused or misused.  We need some process for making sure that inappropriate actions aren't taken on the basis of limited evidence.  I'd like to ask the representatives from pharmaceutical companies: what are your concerns about how this information could be interpreted?

MS. BRYANT-COMSTOCK:  From a pharmaceutical industry view, I believe some of the fear is that the information will diffuse out in a way that is narrowly interpreted as an "A versus B" decision.  I don't see this happening, but there is a general concern about how the information will get diffused.

MS. SLUTSKY:  It is important to try to understand what underpins the resistance to and fear of CER, and to tease out what people are frightened of.  I think we all have some of the same goals: trying to understand what's happening to patients who fall outside the norm, trying to answer that hallway conversation you have at 11:30 at night when your kid shows up with bizarre abdominal pain.  But I also think we need to push through some of these barriers, which are driving us away from a common goal.  Many people are fearful that there are going to be winners and losers in CER.  But that just isn't the case.  People just aren't biologically made that way.

MR. MCGOWAN:  I think this relates to the question of whether CER will be given a chance to operate in practice.  Assume that we're down the road five or ten or fifteen months, and there are findings from CER that are at odds with advertising and with the interests of manufacturers.  What are the implications of conflicts between CER findings and the interests of manufacturers, pharmaceutical and device companies, even group practices that are identified with certain types of choices? 

DR. JACQUES:  I think those pressures already exist when you look at the amount of suppressed research data that never sees the light of day.  We’re just talking about the same pressures.  But could you overcome them if you had a more transparent design and conduct of these studies rather than what happens now, which is more proprietary?

MR. MCGOWAN:  We already have a situation where a significant number of patients come in and basically self-prescribe.  They ask their physicians for a drug because they’ve been influenced by advertising to develop the expectation that that’s the appropriate drug for them.  Maybe even influenced to the point of having symptoms consistent with the need, and that’s just one example.  When comparative effectiveness is a reality and it contradicts that kind of advertising, what’s going to happen?  Manufacturers are pretty smart.  They understand what’s coming. 

DR. PERFETTO:  In the pharmaceutical industry, we keep saying to our colleagues, "If we can do the right kinds of studies in the right kinds of populations, maybe our product won't be best for everyone, but it will be best for certain subpopulations.  That's our market, and we are most effective there."  I've been doing this since 1990, and it's taken that long for the lesson to sink in.  This largely has to do with the visibility that CER has gotten over the last few years, but we now have people at the very top listening about making sure that we're doing the right studies.  We are revamping the kinds of studies we do and the way we think about this.  The world has also come to the realization that the question isn't "Drug A versus Drug B," but rather "Cocktail A plus radiation plus a surgical procedure versus Cocktail A, maybe without the radiation."  Treatment is much more complicated than a simple head-to-head drug comparison, and that's really the question we need to get at. 

Jean is right that maybe there was a knee-jerk reaction of, “Oh, no, we’re going to have all of these problems and this is going to be on us.”  But I think the tide has shifted.  Moving the pharmaceutical industry is like turning the Queen Mary, but we’re doing it, and I think that’s a very positive sign. 

DR. CONWAY:  If there are patient segments where I can show that my drug or therapy combination works well, that completely incentivizes value and personalized medicine.  Whether you're industry, a patient, or a provider, this is a win-win-win.  So I think, "How can you be scared of this?"  Obviously, there are some communication issues we need to work on, and we're certainly trying to do that.

DR. PERFETTO:  So I think the tide has turned, and we see that in the IOM [Institute of Medicine] and FCC [Federal Coordinating Council for Comparative Effectiveness Research] reports.  There are actually two shifts that have happened.  The industry is changing in how it thinks about CER, but CER has also changed.  We're now thinking about patients and treatment much more holistically, and we see that in these reports.

DR. SOX: I’d like to just pick up on this point.  CER is about trying to find out what’s best for the patient in the room and to develop a scientific basis for that, and I think you made that point very powerfully.

MR. MECHANIC:  We had promised to end on time, so I'll wrap this up.  Thank you to Hal and Patrick for their presentations and to everybody else for a great discussion.

Roundtable Participants:


•  Lynda Bryant-Comstock, Director, Medicare Quality & Patient Outcomes, GlaxoSmithKline

•  Patrick Conway, Chief Medical Officer, Assistant Secretary for Planning and Evaluation, U.S. Department of Health and Human Services

•  Robert Donnelly, Senior Director, Health Policy, Johnson & Johnson

•  Jennifer Druckman, Health Policy Advisor, Ascension Health

•  William Gillespie, Chief Medical Officer, EmblemHealth

•  Louis Jacques, Division Director, Coverage and Analysis Group, Centers for Medicare and Medicaid Services

•  Steven Kelmar, Senior Vice President for Policy and Governmental Affairs, Aetna, Inc.

•  Michael Lauer, Director, Division of Cardiovascular Sciences, National Heart, Lung, and Blood Institute

•  Daniel Leonard, President, National Pharmaceutical Council

•  Brian J. Maloney, Associate Director, Federal Health Policy, AstraZeneca Pharmaceuticals, LP

•  Daniel T. McGowan, Board Member, Health Industry Forum

•  Barbara McNeil, Ridley Watts Professor of Healthcare Policy, Harvard Medical School

•  Robert Mechanic, Senior Fellow and Director, Health Industry Forum, Brandeis University

•  Mary Ella Payne, Vice President, System Legislative Leadership, Ascension Health

•  Steven Pearson, President, Institute for Clinical and Economic Review

•  Eleanor Perfetto, Senior Director, Reimbursement & Regulatory Affairs, Pfizer, Inc.

•  Murray Ross, Vice President and Director, Kaiser Permanente Institute for Health Policy

•  Jean Slutsky, Director, Center for Outcomes and Evidence, Agency for Healthcare Research and Quality

•  Harold Sox, Co-chair, Committee on Comparative Effectiveness Research Prioritization, Institute of Medicine

•  Tamara Syrek-Jensen, Director, Coverage and Analysis Group, Centers for Medicare and Medicaid Services

•  Darren Zinner, Senior Policy & Research Analyst, Health Industry Forum, Brandeis University