Comparative effectiveness research faces obstacles in changing how doctors practice
Pay incentives, regulation and ambiguous results hamper the impact of such studies on patient care, experts say.
By Kevin B. O’Reilly — Posted Oct. 22, 2012
The comparative effectiveness research program that is getting $3.5 billion in federal funding through 2019 under the Affordable Care Act aims to deliver actionable information to help physicians and patients decide among competing treatment options.
That money is meant to fix what many experts see as a market failure. To get Food and Drug Administration approval, pharmaceutical companies need not demonstrate that their drug is superior to others already approved. Once drugs and other treatments are available on the market, there is little financial incentive for companies to fund expensive studies that may show their option is inferior.
Yet an analysis of high-profile comparison trials during the past decade suggests that a lack of funding for comparative research has not been the only obstacle to using top-flight evidence to shape patient care. Even when they contradict prevailing practice, comparative effectiveness studies often fail to translate into substantial changes in patient care, said a study in the October issue of Health Affairs.
For example, a 2007 study found that patients with coronary artery disease received similar survival benefit and angina relief with medication alone, compared with medication and percutaneous coronary intervention. How did this finding affect patient care? “Little or no change in practice,” the study said.
A few factors could explain why head-to-head trials sometimes miss the mark in shaping treatment choices, said Justin W. Timbie, PhD, the study’s lead author. Fee-for-service payment systems may reward more invasive interventions, even when they are less effective. Or the relevant options being studied may get supplanted by a new treatment before the results are published, he said. By the time a key study of coronary stenting was completed, a new drug-eluting stent was in wide use.
Consensus needed on study methods
Another barrier to translating comparative effectiveness research into practice is that trial results can be ambiguous due to differing opinions about clinically meaningful effect sizes and which study outcomes are most important.
“One thing that we found in our study is that there’s generally not a shared understanding of the findings from this research,” said Timbie, an Arlington, Va., health policy researcher for the RAND Corp. “Things are just not always clear-cut. What payers and health plans and policymakers and physicians are looking for are clear messages about what works and what doesn’t. And what we typically find is that there’s so much uncertainty with what the studies are actually finding.”
One organization working toward consensus standards to help guide comparative effectiveness research is the Center for Medical Technology Policy, a Baltimore-based nonprofit launched in 2008 with funding from industry, government and philanthropic sources.
“It helps getting the methodology right,” said Sean R. Tunis, MD, president and CEO of the center.
He said people trust Consumer Reports because all the products are measured against the same standards. That contrasts with, for example, studies of how to best treat a chronic wound, where dozens of measures have been used.
Whether the Patient-Centered Outcomes Research Institute, the independent entity charged with funding comparative effectiveness research, will be able to set standards that address all the stakeholders’ concerns remains to be seen, experts said.
Regulation could be another impediment to translating comparative effectiveness studies into practice.
In a separate Health Affairs article, Eleanor M. Perfetto, PhD, and colleagues noted that if comparison trials show that a drug is superior to another treatment, the drug’s manufacturer may be barred from promoting that evidence with physicians, patients and payers.
“Comparative effectiveness studies often talk about endpoints not included in the [FDA-approved drug] label,” said Perfetto, senior director of reimbursement and regulatory affairs and federal government relations at Pfizer Inc. “The company may be hesitant to communicate that information because there may be regulatory or legal repercussions.”
The FDA should issue guidelines “to permit industry to participate fully and proactively” in communicating comparative effectiveness findings, Perfetto said. The FDA did not respond to an American Medical News request for comment by this article’s deadline.
Journal articles only a start
For their part, officials at PCORI say they are aware of the challenges they face in ensuring that comparative effectiveness trial results do more than collect dust on medical library shelves. Researchers applying for PCORI grants are being asked to formulate a plan on how to disseminate their findings beyond medical journals through op-eds, social media, town hall meetings with patients and more, said Anne C. Beal, MD, MPH, the institute’s chief operating officer.
“We want people to really focus on the fact that publishing in a peer-reviewed journal is not enough, and to think actively about dissemination,” Dr. Beal said.
Twenty percent of the tax-funded trust fund set aside for the institute will be sent to the Dept. of Health and Human Services, with 80% of that amount going to the Agency for Healthcare Research and Quality to spread the word about new findings.
A key to making study results actionable in clinical practice is to compare treatments in patients who have multiple comorbidities, Dr. Beal said.
“What we want is for the patients to represent the real world,” she said. “If all the studies have been done with patients who have only one condition and nothing else, as a provider and a patient, how can I be sure the results will be meaningful to me in that particular case?”
In May, PCORI spelled out its broad research priorities, of which comparative effectiveness is just one component. Calls for proposals to compare treatments for specific conditions will be forthcoming within three months, Dr. Beal said.
The American Medical Association strongly supports comparative effectiveness research, so long as it is focused on clinical outcomes and not cost comparisons. Research should focus on high-volume, high-cost treatments where there is evidence of significant variation in practice, AMA Executive Vice President and CEO James L. Madara, MD, wrote in a March letter to the institute.
“PCORI must ensure that [comparative effectiveness research] is designed, communicated and used in ways that recognize variation in individual patients’ needs, circumstances, preferences and responses to particular therapies, rather than encouraging one-size-fits-all solutions based on population averages,” Dr. Madara wrote.