Can Data-Powered Comparative Effectiveness Research Save Healthcare? - InformationWeek


Paul Cerrato


Mounting evidence suggests CER will deliver new, cost-effective treatment options. But at least one controversial problem needs to be resolved first.

With so much emphasis from government and private insurers on the need to lower the cost of medical care, comparative effectiveness research (CER) has come into its own. CER aims to compare two or more existing treatment regimens to determine which are most cost-effective. Since so many sophisticated software tools are now available to help facilitate such research, healthcare IT executives need to stay well-informed about the strengths and limitations of CER.

In the past, I've written about Clinical Query, a searchable patient data repository used by Boston's Beth Israel Deaconess Medical Center to facilitate CER. The database was launched last year to let researchers and clinicians look for potential connections between diseases, treatment options, and risk factors, which in turn can become the jumping-off point for a research project.

If a Harvard researcher wants to compare the benefits of diuretics to ACE inhibitors among patients with hypertension, for instance, he or she can use Clinical Query to look at the records of more than 2 million patients and 200 million data points, including diagnoses, medications taken, lab values, and radiology images.


A comparison of data on the two classes of high blood pressure meds might reveal that one is more effective than the other. And while the results of that CER analysis may not carry the same weight as a randomized clinical trial in which groups of patients are actually given the drugs in real time to see which was more effective, the CER results can still guide clinicians on treatment options for their patients.

A CER Network Could Transform Medicine

During a recent conversation, John Halamka, MD, CIO at Beth Israel Deaconess, pointed out that Clinical Query is just the beginning of a much more ambitious attempt to aggregate not only the 2 million patient records in its own system, but also the tens of millions of records held by major healthcare systems nationwide.

"For comparative effectiveness research, you may need 10 million, 20 million patients," Halamka said. "So wouldn't it be much better if you had a CER network, where Stanford, UCLA, Harvard and Mayo Clinic all decided to share [de-identified] patient data?" Grants from the Patient-Centered Outcomes Research Institute (PCORI), a federally sponsored agency, are going out to various organizations to turn this proposed network into a reality.

In April, PCORI laid out its grand vision of creating a National Patient-Centered Clinical Research Network to help improve CER. At the same time, it announced a funding program to support the network.

PCORI's vision has huge potential for improving clinical practice. One of the current shortcomings of clinical research is that much of it is limited by the small number of patients enrolled in each study. In fact, several potentially valuable treatment options have been discarded because investigators were unable to detect a statistically significant difference between options A and B. Many of these investigations were guilty of what's referred to as a Type II error, in which a treatment regimen is deemed useless simply because the number of patients being evaluated was too small to detect a real therapeutic effect.

More than 25 years ago, a critique found that 71 "negative" studies published in respected medical journals had prematurely condemned potentially valuable treatments: the studies had enrolled too few subjects to justify the conclusion that the treatments were useless. Decades later, a second analysis revealed researchers were still making the same mistake. A JAMA review found that 383 randomized controlled trials (RCTs) were not large enough to detect a 25% to 50% difference between the experimental and control groups. Studies that take advantage of a network that includes millions of patients are far less likely to fall into that trap.
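The sample-size trap described above can be made concrete with a standard back-of-the-envelope power calculation. This is a minimal sketch, not an analysis from the article: it uses the usual normal-approximation formula for comparing two proportions, and the 50% versus 62.5% response rates are hypothetical numbers chosen to represent a 25% relative difference between treatments.

```python
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Approximate patients needed per arm to detect a difference
    between two response proportions (two-sided test, normal
    approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for alpha
    z_b = NormalDist().inv_cdf(power)           # critical value for power
    p_bar = (p1 + p2) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return num / (p1 - p2) ** 2

# Hypothetical trial: detect a 25% relative improvement (50% -> 62.5%)
print(round(n_per_arm(0.50, 0.625)))  # about 246 patients per arm
```

A trial enrolling, say, 50 patients per arm would stand little chance of detecting even this sizable effect, which is exactly the Type II error pattern the two critiques documented; a multimillion-patient CER network makes such underpowered comparisons far less likely.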

Massive Databases Don't Guarantee Success

A massive network of EMR-derived clinical data would be invaluable, but large numbers aren't enough. A database like this can serve as the starting point for a powerful observational study that could reveal, for example, that 10,000 patients taking penicillin for strep throat fared better than an equivalent number of patients taking a more expensive antibiotic. But such correlations don't establish a cause and effect relationship. Randomized controlled trials are much better at that.
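One reason such correlations mislead is confounding by indication: if sicker patients tend to be given the newer, more expensive drug, the cheaper drug will look better even when the two are equally effective. The simulation below is a hypothetical sketch; the assignment rule and outcome model are invented for illustration and do not come from the article.

```python
import random

random.seed(0)

patients = []
for _ in range(10_000):
    severity = random.random()  # 0 = mild, 1 = severe
    # Confounding by indication (hypothetical rule): the sicker the
    # patient, the more likely the doctor prescribes the newer drug.
    drug = "expensive" if random.random() < severity else "penicillin"
    # Outcome depends ONLY on severity -- the drugs are identical here.
    recovered = random.random() > severity * 0.5
    patients.append((drug, recovered))

def recovery_rate(drug):
    outcomes = [rec for d, rec in patients if d == drug]
    return sum(outcomes) / len(outcomes)

print(recovery_rate("penicillin"), recovery_rate("expensive"))
```

Even though the two drugs are identical by construction, the naive comparison shows a large recovery-rate gap in penicillin's favor, because the penicillin group skews mild. Randomization breaks exactly this link between severity and treatment assignment, which is why RCTs remain the stronger tool for causal claims.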

The other danger in putting too much faith in large CER studies that rely on EMR data is summed up by Tomas Philipson of the University of Chicago and Eric Sun of Stanford University. Their report, Blue Pill or Red Pill: The Limitations of Comparative Effectiveness Research, acknowledges that CER "measures the effects of different drugs or other treatments on a population, with the goal of finding out which ones produce the greatest benefits for the most patients." It then quotes President Obama's comment: "If there's broad agreement … [that] the blue pill works better than the red pill… and it turns out the blue pills are half as expensive as the red pill, then we want to make sure that doctors and patients have that information available to them."

The report goes on to explain that a 2005 CER analysis found that there was little difference in the effectiveness of older, less-expensive antipsychotic drugs compared to more expensive second-generation agents. The 2005 analysis concluded that only paying for the cheaper medications would save $1.2 billion. But the CER analysis had a fatal flaw: It looked only at the effects of the two groups of drugs on an average patient. As the Philipson and Sun critique points out: "…individuals differ from one another and from population averages. Therefore, what may be on average a 'winning' therapy may simply not work for a large number of patients. Conversely, a drug that is less effective on average may still be the best, or only, choice for a sizable proportion of patients."

Philipson and Sun conclude that paying only for the cheaper drugs would have resulted in "worse mental health for many thousands of people, resulting in higher costs to society that would equal or outweigh any savings in Medicaid costs."
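Philipson and Sun's average-patient objection reduces to simple arithmetic. The sketch below uses invented numbers, not data from the 2005 analysis: a hypothetical population in which 25% of patients do not respond to drug A, so drug A "wins" on average while drug B is the only effective option for that subgroup.

```python
# Hypothetical population shares and per-subgroup response rates.
share = {"typical": 0.75, "nonresponders_to_A": 0.25}
response = {
    "A": {"typical": 0.70, "nonresponders_to_A": 0.10},
    "B": {"typical": 0.40, "nonresponders_to_A": 0.90},
}

def average_response(drug):
    # Population-average effectiveness: weight each subgroup's
    # response rate by its share of the population.
    return sum(share[g] * response[drug][g] for g in share)

print(round(average_response("A"), 3))  # 0.55  -> A "wins" on average
print(round(average_response("B"), 3))  # 0.525 -> B loses on average
```

A payer that covered only drug A on the strength of the population average would leave a quarter of patients with a 10% response rate instead of 90%, which is precisely the "higher costs to society" scenario the critique describes.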

The data that electronic health systems are creating will have a profound effect in shaping healthcare reform. Using that data well will depend on a deeper understanding of CER's strengths and weaknesses.

Jay Simmons, InformationWeek Contributor
7/1/2013 | 4:05:47 AM
re: Can Data-Powered Comparative Effectiveness Research Save Healthcare?
I agree; this approach just seems right for the healthcare industry. There is so much data out there that we can't keep limiting ourselves to a small sample of the population when testing the effects of different drugs and treatment plans. Not everyone has the same genetic makeup, so not everyone will react the same way, but if we collect and analyze enough information we will be that much closer to creating personalized solutions for these patients.
6/27/2013 | 9:00:33 PM
re: Can Data-Powered Comparative Effectiveness Research Save Healthcare?
The risks seem pretty manageable, really -- don't rely on the "average" patient. The whole push toward genetic and personalized medicine demands that kind of sophistication, since drugs that don't work for 90% of the population might work splendidly for the 10% with the right genetic makeup. Good caution about the risk, but this approach still sounds like a must-do for the industry.