But the very foundation upon which ACOs are built could be shaky, making software tools only so effective. Let me explain.
Consider a Medicare Shared Savings Program ACO. It's a group of doctors, hospitals, and other healthcare providers that has agreed to work together to deliver high-quality care to at least 5,000 Medicare beneficiaries for at least three years. To obtain government financial rewards, the ACO has to report on 33 quality standards and show improvement in 32 of them within those three years. In a nutshell, if the ACO can prove that the cost of caring for these ACO patients is less than what the Centers for Medicare & Medicaid Services (CMS) would anticipate under the standard fee-for-service model, the ACO providers get to share in some of the savings.
CMS breaks the quality measures into four categories: patient and caregiver experience; care coordination and patient safety; preventive health; and caring for at-risk populations.
In the care coordination and patient safety category, ACOs must report readmission rates for all conditions during the first and second years of the program, and in year three they must show evidence that they've lowered that rate. In the preventive health category, in year one of the program they must report the number of flu and pneumonia vaccinations they've administered; the number of depression, colorectal cancer, and mammography screenings they've done; and the number of patients who use tobacco--and then show that they've improved those numbers in years two and three.
Some ACO quality data is easy to collect. Consider the performance standard for measuring tobacco use. Most EHRs already track that, and the data can be easily gathered using tools such as Microsoft Excel and SAP Crystal Reports.
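Even without a reporting tool, the tobacco-use measure reduces to a simple aggregation over exported patient records. Here's a minimal sketch of that calculation in Python; the field names and status values are hypothetical, not taken from any real EHR schema:

```python
# Hypothetical EHR export: each record carries a tobacco-screening result.
# A status of None means the patient was never screened.
records = [
    {"patient_id": 1, "tobacco_status": "current_user"},
    {"patient_id": 2, "tobacco_status": "never"},
    {"patient_id": 3, "tobacco_status": "former"},
    {"patient_id": 4, "tobacco_status": None},  # not screened
]

screened = [r for r in records if r["tobacco_status"] is not None]
users = [r for r in screened if r["tobacco_status"] == "current_user"]

screening_rate = len(screened) / len(records)  # share of patients screened
use_rate = len(users) / len(screened)          # share of screened patients who use tobacco
print(f"screened: {screening_rate:.0%}, tobacco users: {use_rate:.0%}")
# prints "screened: 75%, tobacco users: 33%"
```

The same two counts are what an Excel pivot table or a Crystal Reports summary would produce; the point is that no specialized ACO software is required for measures this simple.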
The standards on hospital readmissions, on the other hand, pose a bigger challenge. In addition to the overall hospital readmission rates, ACOs face specific requirements for ambulatory-sensitive conditions such as congestive heart failure. They must report the admission rate in the first year for those conditions, and in years two and three meet a specific benchmark.
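To see why readmission tracking is harder, consider what even a stripped-down 30-day all-cause readmission calculation involves: linking each admission to the same patient's previous discharge and checking the gap between them. The sketch below is a simplified illustration only; CMS's actual measures add risk adjustment, planned-readmission exclusions, and condition-specific logic, and the field names here are hypothetical:

```python
from datetime import date

# Hypothetical admission records (patient, admit date, discharge date).
admissions = [
    {"patient": "A", "admit": date(2013, 1, 2),  "discharge": date(2013, 1, 6)},
    {"patient": "A", "admit": date(2013, 1, 20), "discharge": date(2013, 1, 25)},  # within 30 days
    {"patient": "B", "admit": date(2013, 2, 1),  "discharge": date(2013, 2, 4)},
    {"patient": "B", "admit": date(2013, 5, 1),  "discharge": date(2013, 5, 3)},   # more than 30 days later
]

readmissions = 0
index_stays = 0
last_discharge = {}  # most recent discharge date per patient
for a in sorted(admissions, key=lambda a: a["admit"]):
    prev = last_discharge.get(a["patient"])
    if prev is not None and (a["admit"] - prev).days <= 30:
        readmissions += 1  # admitted within 30 days of a prior discharge
    else:
        index_stays += 1   # counts as a new index admission
    last_discharge[a["patient"]] = a["discharge"]

print(f"readmission rate: {readmissions / index_stays:.0%}")
# prints "readmission rate: 33%"
```

Doing this across an entire ACO means reconciling admission feeds from multiple hospitals and post-acute facilities, which is where dedicated software earns its keep.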
Some healthcare providers have turned to vendors such as Curaspan Health Group to track admission and readmission stats. Curaspan uses what it refers to as a "patient transition network" that monitors exchanges between hospitals and post-acute-care providers using its proprietary DischargeCentral software. The software generates reports that tell hospitals which post-acute-care facilities and clinicians aren't responding to requests for them to see referred patients or are responding too slowly. Once a hospital knows which clinicians or facilities are dragging their feet, it can take action to correct the problem, reducing the risk of their patients needlessly being readmitted.
These are just two examples of IT's ability to meet ACO standards. But if genuine accountability is your goal, technology gets you only halfway there. Providers need a proven structural model to make an ACO cost effective, and at least one prominent thought leader questions whether the current approach will do the trick.
In a recent Journal of the American Medical Association editorial, Donald M. Berwick, MD, the former administrator of CMS and one of the architects of the ACO model, says: "The accountable care organization is a guess." Put another way, the government is guessing that this model will in fact lower costs while improving patient care.
Part of the problem is the shaky research foundation upon which the model was built. It rests in part on the Physician Group Practice Demonstration (PGPD), a series of experiments spearheaded by CMS. During that five-year project, CMS rewarded medical groups for improving clinical outcomes by concentrating on better care coordination and smoother transitions of care between facilities for patients with chronic diseases.
Recent analyses of PGPD have been disappointing. Berwick points out in the JAMA editorial that the medical practices were able to demonstrate "widespread gains in healthcare quality ... but only inconsistent and generally small effects on cost."
Equally disturbing: A more recent analysis suggests that PGPD practices may have been gaming the system by "overcoding." Compared with a control group of practices, the PGPD doctors were more likely to submit codes indicating that their patients' illnesses were more serious, which in turn made it easier to claim clinical improvements and savings once those patients were treated.
Berwick offers a stinging reaction to this finding: "Neither patients nor the nation are well served when administrative manipulations masquerade as changes in care. What is needed is better care, not better coding." Amen to that!