Healthcare Data Modeling Gets Hadoop Boost - InformationWeek


News
10/29/2012

Healthcare Data Modeling Gets Hadoop Boost

Healthcare firm Archimedes uses Hadoop and Univa software to streamline its modeling operations.

San Francisco-based Archimedes has been modeling healthcare data for two decades. The company's Archimedes Model runs on a distributed network and calculates the effects of interventions -- screening and diagnostic tests, drugs, prevention programs and so on -- on patient health, quality of life, financial costs and other potential outcomes. Its simulations are designed to answer complex yet practical medical questions for healthcare providers, researchers, pharmaceutical companies and other organizations in the U.S. and Europe.

Archimedes recently implemented Hadoop and Univa's Grid Engine software to speed up its healthcare modeling system, and to cut its hardware and software costs by up to 50%. Here's how it did it:

The Archimedes Model began as an in-house project at Kaiser Permanente, where it ran on Univa's Grid MP distributed computing software. The project ran in "cycle-stealing mode" on thousands of Kaiser PCs during idle periods. The approach was similar to (but far smaller than) volunteer computing projects popular at the time, such as SETI@home and Folding@home.

"We used the same technology where you install it on everybody's machine, and when the machine was idle we used those spare cycles," said Katrina Montinola, Archimedes VP of engineering, in an interview with InformationWeek.

[What is the next big data challenge? Marketers Flooded With Big Data From Mobile. ]

Archimedes' scientists ran simulations as consulting projects for healthcare and life science clients, who received the results as an Excel spreadsheet. Each project typically lasted several weeks.

The Archimedes Model later received its own dedicated cluster of about 50 multi-core, rack-mounted servers, which saved considerable time compared with the cycle-stealing approach.

In 2006, Kaiser spun off Archimedes as a separate company. "This allowed us to use the model to help other healthcare providers and pharmaceutical companies, and to apply our model to many different applications," Montinola said.

Around the same time, Montinola set to work on improving the Archimedes Model, which was showing signs of age. "Being a research project, the simulator wasn't designed very well," she said. "So I hired a team to totally rewrite the whole thing in Java, and to re-architect and redesign it."

The company also developed ARCHeS, a Web interface for the Archimedes Model that enabled clients to run their own simulations and view the results without assistance from Archimedes' experts.

ARCHeS simulation results contain a lot of data, around 1 GB, and may include hundreds of data points for thousands of patients per year over a multi-year period.

"The process of aggregating, preparing, processing and loading the data was taking a long, long time because it was a large data set," said Montinola. "Now that our simulator was fast, it was ironic that the aggregation of the data was the bottleneck."

To clear the bottleneck, Archimedes implemented Hadoop and built Aggregator, a software program that aggregates simulator output and performs calculations faster than the company's previous systems. It also enlisted Univa Grid Engine, a distributed resource management (DRM) system that lets a single cluster run both Archimedes' simulator and its Aggregator tools.
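Archimedes hasn't published the Aggregator's source, but the workload it describes -- fanning per-patient simulator output across a cluster and boiling it down to summary statistics -- maps naturally onto Hadoop's MapReduce model. The sketch below, in Java (the language the simulator was rewritten in), is purely illustrative: it assumes comma-separated input rows of the hypothetical form patientId,year,metric,value and averages each metric per simulation year.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class OutcomeAggregator {

    // Map: emit (year:metric, value) for each simulated patient record.
    public static class ParseMapper
            extends Mapper<LongWritable, Text, Text, DoubleWritable> {
        @Override
        protected void map(LongWritable offset, Text line, Context ctx)
                throws IOException, InterruptedException {
            String[] f = line.toString().split(",");
            if (f.length != 4) return;                 // skip malformed rows
            try {
                ctx.write(new Text(f[1] + ":" + f[2]), // key: year:metric
                          new DoubleWritable(Double.parseDouble(f[3])));
            } catch (NumberFormatException ignored) {  // skip header lines
            }
        }
    }

    // Reduce: average each metric across all patients for that year.
    public static class AvgReducer
            extends Reducer<Text, DoubleWritable, Text, DoubleWritable> {
        @Override
        protected void reduce(Text key, Iterable<DoubleWritable> vals, Context ctx)
                throws IOException, InterruptedException {
            double sum = 0;
            long count = 0;
            for (DoubleWritable v : vals) {
                sum += v.get();
                count++;
            }
            ctx.write(key, new DoubleWritable(sum / count));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "outcome-aggregator");
        job.setJarByClass(OutcomeAggregator.class);
        job.setMapperClass(ParseMapper.class);
        job.setReducerClass(AvgReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(DoubleWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // raw simulator output
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // aggregated results
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Because each record is parsed independently, the map phase parallelizes across the whole cluster, which is what turns a long, serial aggregation pass into a short, distributed one.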

The company's Hadoop system went live in September. Archimedes estimates that Grid Engine has cut its migration-related hardware and software costs by up to 50% thus far.
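Grid Engine accepts work through its qsub command line and through DRMAA, the standard distributed resource management API for which Grid Engine ships Java bindings. As a minimal sketch -- the script path and job name here are hypothetical, not Archimedes' actual setup -- submitting one simulator run and waiting for it to finish looks roughly like this:

import org.ggf.drmaa.DrmaaException;
import org.ggf.drmaa.JobInfo;
import org.ggf.drmaa.JobTemplate;
import org.ggf.drmaa.Session;
import org.ggf.drmaa.SessionFactory;

public class SimulationSubmitter {
    public static void main(String[] args) throws DrmaaException {
        Session session = SessionFactory.getFactory().getSession();
        session.init(null);                          // connect to the default DRM

        JobTemplate jt = session.createJobTemplate();
        jt.setRemoteCommand("/opt/archimedes/run_simulation.sh"); // hypothetical path
        jt.setJobName("sim-run");                                 // hypothetical name

        String jobId = session.runJob(jt);           // queue one simulator run
        System.out.println("Submitted job " + jobId);

        // Block until the scheduler reports the job finished.
        JobInfo info = session.wait(jobId, Session.TIMEOUT_WAIT_FOREVER);
        if (info.hasExited()) {
            System.out.println("Job " + info.getJobId()
                    + " exited with status " + info.getExitStatus());
        }

        session.deleteJobTemplate(jt);
        session.exit();
    }
}

Submitting simulator runs and Aggregator jobs through the same scheduler is what allows a single cluster to serve both workloads.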

Montinola is pleased with the new system, but believes a little tinkering will make it even more efficient. "I'm looking forward to that," she said.

Data analytics can help the notoriously inefficient U.S. healthcare industry become more cost-effective, Montinola believes. "That's what's been lacking in healthcare all these years: a focus on improving outcomes while keeping costs in check," she said.

Big data solutions can help healthcare providers determine "the most cost-effective treatments that will improve the outcomes of the population they serve," Montinola added.
