Commentary
4/26/2011 09:18 AM
Doug Henschen

Databases Alone Can't Conquer Big Data Problems

Data-integration and data-transformation steps taken before loading the database are sometimes the best antidote to high-volume storage and scaling challenges.



When it comes to making sense of big data, the glory is hogged by database platforms such as EMC Greenplum, IBM Netezza, Oracle Exadata, and Teradata.

But sometimes simple data processing work done outside of the database can help you scale and eliminate hours or even days of processing on expensive database platforms.

Marketing data provider comScore, for example, uses data-sorting techniques to improve compression and aggregate data before it even gets to the warehouse. As a result it's saving on storage, reducing processing times, and, most importantly, speeding information to its data-hungry customers.

As I detailed late last year, comScore has been a leading source of online marketing data for more than a decade. As such it was a pioneer of big-data computing. At 150 terabytes, the company's latest Sybase IQ warehousing platform sounds big, but it would have to be many times larger if not for the company's skill at compressing and aggregating data.

The big-data leagues span from the tens of terabytes into the petabytes. That's when it becomes essential to add the power of massively parallel processing (MPP), used by most of the leading platforms, or the compression advantages of column-oriented databases (such as Sybase IQ and HP Vertica). But organizations playing at this scale also have to manage big data before it gets into the database.

To give you some idea, comScore tracks the daily Internet surfing (and mobile-access) habits of about 2 million consumer panelists who have registered and supplied their demographic and psychographic profiles. The company also takes a daily census of activity across the Internet so it can report on and compare Internet-wide behavior to that of targeted segments tracked through the panel data. As a result, comScore collects about 2 billion new rows of panel data and more than 18 billion new rows of census data each day.

That means more than 20 billion rows of new data are loaded into the data warehouse each day. Of course, almost every organization will apply compression to reduce storage demands. But comScore also uses Syncsort DMExpress data-integration software to sort and bring alphanumeric order to the data before it's loaded into the warehouse. This improves compression ratios.

Where 10 bytes of unsorted data can be compressed to three or four bytes, says Michael Brown, comScore's chief technology officer, 10 bytes of sorted data can typically be crunched down to one byte. "That makes a huge difference in the volume of data we have to store, and it streamlines our processes and reduces our capital costs," Brown says.
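Brown's ratios are specific to comScore's data, but the underlying effect is easy to demonstrate. Here's a minimal Python sketch, using synthetic clickstream-style rows and the standard-library zlib compressor (it has nothing to do with DMExpress), that compresses the same rows before and after sorting:

```python
import random
import zlib

# Synthetic clickstream-style rows: a handful of URLs repeated many
# times, tagged with a date and a panelist ID.
urls = [b"facebook.com", b"gmail.com", b"nytimes.com", b"cnn.com"]
rows = [random.choice(urls) + b",2011-04-26,panelist-%d" % random.randrange(50)
        for _ in range(100_000)]

unsorted_blob = b"\n".join(rows)
sorted_blob = b"\n".join(sorted(rows))

for label, blob in (("unsorted", unsorted_blob), ("sorted", sorted_blob)):
    ratio = len(blob) / len(zlib.compress(blob, 9))
    print(f"{label}: {ratio:.1f}x compression")
```

Sorting puts identical and near-identical rows next to each other, so the sorted copy forms long repeated runs that dictionary-based compressors exploit far more effectively than the same rows in arrival order.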

These days, when faced with big data, companies are used to throwing low-cost storage and MPP horsepower at the problem. But comScore got its start way back in 1999, when disk capacities were measured in the tens of gigabytes (not terabytes) and before the likes of Netezza and Greenplum (founded in 2003 and 2004, and acquired last year by IBM and EMC, respectively) were even around.

ComScore has used DMExpress since 2000, when processing power and storage were still quite costly. The product also supports high-volume extract, transform, and load (ETL) work, but these days it's marketed as "data integration acceleration software," designed to be used in conjunction with more popular integration suites such as Informatica PowerCenter and IBM InfoSphere. ComScore uses DMExpress only for sorting, filtering, and aggregation, and it uses database-native capabilities, rather than yet another ETL package, for data loading.

In another example of doing the big-data heavy lifting before data enters the database, comScore uses DMExpress to aggregate the thousands of new records collected from each of its 2 million panelists each week. The first step is to sort the sites visited by URL so that comScore's processing-intensive taxonomy, used to categorize Web sites, has to be invoked only when the URL changes.

Instead of classifying the 40 sites a panelist might have visited one by one, in the order they were visited, the records are grouped by site, so the same activity reduces to, say, three entries: 20 visits to Facebook, 12 to Gmail, and eight to The New York Times, all listed in a row. "That saves a lot of CPU time and a lot of effort," Brown says.
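In code, the pattern is a simple sort-then-group. Here's a minimal Python sketch with a hypothetical classify() function standing in for comScore's taxonomy lookup; the sample sites and categories are illustrative, not comScore's:

```python
import random
from itertools import groupby

def classify(url):
    """Stand-in for an expensive, per-call taxonomy lookup (hypothetical)."""
    return {"facebook.com": "social networking",
            "gmail.com": "e-mail",
            "nytimes.com": "news"}.get(url, "other")

# One panelist's 40 visits, interleaved in the order they happened.
visits = ["facebook.com"] * 20 + ["gmail.com"] * 12 + ["nytimes.com"] * 8
random.shuffle(visits)

# Sorting groups repeat visits together, so classify() runs only when
# the URL changes: three calls here instead of 40.
for url, group in groupby(sorted(visits)):
    print(url, classify(url), sum(1 for _ in group), "visits")
```

The savings scale with repetition: the classifier is paid for once per distinct URL rather than once per visit, which is exactly why the sort is worth doing first.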

Put into production in 2009, this sorting step let comScore process daily panel updates in seven hours where it used to take 24. And monthly updates are now delivered on the fifth of the month instead of the 15th. "That's a big win for the business because our customers can get a much quicker understanding of how their campaigns are performing," Brown says.

As I recently reported in "IT And Marketing in the Digital Age," this is exactly the kind of low-latency information marketers now demand from suppliers and the kind of efficiency they're seeking internally.

Not every company operates at comScore's scale, but the lesson is that not every big-data challenge is best left to the high-powered database platform to solve. Sorting, filtering, aggregation, and transformation steps can streamline data before it gets to the data warehouse, saving CPU cycles and storage space before and after the crucial data-loading stage.

Nine times out of 10 when I hear about big data, it's all about the database and the analytics. But I'm increasingly hearing about all the work that takes place even before big data gets moved into warehouses. If you have lessons learned or smart shortcuts you can share, send me an email note. I've placed a big-data-integration feature story in the queue for this fall, and I'm looking for good customer examples.
