Businesses 'Freak Out' Over Big Data

Mortar Data CEO K Young says many companies struggle to keep up with the infrastructure and expertise big data requires. Does your enterprise have the resources it needs to take full advantage of big data?

Jeff Bertolucci, Contributor

August 3, 2012

The term "big data" is bandied around a lot these days, but what does it really mean? Are today's data processing tools and technicians up to the task of processing large--and growing--sets of unstructured data? And are many organizations missing the big data revolution because they lack the resources to take advantage of it?

These are just a few of the concerns of Mortar Data co-founder and CEO K Young, who provided an interesting and opinionated take on the state of big data in a recent blog post entitled "The Big Big Data Freak-Out of 2012."

Young is qualified to opine on the topic. His company, a New York City-based startup, is a Hadoop-in-the-cloud service that allows you to use the Python and Pig programming languages to write Hadoop jobs directly in a browser. Mortar Data's goal is to make big data processing accessible to organizations that may not be able to afford the hardware, software, and staffing costs of an in-house Hadoop system.
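
To make that concrete, here is a minimal sketch of the kind of Pig job such a service runs; the input path and field names are hypothetical illustrations, not taken from Mortar Data's product:

    -- Hypothetical input file; the schema is declared when the data is read
    lines  = LOAD 'input/pages.txt' USING TextLoader() AS (line:chararray);
    -- Split each line into words, then count how often each word appears
    words  = FOREACH lines GENERATE FLATTEN(TOKENIZE(line)) AS word;
    byword = GROUP words BY word;
    counts = FOREACH byword GENERATE group AS word, COUNT(words) AS n;
    STORE counts INTO 'output/wordcount';

Pig compiles a script like this into a series of MapReduce jobs behind the scenes, which is why a few lines of Pig Latin can stand in for a much longer hand-written Hadoop program.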

So who's freaking out and why?

"In short, we’re all freaking out because old bottlenecks recently got shattered, the new bottlenecks are us and our existing tools, and mad riches are visible just over the horizon," Young writes. And not just monetary riches, but also the kind associated with using big data to help cure a variety of social ills.

The "old bottlenecks" included the inability to affordably process massive volumes of data. Supercomputers could handle this, of course, but they were beyond the means of all but a few organizations. Hadoop, despite being difficult to use, fixed this bottleneck by enabling data-intensive distributed applications on conventional hardware.

Another former bottleneck was what Young calls "variety"--the need to combine a hodgepodge of data sources. Hadoop and NoSQL stores solved this dilemma by supporting unstructured data (e.g., images and video) and read-time schemas. Real-time processing systems such as Twitter's Storm, alongside Hadoop and NoSQL stores, supplied the tools needed for high-speed data processing.
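
"Read-time schemas" means structure is imposed when the data is queried rather than when it is written. A minimal Pig sketch of the idea, with a hypothetical log file and field names:

    -- Raw tab-delimited log; no schema was enforced when it was written
    events = LOAD 'logs/events.tsv' USING PigStorage('\t')
             AS (user_id:chararray, ts:long, action:chararray);
    -- The same raw file could be reloaded tomorrow with a different schema
    clicks = FILTER events BY action == 'click';
    STORE clicks INTO 'output/clicks';

Because the schema lives in the query rather than the file, one raw data set can serve many different readers, which is exactly what makes combining a hodgepodge of sources practical.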

But with the old problems out of the way, two new ones popped up: 1) Hadoop is too hard to use, and 2) there's a shortage of data scientists capable of extracting meaning from all this information being collected.

The first problem may take years to solve, Young estimates, as Hadoop is undergoing a lot of innovation and won't become a mature, easy-to-use technology for some time.

"I think we've done a really good job of making it a lot easier, but there's still more work to do there," Young told InformationWeek this week in a phone interview. "It's still limited to people who have a technical background."

The second bottleneck may prove trickier to solve. Young points to a 2011 study by the McKinsey Global Institute that says the U.S. could face a shortage of up to 190,000 data scientists by 2018. "We provide a platform where our users create their data pipelines, and they need data scientists in order to construct meaningful data pipelines," Young said.

The Big Big Data Freak-Out happens when organizations are segregated into two distinct classes: the "Hadoops and the Hadoop-Nots," according to Young. Major companies such as Walmart, LinkedIn, and Sears have implemented Hadoop successfully, but other groups, including nonprofits, government agencies, and mid-sized companies saddled with legacy infrastructure, lack the resources to do so.

"On top of that, these companies also lack the data scientists necessary to extract meaning from the data. So they feel like they’re drowning in big data and watching the rescue boat slowly drive away," Young writes.

On the plus side, smaller startups that aren't saddled with legacy hardware or software can get started with Hadoop right now, he says.

About the Author(s)

Jeff Bertolucci

Contributor

Jeff Bertolucci is a technology journalist in Los Angeles who writes mostly for Kiplinger's Personal Finance, The Saturday Evening Post, and InformationWeek.
