
Big Data Development Challenges: Talent, Cost, Time

Data variety, storage costs, and other key factors can make it difficult for enterprises to take advantage of their accumulated data.

Large enterprises are embracing big data management systems to better manage their growing stockpiles of information. But the deployment of cutting-edge technologies such as Apache Hadoop, MapReduce, NoSQL, and NewSQL is not without its problems.

A recent survey by database vendor RainStor of mid- to senior-level executives shows the majority of respondents understand the value of big data to their businesses. However, the speed of data creation and the variety of information types--what data management pros often refer to as "velocity and variety"--are ongoing challenges, as is the ability to efficiently analyze all of this data.

Additional concerns include the rising cost of infrastructure and data storage, and the shortage of skilled workers trained in big data technologies such as Hadoop.

RainStor conducted the survey from mid-July to early August. The respondents came from a variety of large-scale industries, including banking, communications, financial services, and manufacturing.

Three-quarters of respondents said that better management of big data helps their organizations make smarter business decisions. And yet more than a third (37.5%) also said that analyzing big data is their biggest challenge.

[ The people with control over the purse strings don't always see the value of big data. So Who Is Paying For Big Data Projects? ]

Given the unstructured nature of much of this information--which may include posts on social media sites, audio and video files, logs, and clickstream data--it's no surprise that organizations are trying to find the best methods to store and analyze it as efficiently and affordably as possible.

"Mainstream companies like a bank or telco are very interested in Hadoop, because it's become cost-prohibitive to keep volumes of data in traditional databases and data warehouses," said Rainstor VP of marketing Deirdre Mahon in a phone interview with InformationWeek.

But while Hadoop offers cost savings and is rapidly gaining adherents, it has serious shortcomings as well.

"We think Hadoop has great promise," Mahon said. "It's probably not robust or sophisticated enough to run mission-critical environments, but it's moving at a much more rapid pace than we would have predicted a year ago."

When it comes to using Hadoop to replace or augment an enterprise's current data warehouse, opinion is split pretty much down the middle: just over half (51%) of respondents want to use a Hadoop-based environment to augment their data warehouse, while 46% want Hadoop to replace their existing infrastructure.

And when enterprises that manage their big data analytics run out of storage space, where do they turn? Nearly 30% of respondents said they opt for less expensive data warehouses. Surprisingly, more than a quarter of those surveyed said they archive data to offline tape, an inexpensive but inefficient solution in a business world that increasingly values fast analysis of information. In fact, more than 12% of respondents said it can take one to two weeks (or longer) to reinstate data saved on tape for online query.

Another challenge is getting tape-archived data back into a form that users can read and query. At a recent Teradata conference, Mahon recalled meeting people whose job it was to pull older data off tape that was often sitting in a warehouse, gathering cobwebs.

While managing and analyzing big data is a top priority at large enterprises, accomplishing this task isn't easy. The survey provides one very good reason why: Existing IT staffers are typically experienced users of relational and columnar databases, but they're "not seasoned Java programmers or engineers that can easily deploy and support open source Hadoop, and then provide the analytics that the business demands."
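To make that gap concrete, here is a minimal sketch (ours, not from the survey or RainStor) of what even a trivial Hadoop analysis involves in Java: counting page hits in clickstream logs. It assumes a recent Hadoop release's mapreduce API and a hypothetical space-delimited log format in which the requested URL is the third field. A query a SQL-trained DBA would write as a one-line GROUP BY becomes a mapper, a reducer, and a job driver that must be compiled, packaged into a JAR, and submitted to the cluster.

// A minimal sketch, assuming Hadoop's org.apache.hadoop.mapreduce Java API and a
// hypothetical space-delimited clickstream log whose third field is the URL.
// It counts page hits per URL -- roughly what "SELECT url, COUNT(*) ... GROUP BY url"
// does in a relational warehouse.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ClickstreamPageHits {

    // Mapper: parse each log line and emit (url, 1) for every hit.
    public static class HitMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text url = new Text();

        @Override
        protected void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            String[] fields = line.toString().split(" ");
            if (fields.length > 2) {      // skip malformed lines
                url.set(fields[2]);       // assumption: URL is the third field
                context.write(url, ONE);
            }
        }
    }

    // Reducer: sum the counts emitted for each URL.
    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> counts, Context context)
                throws IOException, InterruptedException {
            int total = 0;
            for (IntWritable count : counts) {
                total += count.get();
            }
            context.write(key, new IntWritable(total));
        }
    }

    // Driver: configure and submit the job; input and output paths come from the command line.
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "clickstream-page-hits");
        job.setJarByClass(ClickstreamPageHits.class);
        job.setMapperClass(HitMapper.class);
        job.setCombinerClass(SumReducer.class);   // local pre-aggregation to cut shuffle traffic
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

The job would typically be launched with something like hadoop jar clickstream.jar ClickstreamPageHits /logs/clickstream /reports/page-hits (paths hypothetical), and its output lands as files in HDFS rather than rows in a table--a workflow far removed from the tooling relational-database staff are used to.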

It's readily apparent, however, that enterprises are taking big data analytics seriously.

"Big data is a fact of life, and organizations have to keep data for many more years," Mahon said. "So we're excited by the notion that enterprises are taking a more serious look at reducing their costs, because they have to keep all of this data online and queryable."

