Commentary
9/9/2013
Seth Grimes

9 Truths Lead To Big Data's Future

Eight established truths get to the heart of big data's value, and a ninth points to a future in which we synthesize and make sense of vast data stores.



Now and then I find myself thinking about the big principles of big data; that is, not about Hadoop vs. relational databases or Mahout vs. Weka, but rather about the fundamental wisdom that frames our vision of data as "the new currency." But maybe "the new oil" better describes data. Or perhaps we need a new metaphor altogether to explain data's value.

Metaphors aren't factual or provable, but they do illuminate certain truths about topics of interest. They make complex concepts understandable, much like the following quotations I've collected, which together explain basic big-data principles. I'll offer eight truths about big data -- you've surely already bought into at least a few -- ordered roughly chronologically. Then I'll look ahead at a "future truth."

1. "Correlation is not causation."

We hear this over and over (or at least I do). I learned one version of the underlying fallacy, when I was in college studying philosophy, as post hoc ergo propter hoc, or "after this, therefore because of this."


You can read a smart take in the O'Reilly Radar blog, where in "The vanishing cost of guessing," Alistair Croll observes: "Overwhelming correlation is what big data does best... Parallel computing, advances in algorithms and the inexorable crawl of Moore's Law have dramatically reduced how much it costs to analyze a data set," creating a "data-driven society [that] is both smarter and dumber." Bottom line? Be smart and respect the difference between correlation and causation. Patterns are not conclusions.
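The distinction is easy to demonstrate. In this minimal sketch (with hypothetical numbers), two series that merely share an underlying trend -- the classic ice-cream-sales and drownings example, both driven by summer heat -- come out strongly correlated even though neither causes the other:

```python
import numpy as np

# Hypothetical annual figures: both series trend upward with a little noise,
# because both respond to a third factor (warm weather), not to each other.
years = np.arange(2000, 2010)
ice_cream_sales = 100 + 5 * (years - 2000) + np.array([2, -1, 3, 0, -2, 1, 2, -1, 0, 1])
drownings = 40 + 2 * (years - 2000) + np.array([1, 0, -1, 2, 0, -1, 1, 0, 2, -1])

# Pearson correlation coefficient between the two series
r = np.corrcoef(ice_cream_sales, drownings)[0, 1]
print(f"correlation: {r:.2f}")  # very high, yet neither causes the other
```

A pattern this strong is exactly what big data surfaces cheaply; concluding causation from it is the step that requires judgment.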

2. "All models are wrong, but some are useful."

Accidental statistician George E.P. Box wrote this in his 1987 textbook, Empirical Model-Building and Response Surfaces. Box developed his thoughts on modeling, which very much apply to big data, over the length of his career. See in particular the article "Science and Statistics," published in the Journal of the American Statistical Association in December 1976.
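Box's point can be made concrete with a deliberately wrong model that is still useful. In this sketch (my own illustration, not Box's), a straight line is fit to data generated by a quadratic process; the model is wrong by construction, yet over the observed range its predictions stay close:

```python
import numpy as np

# The true relationship is nonlinear ...
x = np.linspace(0, 1, 50)
y = x**2

# ... but we fit a linear model anyway: wrong, yet useful.
slope, intercept = np.polyfit(x, y, 1)
pred = slope * x + intercept
max_error = np.max(np.abs(pred - y))
print(f"max error of linear fit: {max_error:.3f}")  # small over this range
```

The linear fit would fail badly if extrapolated beyond the observed range, which is the other half of Box's lesson: usefulness is conditional on knowing where the model applies.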

3. Big data knows (almost) all.

If you don't already, it's time to accept Scott McNealy's 1999 statement, "You have zero privacy anyway... Get over it." McNealy was cofounder and CEO of Sun Microsystems, quoted in Wired magazine. Examples of big data's growing invasiveness are plentiful: analysts' ability to infer sex and sexual orientation from social postings, and pregnancy from buying patterns; the ongoing expansion of vast, commercialized consumer-information stores held by Acxiom and the like; the rise of Palantir and Riot-ous information synthesis; the NSA Prism vacuum cleaner.

4. "80% of business-relevant information originates in unstructured form, primarily text (but also video, images, and audio)."

I wrote this in a 2008 article, although, as I said then, this factoid dates back to at least the early 1990s. It's a factoid because it is far too broadly drawn to be precise; as far as I know, it's not derived from any form of systematic measurement ever performed. Still, per statistician Box, "80% unstructured" is a useful notion, even if not precisely correct. Whatever number works for you, text and content analytics belong in your toolkit.

5. "It's not information overload. It's filter failure."

Clay Shirky made this observation at the September 2008 Web 2.0 Expo in New York. Corollaries of Shirky's filter observation are truisms such as, "More data does not imply better insights," which happens to be one I made up. But don't overdo it; avoid what Eli Pariser terms "the filter bubble," an inability to see beyond what automation makes immediate.

6. "The same meaning can be expressed in many different ways, and the same expression can express many different meanings."

So say Googlers Alon Halevy, Peter Norvig and Fernando Pereira in their touchstone March 2009 IEEE Intelligent Systems article, "The Unreasonable Effectiveness of Data." How is data's unreasonable effectiveness revealed? Via semantic interpretation of "imprecise and ambiguous" natural languages and by tackling the scientific problem of interpreting massive, aggregated content by inferring relationships via machine learning.

7. "Big data is not about the data! The value in big data [is in] the analytics."

Harvard Prof. Gary King said this, in effect spinning out the Googlers' (see quote number six) thoughts. Yet I can't completely agree with King. There is value in the business process of determining data needs and devising a smart approach to collecting and structuring the data for analysis. Analytics helps you discover that value, so my preferred formulation would be, "the value of big data is discovered via analytics."

My thinking isn't original. See, for instance, "Big Data, Analytics, and the Path from Insight to Value," by Steve LaValle, Eric Lesser, Rebecca Shockley, Michael S. Hopkins and Nina Kruschwitz in the December 2010 issue of MIT Sloan Management Review.

8. "Intuition is as important as ever."

So says Phil Simon, author of Too Big to Ignore: The Business Case for Big Data, published earlier this year. (I contributed material on text analytics and sentiment analysis.)

Simon explains, "Big data has not, at least not yet, replaced intuition; the latter merely complements the former. The relationship between the two is a continuum, not a binary." Tim Leberecht explores this same point in a June article for CNN, "Why Big Data will never beat business intuition."

Finally, these eight points lead to a future truth, an appraisal that I believe isn't yet widely understood:

9. The future of big data is synthesis and sensemaking.

The missing element from most solutions is the ability to integrate information across sources, in situationally appropriate ways, to generate contextually relevant, usable insights. I'll pull some defining quotations from an illuminating paper by design strategist Jon Kolko (admittedly applying them out of context). First, Kolko cites cognitive psychologists studying the connections between problem solving and intuition, who "reference sensemaking as a way of understanding connections between people, places and events that are occurring now or have occurred in the past, in order to anticipate future trajectories and act accordingly."

(Kolko's source is "Making Sense of Sensemaking 1: Alternative Perspectives" (2006) by Gary Klein, Brian Moon, and Robert R. Hoffman in IEEE Intelligent Systems. See also their "Making Sense of Sensemaking 2: A Macrocognitive Model.")

Kolko sees [design] synthesis as a key element, a "sensemaking process of manipulating, organizing, pruning and filtering data in an effort to produce information and knowledge." What capabilities are afforded? IBM Fellow Jeff Jonas says "general-purpose" sensemaking systems will "colocate diverse data in the same data space. Such an approach enables massively scalable, real-time, novel discovery over an ever-changing observational space."

Isn't that our big data goal, to advance from pattern detection to actionable conclusions? I hope my nine truths have helped you understand the path.

Seth Grimes is the leading industry analyst covering text analytics and sentiment analysis. He founded Washington-based Alta Plana Corporation, an IT strategy consultancy, in 1997. Follow him on Twitter at @SethGrimes.

