In a world increasingly dependent on software, we may need to develop the art of falling back to the most recent system that worked.

Charles Babcock, Editor at Large, Cloud

August 2, 2017


It was 1986, and as a relatively new resident of New Jersey, I looked over our auto registration renewal that had just come in the mail. Then I did a double take. The form had our name and address listed correctly, but the vehicle to be re-registered was someone else's.

I was a wet-behind-the-ears technology journalist, but even I could see the implications. The database system sending out registration notices was malfunctioning; at least some vehicle owners were being asked to register someone else's car. With all the information on the form correct except for the vehicle, it seemed to me this might be a modern relational database system at work. Only a relational database could mix and match information in supposedly foolproof ways. And here was an example of what happened when something in such an application went wrong.

An older hierarchical or network system, such as Cullinet's IDMS or IBM's IMS, couldn't retrieve information out of order. If the correct car and owner data went into the system, they'd come back out together. That might be the only thing the system could do, but its inflexibility meant it would get that much right.
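
To make that difference concrete, here is a minimal, purely illustrative sketch (Python with SQLite, an invented schema, nothing like the actual ADR application) of how a relational join recombines rows. The join the designers intended reunites each owner with his or her own car; a subtly wrong join condition still produces tidy, well-formed notices, but pairs every owner with someone else's vehicle.

    import sqlite3

    # Hypothetical registration data; the table layout is invented for illustration.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE owners   (owner_id INTEGER PRIMARY KEY, name TEXT, address TEXT);
        CREATE TABLE vehicles (plate TEXT PRIMARY KEY, owner_id INTEGER, description TEXT);
        INSERT INTO owners   VALUES (1, 'C. Babcock', '12 Elm St.'), (2, 'A. Neighbor', '34 Oak Ave.');
        INSERT INTO vehicles VALUES ('NJ-111', 1, '1984 sedan'), ('NJ-222', 2, '1986 pickup');
    """)

    # The intended join: each notice carries the owner's own vehicle.
    correct = conn.execute(
        "SELECT o.name, o.address, v.description "
        "FROM owners o JOIN vehicles v ON v.owner_id = o.owner_id"
    ).fetchall()

    # One flipped comparison in the join condition and every notice is still
    # neatly formatted: name and address right, vehicle belonging to a stranger.
    mismatched = conn.execute(
        "SELECT o.name, o.address, v.description "
        "FROM owners o JOIN vehicles v ON v.owner_id <> o.owner_id"
    ).fetchall()

    print("correct notices:   ", correct)
    print("mismatched notices:", mismatched)

The hierarchical systems of the day offered no such freedom to recombine records, which is exactly why they couldn't make this particular mistake.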

Indeed, N.J.'s Department of Motor Vehicles was using a brand-new system built on the Datacom/DB relational database, and it was misfiring. A consultant had used Applied Data Research's database and its Ideal fourth-generation language to build the application, and had botched the job. It was a regrettable black eye for ADR, which had good technology but wasn't able to prevent it from being misapplied.

ADR was located in Princeton, N.J., and as the New York correspondent for Computerworld, I listened as CEO Martin Goetz faced the storm of questions and answered honestly, regardless of the consequences. Later that year, the firm he had painstakingly built up over 27 years went on the auction block and was acquired by Ameritech, and then, two years later, by Computer Associates. Its market value had been affected by events, I'm sure, and it had to be a disappointing outcome for Goetz, a true industry pioneer.

At a time when relational systems, with their ad hoc queries, weren't fully trusted, becoming the poster child for a malfunctioning database was bound to have debilitating consequences. Being caught exposed on the leading edge was a major hazard of the independent software business.

Despite occasional setbacks, the software industry as a whole prospered and grew. I was surprised at the severe limits of what computers could do when I discovered them in an introductory Fortran course. Gradually my own limited point of view opened up to where I could see how software was constantly capturing more and more of the reality around us and putting that information to work. Whatever rules and relationships could be captured in code could also be manipulated with variables and processed with the computer's Boolean logic.

It took longer to see how the most skilled practitioners kept pushing back the limits of what could be represented, capturing greater and greater complexities. Whenever one set of goals had been achieved, they moved on to the next level of abstraction.

We now create virtual realities through which a person's digital stand-in, or avatar, can act on behalf of its owner. In 1984, when I started in this business, such a thing was science fiction. Digital assistants learn from our patterns what we like to eat, what we tend to buy and how we get to work. Machine learning builds giant compendiums of data on the constant operation of machines, data that, when analyzed, can keep engines running and plants open. Cognitive computing can take many different types of sensory data and merge them into something that at least vaguely resembles the way humans perceive their world and decide what in it to respond to.

IBM's Watson can not only win at Jeopardy but also draw on the human genome to help diagnose disease. Mendel Rosenblum did what others said couldn't be done and achieved the precise emulation of the x86 instruction set in software, creating the world of virtual machines. Given the world's appetite for search, Google had to learn how to launch a hundred million containers at a time and manage them through Borg and Omega, eventually yielding an offshoot for the rest of us, Kubernetes.

Software is pushing the boundaries of human capability forward quickly, sometimes too quickly for the average human to keep up. Increasingly complex software systems will attempt to manage hundreds of thousands of driverless cars in a single metropolitan area. They will try to govern all the variables involved in transporting humans on a six-month journey to Mars and bringing them back.

Somewhere in the midst of all of this there is sure to be another Department-of-Motor-Vehicles screw-up, a setback where the software almost did what it was intended to do but somehow fell a nanometer short. Its designers hadn't foreseen every eventuality. The cloud had always been highly reliable -- right up until the moment its own design started to work against it. (See the post-mortems on Amazon's 2011 Easter weekend outage or Azure's 2012 Leap Year Day meltdown.)

As the future unfolds, we will be relying on software more than ever, but let's never forget it's only as good as the humans who build it. In key situations, such as a missile launch or air traffic control, knowing how to fall back to some less-complicated position after a system failure may become an art form: our most advanced system has hit a breakpoint; let's revert to the thing we know works until we figure out what went wrong.

Whether such a thing is possible may determine how well we will survive and thrive in our digital future.

[Editor's note: After more than 30 years of outstanding work covering the IT community and the software sector, Charlie Babcock will be retiring on Friday. He's taking time this week to reflect, and to put our technology progress into perspective.]

About the Author

Charles Babcock

Editor at Large, Cloud

Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld and former technology editor of Interactive Week. He is a graduate of Syracuse University, where he obtained a bachelor's degree in journalism. He joined the publication in 2003.
