Take the mystery out of gauging IT project success.

Josh Greenbaum, Contributor

January 14, 2005


One of the ironies of complex enterprise software implementations is that the operation can fail and the patient can still survive, and even thrive. As in the world of medicine, there may still be questions about cost and quality of life. But many implementations that were poorly conceived and executed, and virtually impossible to cost-justify, are still called successes.

I also suspect that plenty of companies owe their very survival to the fact that a key IT project was cancelled before it got out of hand. I ran into such a case recently at a consumer goods company that shall remain nameless to protect the guilty. The CIO had chosen to swap out one ERP system for another when internal politics forced a long delay. Ultimately, the CIO had to upgrade the existing system instead — something he had been loath to do, primarily because of the cost. Or the perceived cost, as it turned out.

When the dust settled, the upgrade of the original system was running so well that the CIO scrapped the migration, stuck with the original vendor and saved his company even more money by not having to retrain his IT staff and end users on a new system. A success, one might argue, albeit not according to plan.

Which leads us to a key question about defining success and failure: What did the executives, users, IT managers and implementers have in mind when the project was conceived? At a minimum, you need a baseline of projected outcomes to judge success or failure. But good baseline data never exists. Which means there's no such thing as a good return-on-investment study. There's no good "before" data to compare to the "after" data that emerges once an implementation is declared a success.
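The baseline problem can be made concrete with a toy calculation (all figures and the `roi` helper below are hypothetical, not drawn from any project in this column). ROI is simple arithmetic once you have the numbers; the catch is that the "before" figure has to be measured, not reconstructed after the fact:

```python
# Toy ROI calculation for an IT project (all figures hypothetical).
# The point: the arithmetic is trivial, but without a measured
# "before" baseline the first argument is a guess, and so is the result.

def roi(baseline_annual_cost, post_annual_cost, implementation_cost, years=3):
    """Cumulative savings over the period, net of what it cost to get them,
    expressed as a fraction of the implementation cost."""
    savings = (baseline_annual_cost - post_annual_cost) * years
    return (savings - implementation_cost) / implementation_cost

# With a real baseline the answer is defensible: a system that cost
# $1.5M to implement and cuts annual run costs from $5.0M to $4.2M
# returns 60% over three years.
print(round(roi(5_000_000, 4_200_000, 1_500_000), 2))  # 0.6
```

Swap the baseline figure for a post-hoc estimate and the same formula will happily print a number either way — which is exactly why a declared "success" with no before data proves nothing.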

It's almost fitting that the best way to take the mystery out of what's happening in IT implementations is to implement enterprise software. One vendor, Niku, has built a whole suite of what it calls corporate governance software. The job of this software is to pinpoint when things are going well and when they're not. By capturing project, financial, human resources, portfolio and service request data in one place, Niku claims that, at a minimum, you'll be able to answer some basic questions, such as what's in my company's software portfolio, is this the best use of our IT resources and are we really optimizing our investments. You may even be able to identify whether a particular project was a success using objective criteria, instead of relying only on guesswork.

A system like Niku's Clarity could have helped define success and failure for another project I recently reviewed. In this case everyone claimed success, from the CIO to the CEO, but the project screamed failure: no consensus on how business process change should come about, no comprehensive project plan, no training, no communications with business partners prior to go-live. The system being replaced had barely made it through Y2K and had trouble supporting basic EDI functionality, much less anything more modern or efficient — that's the only excuse anyone had to deem this project a success. Truth is, management couldn't afford to have a failure, so failure simply wasn't allowed — regardless of the facts. No one could say that things were running any faster or at a lower cost. There was no way to attribute revenue performance (which was up, marginally) to the new ERP system. There was no bottom line: Management simply didn't know if the system had been cost-effective to implement and was now cost-effective to run. No matter; success had already been declared.

The only problem with products such as Clarity is that they may be too good at what they do. Everyone would have to face reality, for better or worse. Imagine a world in which IT assets are well understood and managed, where projects are started and stopped for all the right reasons. Imagine being able to actually understand your ROI for a given implementation, and plan accurately for the next. Imagine not having to argue with managers, executives, shareholders and customers about whether an implementation was a success or not.

Wouldn't that be nice?

JOSHUA GREENBAUM is a principal at Enterprise Applications Consulting. You can e-mail him at [email protected].

About the Author

Josh Greenbaum is principal of Enterprise Applications Consulting, a Berkeley, Calif., firm that consults with end-user companies and enterprise software vendors large and small. Clients have included Microsoft, Oracle, SAP, and other firms that are sometimes analyzed in his columns. Write him at [email protected].

