IT Measures That Matter

With the right focus, an IT measurement program can return critical information specific to your organization.



CIOs and senior IT leadership teams often find themselves drowning in a sea of data, yet they seldom have the information they need to effectively optimize their organizations. At a recent Six Sigma in IT conference, one attendee asked, "What should we be measuring for our IT organization?" The question typifies a common condition: IT organizations saturated with data but starved for information.

The sheer volume of data available, coupled with the critical nature of effective decision-making by senior leadership teams, demands that large IT organizations treat the IT measurement program as a core competency. This requires allocating sufficient horsepower (enough resources with adequate skills) to position the measurement program for success. A part-time commitment seldom yields a trustworthy repository of information that CIOs and their direct reports can leverage to guide the ship.


The quickest way to identify the IT organization's critical information needs is to ask the CIO: "What keeps you up at night?" The answer isn't likely to name specific items for the IT measurement program, but it will surface some of the critical information needs: "Is our software quality getting better over time?" "Should we focus on reducing our system 'abnormal ends' any further?" "What is our backlog of high-severity security exposures?" "Can we prove any of this to our customers?" "What do our customers really think about us?"

The biggest critical success factor is to make sure there's support from the top. Grassroots measurement movements are a noble endeavor but are often destined for trouble when they attempt to move beyond a single functional area and represent the enterprise. Support from the top is needed to commission a specific person or team to define the enterprise standards and views.

The senior leadership team of Sallie Mae IT, for example, has established its support for the IT measurement program by making the collective success or failure of the measures included in the IT scorecard an integrated part of total compensation.



KEYS TO SUCCESS
An IT measurement program must be built on a foundation of tangible goals that add value. Just as any software development project is positioned for success by aggressively pursuing a clear definition of requirements, the IT measurement program should drive toward consensus in the definition of what's important--and therefore important to measure. This will align the efforts of the measurement program with the stated goals for the IT organization, and the overall enterprise.

Software measurement researcher Victor Basili's Goal, Question, Metric framework defines a tried-and-true measurement model on three levels:

  • Conceptual level: A goal is defined for an object for a variety of reasons, with respect to various models of quality, from various points of view, and relative to a particular environment.


  • Operational level: A set of questions is used to define models of the object of study and then focuses on that object to characterize the assessment or achievement of a specific goal.


  • Quantitative level: A set of metrics based on the models is associated with every question in order to answer it in a measurable way.

Basili's framework exploits the linkage between what gets measured and what's important (the goals). This linkage is the backbone of a successful IT measurement program.
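
To make that linkage concrete, here is a minimal sketch in Python of how a goal, its questions, and their metrics could be recorded as one traceable structure. The class names and the example goal are hypothetical, not drawn from the article.

```python
from dataclasses import dataclass, field

# A minimal sketch of the Goal-Question-Metric hierarchy as a data structure.
# The class and field names, and the example content, are illustrative only.

@dataclass
class Metric:
    name: str
    description: str

@dataclass
class Question:
    text: str
    metrics: list[Metric] = field(default_factory=list)      # quantitative level

@dataclass
class Goal:
    statement: str                                            # conceptual level
    questions: list[Question] = field(default_factory=list)   # operational level

# A hypothetical goal, for illustration only:
availability_goal = Goal(
    statement="Reduce unplanned downtime of business-critical systems",
    questions=[
        Question(
            text="How many business-critical outages occurred this quarter?",
            metrics=[Metric("outages_per_quarter",
                            "Count of severity-1 incidents per calendar quarter")],
        )
    ],
)
print(availability_goal.questions[0].metrics[0].name)  # outages_per_quarter
```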

For example, since Sallie Mae does a significant amount of custom software development, the senior IT leadership team established the goal of improving the quality of its software products. Software quality also was rated the single most important service provided by IT. The team considered a number of questions that could characterize the achievement of this goal:

  • How many defects are generated per thousand lines of code (known as KLOC)?


  • What level of testing effort was required to achieve an acceptable level of quality?


  • What percentage of all software defects that get identified were found by our customers in the user-acceptance testing (UAT) phase of our software development life cycle?

Measure Of Success

  • Sufficient horsepower: A part-time commitment won't yield sufficient data.

  • Support from the top: Needed to resolve turf wars over measurement criteria.

  • Value-add goals: Drive toward consensus in the definition of what's important.

  • Customer surveys: User feedback is invaluable in setting priorities.

  • Don't imitate others: Make sure information objectives are specific to your organization.

Because defect creation is a function of so many things other than the number of lines of code, KLOC wasn't selected. Business demands for project delivery as well as testing windows that often have fixed durations both compromise how well testing effort can represent software quality. Therefore, testing effort wasn't selected, either. The percentage of all software defects identified in the UAT phase was picked, for four reasons:

  • Defect data was readily available and stored in a trusted central repository.


  • It's the earliest opportunity to capture initial feedback on software quality.


  • It's an objective measure, as these defects are identified primarily by people outside of IT--i.e., by the customers.


  • It hits directly on the desired behavior of finding defects as early in the system development life cycle as possible.

The measure for this goal was intuitive: the number of defects identified during the UAT phase divided by the total number of defects identified (excluding unit-testing defects). The performance trend has been tracked for the past two years and used to drive the percentage lower year over year, supporting the goal of finding defects earlier in the development life cycle.
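
As a concrete illustration, the measure itself is a simple ratio. The following Python sketch computes it, assuming the UAT and total defect counts (excluding unit-testing defects) have already been pulled from the central defect repository; the function name and sample numbers are hypothetical.

```python
def uat_defect_percentage(uat_defects: int, total_defects: int) -> float:
    """Percentage of all identified defects (excluding unit-testing defects)
    that were found during the user-acceptance testing phase."""
    if total_defects == 0:
        return 0.0          # no defects recorded yet; nothing to report
    return 100.0 * uat_defects / total_defects

# Hypothetical quarter: 42 of 280 recorded defects surfaced in UAT -> 15.0%
print(uat_defect_percentage(42, 280))
```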



SUBJECT TO SCRUTINY
Any centralized support organization or initiative that isn't directly tied to the creation of products or customer services, such as an IT measurement program, will face scrutiny over its cost relative to the value it delivers. The program must be constantly challenged (internally or externally) to ensure that it's providing information relevant to the most critical issues facing the IT organization.

When it comes to keeping the information produced by the IT measurement program relevant, less is definitely more. By presenting operational measurement information in a reasonable facsimile of a control chart, the program lets the senior IT leadership team manage by exception: only those measures that have crossed predefined thresholds require attention. Without proper pruning of older measures, relevance is compromised and the IT measurement program becomes white noise that seldom provokes action on the part of its customers.
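
To sketch what manage-by-exception could look like, the snippet below flags any measure whose latest value falls outside simple 3-sigma control limits computed from its own history. The measure names, thresholds, and numbers are illustrative assumptions, not Sallie Mae's actual data.

```python
import statistics

def exceptions(history: dict[str, list[float]],
               latest: dict[str, float]) -> list[str]:
    """Return the measures whose latest value crossed their control limits."""
    flagged = []
    for name, values in history.items():
        mean = statistics.mean(values)
        sigma = statistics.stdev(values)
        lower, upper = mean - 3 * sigma, mean + 3 * sigma
        if not lower <= latest[name] <= upper:
            flagged.append(name)
    return flagged

# Illustrative history of two measures (monthly values) and their latest readings.
history = {"uat_defect_pct": [18.0, 16.5, 17.2, 15.8, 16.1],
           "first_call_resolution_pct": [71.0, 73.5, 72.2, 74.0, 73.1]}
latest = {"uat_defect_pct": 24.9, "first_call_resolution_pct": 72.8}

print(exceptions(history, latest))  # only the measure that crossed its limits
```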

Each goal should be reviewed annually, at a minimum, and up to 25% of the goals should be retired each year and replaced by new ones that are developed in response to current business needs or are intended to drive a specific behavior.

VOICE OF THE CUSTOMER
While most large IT organizations know who their customers are, they usually struggle to quantify their customers' perceptions of IT's performance. At best, pockets of collaborative business relationships between IT functional heads and their customers provide qualitative/anecdotal appraisals.

A customer-satisfaction survey is an important piece of an effective IT measurement program. It can quiet much of the noise from customers by leveraging survey response data to understand IT's strengths and weaknesses. It also lets CIOs set clear directions and targets for improving the IT organization based on this objective data.

IT customer-satisfaction data should be collected on a minimum of three separate levels, with three distinct target audiences: corporate officers, consumers of key IT services, and project-specific customers. Here are sample statements for each audience, to be rated on a Likert agreement scale.

Corporate officers: This audience should be targeted with higher-level questions that are more statements of direction and general intent, such as: (A) I am confident that our portfolio of systems is well-positioned for our company to thrive in the next three to five years; (B) IT effectively partners with my area to generate solution alternatives instead of simply taking orders; and (C) IT is very responsive to business-critical disruptions in service.

Key service customers: Help desk functions are the best example of a key service provided by IT. Given the volume of customers usually affected by key services, these survey questions must be more specific than those for the officer level: (A) When I contact the help desk, the agent is usually able to resolve my issue on the call; (B) my average wait time is less than one minute; and (C) I always receive confirmation that my issue was resolved.

Project-specific stakeholders: This audience is specifically concerned with project management and execution, so questions like these are appropriate: (A) The scope of the project was clearly defined; (B) all in-scope requirements were fulfilled by the product delivered; and (C) the quality of the end solution met expectations.
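
Once survey responses come back, summarizing them per audience is straightforward. The sketch below averages responses coded 1 (strongly disagree) through 5 (strongly agree) for each statement; the audience keys mirror the three levels described above, but the response data is invented for illustration.

```python
from statistics import mean

# Hypothetical Likert responses (1 = strongly disagree ... 5 = strongly agree)
# keyed by audience and by statement (A, B, C as in the examples above).
responses = {
    "corporate_officers":    {"A": [4, 5, 3], "B": [4, 4, 5], "C": [5, 5, 4]},
    "key_service_customers": {"A": [3, 4, 2, 4], "B": [2, 3, 3, 2], "C": [4, 4, 3, 5]},
    "project_stakeholders":  {"A": [5, 4, 4], "B": [3, 4, 4], "C": [4, 5, 4]},
}

for audience, items in responses.items():
    scores = {item: round(mean(values), 2) for item, values in items.items()}
    print(audience, scores)
```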



OUT OF BALANCE
IT measurement programs are often out of balance--their focus is heavy in certain areas and negligent in others. Often the information made available is simply what's easiest to produce. In the role of advisers to the senior IT leadership team, IT measurement program members must ensure that there's sufficient coverage of all critical areas.

Fortunately, most of the critical thinking here can be tapped via the balanced scorecard, which classifies information needs into four primary categories or perspectives: internal business processes; customers; learning and growth; and financial. Leveraging the balanced scorecard and its supporting philosophy will ensure sufficient breadth for the IT measurement program (for more, visit www.balancedscorecard.org).
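
One simple way to check for balance is to map each measure to a scorecard perspective and look for gaps. The sketch below does just that; the measure names and their assignments are illustrative assumptions.

```python
PERSPECTIVES = {"internal business processes", "customers",
                "learning and growth", "financial"}

# Hypothetical mapping of current measures to balanced-scorecard perspectives.
measures = {
    "uat_defect_pct": "internal business processes",
    "customer_satisfaction_index": "customers",
    "budget_variance_pct": "financial",
}

uncovered = PERSPECTIVES - set(measures.values())
print("Perspectives without coverage:", sorted(uncovered) or "none")
```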

Finally, avoid the pitfall of simply doing what others are doing. While some of the basic information needs of an effective IT measurement program will be the same for most companies (budget performance, infrastructure costs, etc.), these aren't differentiators for optimizing your performance. Information that differentiates must be linked to value-focused goals that are particular to your organization.

Rod Cleary is senior manager of process quality assurance at Sallie Mae, the nation's leading provider of student loans. Write to him at [email protected]


Continue to the sidebar:
Tools, Fools, And The Ability To Measure Effectively
