Feds struggle to measure IT value

OMB’s demands drive need for better methods

The Office of Management and Budget’s demand that agencies justify their spending on information technology, as outlined in OMB Circular A-11, is one of several policy shifts in recent years that have forced agency leaders to measure the value of IT.

The same pressure arises in competitive sourcing, in which federal employees must produce hard numbers to compete against private-sector firms, and in other settings. Although agencies have had some time to adjust, it is still often a struggle.

“We don’t have hard [return on investment] for the most part,” said Doug Brown, assistant director of the Project Management Office in the Office of IT at the Securities and Exchange Commission.

Most agency costs come in the form of payroll, he said. In some agencies, if an IT implementation or another change to business processes increases efficiency, the agency can trim its staff and cut those costs. The SEC, in general, doesn’t work that way.

“ROI is a much softer concept,” Brown said. Companies can look at their balance sheets and see how increased efficiencies lead to higher revenues or profits. In government, employees who are freed from one set of duties by automation are often simply put to work on other things. The agency’s costs may not change or may even rise, but the quality of service to taxpayers has improved. That improvement, however, may come in ways that are difficult to show.
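In financial terms, the hard ROI Brown describes reduces to a simple ratio, and his point is that the numerator disappears when the benefit never arrives as cash. A minimal Python sketch, using hypothetical figures rather than any real SEC numbers:

    # Classic "hard" ROI: net financial gain divided by the investment cost.
    def hard_roi(financial_gain, investment_cost):
        """Return ROI as a fraction; 0.5 means a 50 percent return."""
        return (financial_gain - investment_cost) / investment_cost

    # A company can plug balance-sheet numbers in (hypothetical figures):
    print(hard_roi(financial_gain=1_500_000, investment_cost=1_000_000))  # 0.5

    # An agency whose payroll is unchanged has no cash gain to report,
    # even when service to taxpayers has improved:
    print(hard_roi(financial_gain=0, investment_cost=1_000_000))  # -1.0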

When contractors are involved, the lead integrator is often able to help the agency choose appropriate ways to measure effectiveness, said Sid Fuchs, president of the civilian agencies group in the IT division at Northrop Grumman.

The appropriate measures often depend on who is measuring, he said. “If you’re a project manager, you’re looking at effectiveness by cost, schedule and performance,” he said. “If you’re an end user, if you’re a first responder in the field, you’re going to measure it by, ‘Did I get the data when I needed it?’ ”

Agencies are increasingly aware of the importance of the IT user’s experience, Fuchs added. That user might be a taxpayer trying to get information from the Internal Revenue Service or a police officer needing data from a federal law enforcement agency.

“People want definitive, well-defined, black-or-white metrics,” Fuchs said. “That’s difficult in a world where there are a lot of unknowns, a lot of uncertainty. What you don’t want to do is put your end user at risk.”

Some consulting firms are also working on the problem on behalf of agencies. In 2003, Accenture created the Public Sector Value Model (PSV), a methodology for measuring ROI. This year, the company established an institute in Washington, D.C., and another in London where government leaders and researchers can exchange ideas and conduct research on measuring performance.

Accenture renamed its model Public Service Value, said Greg Parston, the newly hired director of the Accenture Institute for Public Service Value. By facilitating research and consulting through the institute, Accenture will encourage agencies to adopt PSV, he said. Accenture has put the model in the public domain so that anyone can use it.

The firm’s research institute will formally open in June. “The institute is meant to be an active research center,” Parston said. “It is a ‘do tank’ rather than a think tank.”

“The company will set an initial research agenda,” he said. “We don’t want to be an ivory tower. We want to draw from what people are really experiencing.” Accenture wants to encourage thought leaders in government and academia to participate in the institute’s efforts, he added.

Parston said people go into public service wanting to serve, and tapping into that motivation can help build support for measuring performance. “If you can be more open about the ‘why’ question, you can get people to think about the ways” they perform their duties, he said. “What we can do is give people an opportunity to see how the way they work contributes to what they want to do.”

The institute will publish papers written by its research fellows. It will also produce opinion articles for magazines and use other avenues to disseminate ideas, Parston said.

Although people are becoming more familiar with measuring value, many agencies and companies still choose measures that are too vague, said Steve Hawald, Robbins-Gioia’s solution process refinement and optimization practice area manager.

They choose measures such as, “ ‘We will delight our customers. We will have better uptime,’ ” he said. “What does that mean? These things are very subjective, and they’re not real. Most programs miss the boat at the beginning.”
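One way to sharpen a measure such as “better uptime,” in the spirit of Hawald’s complaint, is to tie it to a number and a time window. A hypothetical sketch; the 99.5 percent target and the 30-day month are illustrative assumptions, not drawn from any agency:

    # Hypothetical: turning "we will have better uptime" into a testable metric.
    MINUTES_PER_MONTH = 30 * 24 * 60   # 43,200 minutes in a 30-day month
    UPTIME_TARGET = 0.995              # illustrative service-level target

    def met_uptime_target(outage_minutes):
        """True if monthly uptime meets or beats the 99.5 percent target."""
        uptime = 1 - outage_minutes / MINUTES_PER_MONTH
        return uptime >= UPTIME_TARGET

    print(met_uptime_target(outage_minutes=120))  # 99.72% uptime -> True
    print(met_uptime_target(outage_minutes=300))  # 99.31% uptime -> False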

Support from agency leaders is critical for achieving any meaningful measure of IT value, Hawald said. “You need to lay out the guiding principles on Day One, and try to get to the point where you’re getting the right data to the right decision-maker at the right moment,” he said. “IT projects are very complex.”

Choosing the right measures is especially challenging for public agencies, Hawald said.

“We need to come up with some standardized best practices for all the agencies. Everybody’s kind of doing their own thing. I see some that are very good and some that are awful.”

Brown emphasized the dynamics that can mask real improvements in an agency. An SEC lawyer, for example, may need 10 hours to search through boxes for a document that might not be there.

“If I give him an automated system to search for that, he can find out whether it’s there or not in seconds,” Brown said. “So he’s back to doing hundred-buck-an-hour lawyer work rather than clerical work. But we still have the same number of lawyers, so we can’t show savings on that. That’s where it gets hard doing outcomes-based analysis. There’s no good way to put your finger on the return.”
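Brown’s example comes down to back-of-the-envelope arithmetic. The hourly rate and search time below are taken from his quote; the number of searches per year is an invented assumption. The point is the gap between the two totals:

    # Sketch of Brown's lawyer example. The rate and search time come from
    # his quote; the searches-per-year figure is an illustrative assumption.
    HOURLY_RATE = 100          # the "hundred-buck-an-hour" lawyer
    MANUAL_SEARCH_HOURS = 10   # hours per manual document search
    SEARCHES_PER_YEAR = 50     # invented for illustration

    redirected_value = HOURLY_RATE * MANUAL_SEARCH_HOURS * SEARCHES_PER_YEAR
    budget_savings = 0         # headcount is unchanged, so payroll doesn't drop

    print(f"Lawyer time redirected to legal work: ${redirected_value:,}/year")
    print(f"Savings visible in the budget: ${budget_savings:,}/year")

The $50,000 of redirected lawyer time is real value, but none of it appears as a line item in the budget, which is Brown’s outcomes-based measurement problem in miniature.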

The SEC is just a few months into a second phase of an effort to analyze IT outcomes, Brown said. SEC leaders have asked program managers to create a five-year plan that defines each program’s mission and its goals. The next step, beginning in fiscal 2007, will be to identify appropriate benchmarks.

Popular methods for measuring processes

Agencies are increasingly interested in two kinds of measurement: gauging the return on investment in information technology and assessing the software development process itself.

Earned value management is one way to chart the progress of IT projects. It provides benchmarks that show whether the effort is on schedule and within budget. Capability Maturity Model Integration (CMMI) is a set of policies and practices that assesses the degree to which software development processes are systematic, documented and standardized.
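Earned value management rests on three running totals: planned value, the budgeted cost of work scheduled to date; earned value, the budgeted cost of work actually completed; and actual cost, what has been spent. A minimal sketch of the two standard performance indices, with made-up numbers:

    # Standard earned value management (EVM) indices.
    def evm_indices(planned_value, earned_value, actual_cost):
        """Return (SPI, CPI). Below 1.0: behind schedule / over budget."""
        spi = earned_value / planned_value  # schedule performance index
        cpi = earned_value / actual_cost    # cost performance index
        return spi, cpi

    # Made-up numbers: $500K of work planned by now, $400K worth completed,
    # $450K actually spent doing it.
    spi, cpi = evm_indices(planned_value=500_000, earned_value=400_000,
                           actual_cost=450_000)
    print(f"SPI = {spi:.2f}")  # 0.80 -> behind schedule
    print(f"CPI = {cpi:.2f}")  # 0.89 -> over budget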

CMMI is not just for companies anymore. Agencies are also seeking CMMI ratings on certain aspects of their operations.

CMMI evolved from older CMM standards for software development, systems engineering and other disciplines. It is, broadly speaking, an evaluation of whether the processes an organization uses are systematic, documented and repeatable. Ratings range from 1 to 5, from Level 1, in which work is essentially ad hoc, to Level 5, in which processes are measured and continuously improved.

The Government Accountability Office’s scrutiny has put many agencies in an uncomfortable spotlight, said Mike Phillips, director of special programs at the Software Engineering Institute, which created CMMI and regularly updates the standard. GAO’s attention led SEI to begin work on two new extensions to CMMI to cover acquisitions and services.

“It’s because of GAO looking in on organizations,” Phillips said. “It appears they’re one of the instigators of keeping discipline in government.”

Donn Milton, executive vice president of Pragma Systems, said government agencies account for about one-third of the buyers of his company’s tools, which prepare organizations for CMMI evaluations. Roughly another half of the business comes from government contractors responding to agencies’ increasing demands that they show CMMI ratings when bidding on contracts.

— Michael Hardy