TechStat: Getting into the weeds of IT management
Federal IT managers had better get ready for a more intense discipline of program oversight.
Federal IT managers who have not yet gotten acquainted with TechStat ought to prepare themselves, for better or for worse.
The TechStat concept involves in-depth, metrics-based reviews of major IT programs. It has been around in the public sector, in one form or another, for more than a decade. And although it can be a grueling process for everyone involved, its earliest proponents — primarily leaders in state and local government — say the data-driven approach makes it possible to keep closer tabs on how programs are progressing and identify and address performance problems before they become costly to fix.
Federal officials are starting to think the same way.
Federal CIO Vivek Kundra introduced numerous agency CIOs to the concept when he began conducting TechStat accountability sessions in January 2010.
Earlier this year, he took it a step further by directing federal agencies to hold similar reviews internally. So far, more than 120 agency representatives have received TechStat training, and 23 agencies have conducted their first TechStat meetings, Kundra said in April.
Some agencies didn’t wait for a mandate. The Interior Department, an early adopter, established its own version of the process called iStat, which is seen as a model for other agencies’ efforts.
“We take a 360-degree approach to a particular investment,” Interior CIO Bernard Mazer said about iStat. “We’re no longer just looking at [the investment] from an expenditure perspective.”
And now the TechStat movement is gaining real momentum.
Sen. Tom Carper (D-Del.) introduced a bill in April that would codify the federal IT Dashboard, a public website that tracks IT spending, and require agency CIOs to conduct TechStat-like reviews of investments that are experiencing performance problems.
Whether Carper’s bill succeeds or not, it’s clear that government leaders believe TechStat is an idea whose time has come.
“It’s frankly overdue to be conducting this type and level of oversight on IT acquisitions,” said Paul Brubaker, chief operating officer at Synteractive and former deputy CIO at the Defense Department.
But as with any budding government initiative, there are important factors to consider — such as leadership, data quality and execution — to ensure that agencies’ implementation of TechStat is effective and sustainable.
From CitiStat to TechStat
TechStat is based on Baltimore’s CitiStat, a data-driven management system designed to monitor and improve the city government’s performance in real time.
The CitiStat program began in 2000 and continues today. Since its inception, several other cities and a few states have implemented their own versions of the model.
Under CitiStat, leaders of each city department meet on a biweekly basis to discuss performance data and answer questions from high-level officials in the mayor’s office, according to a 2007 report about CitiStat by the Center for American Progress. “If the information presented reveals underperformance, the department head faces tough questioning and is asked to come up with solutions,” the report states.
At the time of the report’s release, then-Baltimore Mayor Martin O’Malley credited CitiStat for saving the city $350 million.
Kundra had firsthand experience with a similar program, called CapStat, when he was the chief technology officer for the District of Columbia. Less than a year after becoming federal CIO, he launched TechStat.
Put simply, a TechStat meeting is a face-to-face review of an IT program by the Office of Management and Budget and agency leaders, who use dashboard information to determine a program’s performance against cost and schedule targets. A green status on the dashboard denotes an on-track program.
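In spirit, that rating is a threshold test on how far a program has drifted from its cost and schedule baselines. The sketch below is purely illustrative (the thresholds and field names are invented, not OMB's actual dashboard methodology), but it captures the flavor of the calculation:

```python
# Illustrative sketch of a dashboard-style status rating. The 10 and 30
# percent thresholds and all field names are hypothetical, not OMB's
# actual IT Dashboard methodology.

def variance_pct(planned: float, actual: float) -> float:
    """Percentage deviation of the actual value from the planned value."""
    return abs(actual - planned) / planned * 100

def rate_investment(planned_cost, actual_cost, planned_days, actual_days):
    """Map cost and schedule variance to a red/yellow/green rating."""
    worst = max(variance_pct(planned_cost, actual_cost),
                variance_pct(planned_days, actual_days))
    if worst < 10:
        return "green"   # on track
    if worst < 30:
        return "yellow"  # needs attention
    return "red"         # significant concerns

# A project 25 percent over budget but roughly on schedule rates yellow.
print(rate_investment(planned_cost=4.0e6, actual_cost=5.0e6,
                      planned_days=365, actual_days=380))
```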
Powered by data from the IT Dashboard, TechStat meetings are intended to improve a faltering IT project’s performance, but they can result in a project being halted or terminated. That oversight strategy aligns with the Obama administration’s 25-point IT management reform plan, which has a goal of terminating or turning around at least one-third of troubled projects in the federal IT portfolio — worth an estimated $80 billion annually — within the next year.
TechStat “meetings conclude with concrete action items, with owners and deadlines that are formalized in a memo and tracked to completion,” Kundra said April 12 at a Senate hearing on the administration’s reform effort. “This improved line of sight between project teams and senior executives increases the precision of ongoing measurement of IT program health.”
A TechStat review can be triggered by policy interests, dashboard data inconsistencies, a recurring pattern of problems or an OMB analyst’s concerns about an investment.
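Those triggers amount to a simple screen over the portfolio: any one condition is enough to put an investment in the review queue. A minimal sketch of how such a screen might work, with invented field names and an invented problem-report threshold:

```python
# Hypothetical screen for TechStat review triggers. All field names and
# the problem-report threshold are invented for illustration.
from dataclasses import dataclass

@dataclass
class Investment:
    name: str
    policy_interest: bool = False    # flagged for policy reasons
    data_inconsistent: bool = False  # dashboard data doesn't add up
    problem_reports: int = 0         # recurring pattern of problems
    analyst_flagged: bool = False    # an OMB analyst raised concerns

def needs_review(inv: Investment) -> bool:
    """Any single trigger is enough to queue a TechStat session."""
    return (inv.policy_interest or inv.data_inconsistent
            or inv.problem_reports >= 3 or inv.analyst_flagged)

portfolio = [Investment("Payroll Modernization", problem_reports=4),
             Investment("GIS Upgrade")]
print([inv.name for inv in portfolio if needs_review(inv)])
# ['Payroll Modernization']
```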
Reece Rushing, director of government reform at the Center for American Progress and co-author of the 2007 CitiStat report, said he thinks federal IT is an appropriate place to apply the strategy because such programs must meet specific metrics.
“But it’s more than just the data.… It’s about looking at the data regularly,” Rushing added. “The process itself is important [because] the key decision-makers are getting together in the same room to talk about how a project is going.”
Kundra often points out that TechStat reviews of high-priority IT projects have already reduced life-cycle costs by more than $3 billion. According to the Government Accountability Office, OMB held 58 TechStat sessions in 2010, but more than 300 IT investments — totaling almost $20 billion — are currently in need of management attention.
In February, OMB shifted more of the TechStat responsibility to agency CIOs by providing them with a comprehensive toolkit for conducting reviews at their agencies.
Interior goes deep
Interior officials, who are at the forefront of adopting the TechStat strategy, began assessing their IT investments using iStat in September 2010, and many of its design concepts and templates were folded into the federal TechStat process, Mazer said.
Kundra cited Interior’s progress with iStat at a conference earlier this year and said he views Interior as a model for other agencies.
Interior’s iStat encompasses the same goals as TechStat, but it “also provides the flexibility required to support the functions and existing processes specific to the [department’s] mission and lines of business,” Mazer said.
He added that iStat is a little more robust than TechStat, with each evaluation taking about 30 to 40 days. The process is also structured to examine all of Interior’s investments, not just the ones that are having trouble. And that seems to be exactly what OMB wants agencies to do.
“It is expected that agency TechStats will and can be done at more granular levels including programs, projects and nonmajor investments,” OMB states in the TechStat guide it created for agencies.
So far, Interior has completed two iStat investment reviews, and several more are scheduled, Mazer said, adding that iStat will ultimately be used to streamline investments and eliminate redundancies.
Not a slam dunk
Now that other agencies are embarking on the TechStat process, they must figure out how to use it as an effective management and decision-making tool.
Former government officials and policy experts overwhelmingly say they see TechStat as a step forward for the federal government, while noting that it is consistent with the Clinger-Cohen Act of 1996 and an extension of the government’s capital planning and investment control process.
Tim Young, a senior manager at Deloitte Consulting and former deputy administrator of e-government and IT at OMB, said TechStat is promising because it “follows a demonstrated process in the commercial sector that has worked.”
However, agencies must address certain matters, such as establishing a proper management structure, before they can reap TechStat’s benefits.
In other words, agency leaders must take TechStat seriously as a tool that’s going to improve programs and save money. “People at the higher levels of the organization are going to need to pay attention and participate,” Rushing said. “It can’t just be a staff exercise.”
In the beginning stages of TechStat, agency leaders will likely encounter institutional barriers or issues with data sharing, openness and transparency.
“There are inherent issues of breaking down silos and getting individuals [onboard] who traditionally have not been accustomed to sharing information,” Young said. “That’s a change management process.”
In addition, experts say federal program managers should strive to make TechStat decisions based on the best available data. To achieve data quality, program managers must understand how the data they are collecting and managing is going to be used.
“I view data as an asset of the organization, and as data moves through the process, there is a different value that may be placed on it for a different reason or decision,” said William McVay, an executive adviser at Booz Allen Hamilton and former deputy branch chief for information policy and technology at OMB. “Everyone needs to understand the requirements for creating the asset.”
Recent findings from GAO indicate that agencies still have some work to do on data quality. A GAO study published March 15 concluded that agencies had uploaded inconsistent or erroneous data, failed to submit data or used unreliable information when reporting to the IT Dashboard.
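Many of the problems GAO describes are mechanical and could be caught before data ever reaches the dashboard. A minimal sketch of the sort of pre-submission validation an agency might run (the field names and rules are illustrative, not the IT Dashboard's actual schema):

```python
# Hypothetical pre-submission checks for dashboard data quality.
# Field names and rules are illustrative, not the real schema.
from datetime import date

def validate_record(rec: dict) -> list:
    """Return a list of data-quality problems found in one record."""
    problems = []
    for key in ("planned_cost", "actual_cost", "start", "finish"):
        if rec.get(key) is None:
            problems.append("missing field: " + key)   # failure to submit
    if (rec.get("actual_cost") or 0) < 0:
        problems.append("negative actual cost")        # erroneous data
    start, finish = rec.get("start"), rec.get("finish")
    if start and finish and finish < start:
        problems.append("finish date precedes start")  # inconsistent data
    return problems

record = {"planned_cost": 2.5e6, "actual_cost": 2.8e6,
          "start": date(2010, 10, 1), "finish": date(2010, 6, 30)}
print(validate_record(record))  # ['finish date precedes start']
```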
Limitations and pitfalls
No government initiative is perfect, so agencies must not only understand how to make TechStat work but also recognize its shortcomings.
Problems with TechStat generally stem not from the approach itself but from how it’s implemented, experts say.
One factor to consider is the timing of TechStat sessions. In other words, are they part of an ongoing management strategy or just a troubleshooting tool?
Robert Behn, a professor at Harvard University’s Kennedy School of Government and author of “The PerformanceStat Potential,” slated for release next year, said he was struck by the fact that TechStat is a noncyclical process. Sessions are held only when there is a problem, instead of quarterly or monthly.
Although OMB expects agencies to do more granular reviews, it’s unclear how often agencies will actually conduct them. OMB didn’t respond to requests for comment.
In any case, a cyclical approach could create problems of its own. Behn said preparing for a TechStat review seems time-intensive, an observation borne out by the detailed step-by-step process in the agency TechStat guide. That could prove burdensome for people who are already juggling multiple priorities.
TechStat might also foster a certain level of distrust among project managers.
“If it’s seen as punitive by people carrying out IT projects, as opposed to a collaborative way to fix things, that could create perverse incentives, such as people who doctor data or people who aren’t entirely forthcoming,” Rushing said. “I don’t think it should be implemented as a ‘gotcha’ sort of thing.”
Agencies also need to think about what they are measuring from a broader perspective. A program could be a great success in terms of cost, schedule and functionality and yet still fail to meet a legitimate business goal.
That’s one of the dangers with TechStat, said Mark Forman, co-founder of Government Transaction Services and former administrator of e-government and IT at OMB. He believes TechStat reviews concentrate too much on IT and not enough on business benefits.
“Merely trying to manage the IT diverts you from the real issue of modernizing the business process,” Forman said, adding that the IT projects with the biggest problems are business transformation projects.
He also said relying too much on metrics might allow some programs to pass through oversight channels unscathed as long as they hit the right marks.
Furthermore, McVay said he doesn’t think the data on the IT Dashboard provides a clear enough picture for agencies to make the type of decisions required in a TechStat session.
An idea with legs?
The administration’s IT management reform plan, released in December 2010, has gotten a lot of attention for its ambitious six-month benchmarks.
Kundra assigned a 12- to 18-month deadline for agencies to adopt TechStat. Judging by examples such as Interior, he is making significant headway toward meeting that goal. But Kundra, who is TechStat’s leading proponent, won’t occupy the federal CIO position forever.
So how do OMB officials and agencies maintain the momentum or at least the strategy beyond Kundra’s tenure and the 18-month timeframe of the reform plan?
Young said TechStat’s sustainability depends on agencies’ executive leaders strengthening trust, using carrots instead of sticks and making failure acceptable on some level.
TechStat can’t just be a process led by political appointees, he added. It must also include career civil servants because their participation will be the determining factor in whether TechStat is an enduring initiative or something that ends with the Obama administration.
It’s also important for managers to encourage employees to be honest when data is missing or unreliable.
“Managers must try to make people as comfortable as possible with these reviews,” Brubaker said. “Once employees get used to these reviews as part of the normal management cadence and expectation, there will be less dread.”
Carper’s recently introduced legislation would formalize a TechStat-like review process for poorly performing IT projects — a move that some experts support. McVay, for instance, said he thinks there would be tremendous value in making TechStat sessions a legal requirement.
In the meantime, agencies must figure out how best to apply the TechStat strategy to their IT projects, work out the kinks and plan accordingly for the future.
Young predicts that agencies will begin to see marginal improvements in their IT portfolios within a year of implementing TechStat.
“The expectation to realize results with a TechStat approach can be attained, but it’s going to take time,” he said. “It holds great promise for the future.”