Score card sets performance baseline
The Bush administration's fiscal 2003 budget sets a baseline for managing federal agencies' performance, and it is the marker against which all future improvements and failures will be measured.
Last year, the Bush administration narrowed its governmentwide performance focus to five areas needing special attention at all agencies: strategic workforce management, expanded use of e-government, increased competitive bidding of government services, improved financial performance and linking performance to budgets. The administration outlined explicit criteria for success based on a score card with a three-level rating system from red (the worst) to green (the best).
As expected, the results released with the budget are dismal. Almost 85 percent of the scores for the 26 agencies detailed in the document were red, including all reds in the competitive sourcing area. The National Science Foundation received the only green score, in the area of financial management.
Agencies should not be discouraged, however, because the administration sees this set of scores as a baseline, said Mitchell Daniels Jr., director of the Office of Management and Budget.
"The marks that really matter will be those that record improvement, or lack of it, from this starting point," the budget document states.
For agencies setting strategic goals at the highest levels, this is only the beginning, said William Early, chief financial officer at the General Services Administration. GSA's management team is working with its bureau and office administrators to ensure that specific performance goals are developed throughout the agency based on the government's strategic goals, he said.
Even OMB needs to work on its performance, Daniels said. "In order to do our duty in a new era of accountable government, we have to be better than we are today at evaluating and measuring program performance, and better than we are today at working with departments on their day-to-day management," he said.
Agencies received the most yellow scores — nine of them — in the area of e-government. The inability to develop a sound business case for information technology investments is one of the major obstacles to seeing green scores in the future, said Mark Forman, OMB's associate director for IT and e-government.
He added that almost $10 billion of the federal government's IT investments still lack the business cases required by OMB's Circular A-11, which directs agencies to develop a business case for every IT investment, complete with performance goals and measures, process controls, security features and a plan for integrating new systems with the agency's enterprise architecture. Some of the biggest IT projects in government had unsatisfactory business cases, Forman said.
For certain programs, the budget already reflects OMB's focus on linking funding to performance. Several programs — such as the Commerce Department's Advanced Technology Program, which funds risky research projects — have already been allocated less money because of poor past performance.
These early efforts to link budget requests with program performance will lay the foundation for a new, more easily understood form of accountability for agencies, said Maurice McTigue, distinguished visiting scholar at the Mercatus Center. McTigue came to the center from New Zealand, which started using results-based decision-making in the early 1990s.
For the first time, program managers will be openly competing against one another for resources, using performance goals and data to back up their arguments, and "the competition between public-sector activities is going to be more intense than that between public and private sector," he said.
This means, however, that "the quality of [agencies'] information has to improve dramatically and immediately, because you are going to be the victim or the beneficiary of the information you're producing... and in many instances, agencies are not going to have the systems to allow them to do that," McTigue said.
All of this has raised concerns in Congress. The Democratic members of the House Science Committee analyzed fiscal 2003 research and development programs, questioning the details of the administration's approach.
"Despite assertions that management scores mattered, it appears to us that the management scores had little or no effect on what happened to a particular agency's budget," according to the analysis. "We wonder whether the new and still developing performance metrics will have any greater impact."
But members of Congress should keep in mind that agencies' performance information will also affect them, because citizens will be able to see whether legislators heed agencies' reports on the potential benefit or harm to citizen services when they make their funding decisions, McTigue said.
***
Score card criteria
For each item on the President's Management Agenda, the administration's performance score card set basic standards for success. If agencies meet just one of the score card's negative criteria for a particular item, they receive a red score. Agencies receive a green score for meeting all of the positive criteria. Agencies receive a yellow score for meeting none of the negative but not all of the positive criteria.
In the e-government category, an agency must meet all of the following criteria to achieve a green rating:
* All major system investments must have business cases that meet the requirements of the Office of Management and Budget's Circular A-11.
* On average, all major information technology projects must be within 90 percent of the cost, schedule and performance targets set in their business cases.
* The agency must show progress or participation in at least three of the e-government initiative areas that focus on improving service to citizens, businesses, other levels of government and within the federal government.
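The three-level rating rule described above reduces to a simple decision: red if an agency meets any negative criterion, green if it meets all positive criteria, yellow otherwise. As an illustrative sketch only (the boolean inputs are assumptions about how an agency's criteria might be summarized, not OMB's actual evaluation process):

```python
def score_card_rating(meets_any_negative: bool, meets_all_positive: bool) -> str:
    """Return a red/yellow/green rating under the score card's stated rules:
    any negative criterion met -> red; all positive criteria met -> green;
    otherwise (no negatives, but not all positives) -> yellow."""
    if meets_any_negative:
        return "red"
    if meets_all_positive:
        return "green"
    return "yellow"

# Example: an agency with no negative marks but incomplete positive criteria
print(score_card_rating(meets_any_negative=False, meets_all_positive=False))  # yellow
```

Note the order of the checks matters: a negative criterion forces red even if every positive criterion is also met.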