Performance comes into focus
Armed with better metrics, agencies have an opportunity to improve under-performing programs.
It would be a stretch to say that officials at the Office of Management and Budget are happy to hear that an agency’s program is not performing as intended. Nonetheless, they are pleased when an agency actually has an accurate gauge of a program’s performance.
According to OMB, that is happening far more now than it did when agencies began using the Program Assessment Rating Tool to evaluate their programs. OMB devised PART to create a consistent approach for evaluations, but the tool’s effectiveness depends on agencies starting with good metrics.
The good news is that agency managers now are choosing more realistic measures to provide a more accurate snapshot of the strengths and weaknesses of their programs, said Robert Shea, OMB’s associate director for management.
That doesn’t mean the results will be better, but they will be realistic. That is the first step to improving programs, Shea said.
OMB is increasingly linking program performance with annual budget reviews. Agencies should be accountable and transparent to taxpayers about the effectiveness of their programs, and the Government Performance and Results Act and the President’s Management Agenda mandate that, Shea said.
OMB plans to release the latest evaluations of agencies’ programs on its ExpectMore.gov site by early September.
OMB began an effort in December to help improve program performance by focusing on whether agencies had clear program goals, plans to achieve the goals, measures by which to hold managers accountable for results and support from senior leadership.
The effort came soon after an executive order, signed by President Bush in November, that is intended to compel agencies to improve program results. Among its provisions, the order required each agency to appoint a performance improvement officer to keep it focused on results. The officer participates in a governmentwide Performance Improvement Council to share best practices.
“Programs now have a much better measurement of program success,” Shea said at a recent conference. When OMB launched the performance improvement initiative, more than half of agencies did not have an adequate measure of success, he said. Today that’s about 15 percent.
As the next step, the Performance Improvement Council should tackle how senior leaders in agencies put that data to use when making management decisions, Shea said.
“We get evidence that this is happening, but some surveys show that most program managers don’t consider using performance information to manage,” he said.
“Leadership has to develop the plan and really hold people accountable for it and use whatever incentives they have at their disposal to ensure consistent attention to the same goals,” Shea said.
A different mind-set
Mike Burr, budget team adviser in the Treasury Department’s Office of Technical Assistance, said PART has proven to be a valuable asset, although its success didn’t come right away. The office assists countries that are reforming their banking, financial management and tax administration.
It’s taken about four years to change the culture to think about and measure the right program outcomes, Burr said. His team developed a program performance model and supported it with a project management tracking system that is revised monthly.
“The measures and evaluation processes have to have credibility with management and the people on the ground or it’s not going to work,” he said. All measures should be relevant to the work and be relevant to each other. “It shouldn’t be a mystery. It should be clear,” he said.
Collect data, then what?
Of course, collecting and reporting performance data is only the first part of the process. Using that data to improve programs is more difficult, Shea said.
But the work has paid off at the Housing and Urban Development Department and its HOME program. HOME is the largest federal block grant program designed to help state and local governments create affordable housing for low-income households.
HUD uses its data to generate quarterly updates that break down the various levels of performance for HOME grant recipients. In turn, state and local HOME grantees use the quarterly snapshot reports to measure their progress and benchmark their performance and efficiency with other grantees in their states.
HUD assigns rankings and publishes them online. As a result, funds are better managed and the HOME program is able to provide more affordable housing, Shea said.
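The kind of within-state benchmarking described above can be sketched in a few lines of code. This is a hypothetical illustration only: the grantee names, data fields and the "units delivered per million dollars disbursed" efficiency measure are assumptions for the sketch, not HUD's actual methodology or data.

```python
# Hypothetical sketch of benchmarking HOME grantees against peers in
# their own states, as the quarterly snapshot reports do. All names,
# figures and the efficiency measure are illustrative assumptions.
from collections import defaultdict

grantees = [
    {"state": "OH", "grantee": "City A", "units": 120, "disbursed": 4_000_000},
    {"state": "OH", "grantee": "City B", "units": 90,  "disbursed": 2_000_000},
    {"state": "TX", "grantee": "City C", "units": 200, "disbursed": 8_000_000},
    {"state": "TX", "grantee": "City D", "units": 150, "disbursed": 5_000_000},
]

def rank_within_state(records):
    """Rank grantees in each state by affordable-housing units
    delivered per million dollars disbursed (higher is better)."""
    by_state = defaultdict(list)
    for rec in records:
        # Compute an efficiency figure without mutating the input.
        rec = dict(rec, efficiency=rec["units"] / (rec["disbursed"] / 1_000_000))
        by_state[rec["state"]].append(rec)
    rankings = {}
    for state, rows in by_state.items():
        rows.sort(key=lambda r: r["efficiency"], reverse=True)
        rankings[state] = [(i + 1, r["grantee"], round(r["efficiency"], 1))
                           for i, r in enumerate(rows)]
    return rankings

rankings = rank_within_state(grantees)
# In this made-up data, City B leads Ohio (45 units per $1M)
# and City D leads Texas (30 units per $1M).
```

Publishing a ranking like this gives each grantee a concrete peer comparison, which is the mechanism the article credits for better-managed funds.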
HUD also provides a dashboard for regional elected officials to use for a quick, graphic overview of the performance of their jurisdictions’ grantees in delivering affordable housing assistance under the HOME program, said Cliff Taffet, director of HUD’s Office of Affordable Housing.
The objectives agencies use to measure results may not always be perfect, so managers need to monitor and update them, he said. For example, federal programs dealing with mortgages and housing missed the signs leading up to the housing bubble bursting, he said.
“While we were getting people into home ownership, somehow the system failed despite the fact that programs received PART scores throughout the government that would indicate that everything was OK,” Taffet said. However, the individuals whom HUD assisted through grants to state programs are faring better because they received housing counseling and were aware of potential mortgage payment problems, he said.
The Nuclear Regulatory Commission established a comprehensive performance management program and communicated the use of performance information to its internal and external stakeholders, said Jim Dyer, the NRC’s chief financial officer and performance improvement officer. At a recent hearing, he cited a strong commitment from the NRC’s commissioners and senior managers to continuous improvement and a focus on executing it.
The agency uses its overall strategic plan and previous performance assessments to develop an annual budget. As part of the budget process, the NRC identifies targets for quantity, cost, quality and timeliness of program activities in each organization’s operating plans. The measures establish clear expectations for staff performance and are monitored in key areas throughout the year and reported quarterly to various levels of management, Dyer said.
An agency working group recently completed a benchmarking project that collected best practices for reporting performance from across NRC offices.
“The recommendations from this working group will significantly enhance the quality of performance information monitoring and reporting,” he said.
Concerned about priorities
However, some agency program managers say they are spending more time feeding measurement systems than focusing on strategic decisions to improve their results, according to a recent survey.
Selena Rezvani, an assessment consultant at Management Concepts, conducted the survey for the Council for Excellence in Government. OMB invited program managers to participate, and 123 responded, Rezvani said.
Managers responded that discussions with other managers and independent evaluations provided more assistance to improve their programs than did formal agency assessments and reports from agency inspectors general or from the Government Accountability Office. For example, managers reported that OMB’s PART evaluation was not uniform across similar programs and the final rating process was not transparent to them.
Their most difficult activities were developing measures and assessing program results, responding to OMB special requests and preparing and negotiating budgets, according to the survey. Moreover, the managers said they had little support from executives in developing requirements, risk management, and budget and financial management.
“Managers said the PART is too rigid and too political and needs to be more flexible,” Rezvani said. They want to understand how the process works and for OMB’s evaluations to be more transparent.
Based on the survey, the council is creating a program management community of practice or network by the end of summer or early fall, said Norm Lorentz, the council’s vice president. It will provide coaching by former federal executives and an opportunity to share what works.