Agencies Get Guidance on the Ethics and Rigor Required for Evidence-Based Decision-Making
The second of four policy documents related to the Evidence Act focuses on standards and practices for evaluation officers.
As federal agencies—by law and administration mandate—move to incorporate data and evidence-based decision-making into their missions, the Office of Management and Budget released a second wave of guidance Wednesday focused on maintaining trust, rigor and ethics in the process.
The Foundations for Evidence-Based Policymaking Act—signed into law in January 2019—has been lauded as a key step in building a government focused on using data to make decisions on everything from staffing and overhead to critical mission delivery. But before agencies can fully implement the goals of the law, OMB must issue direct guidance on how its mandates should be followed.
Rather than issue one bulk policy document or a myriad of documents for agencies to decipher, the administration opted to push guidance in four phases aligned with the four titles of the legislation, according to Diana Epstein, evidence team lead at OMB.
The first piece of guidance was issued in July 2019, calling on agencies to develop learning agendas to set priorities, establish strategic plans to promote evidence-based decision-making and designate key leaders to carry out the agendas and plans—specifically, chief data officers, evaluation officers and statistical officials.
That first guidance document is “a foundational piece that deals with learning agendas and those key personnel and some of the plans that are required,” Epstein said Wednesday at a Government Analytics Breakfast Forum hosted by Johns Hopkins and REI Systems. “There will be guidance forthcoming on open data related to Title II; additional guidance and regs on CIPSEA [the Confidential Information Protection and Statistical Efficiency Act]—the Title III part; and then additional guidance on program evaluations and practices,” which dropped Wednesday afternoon.
This latest guidance gives agencies a set of standards to help evaluation officers determine whether the data being gathered, systems used to analyze that information, and decisions made using that output are in line with the best practices in evidence-based decision-making.
However, all of that will go to waste if colleagues within an agency don’t trust the evaluators or their work. To that end, the standards also establish a set of principles to focus evaluators’ efforts and build trust and buy-in from the rest of the agency.
“Evaluators need to practice and embody these standards in their work in order for federal evaluations to have the credibility needed for full acceptance and use,” the memo states.
The guidance establishes five standards for agency evaluation leads to base their work on:
Relevance and Utility: The Evidence Act was meant to serve as a practical law to push government to use data for everyday decision-making. As such, evaluation officers’ work should be focused on meaningful and timely outputs. “Information should be presented in ways that are understandable and that can inform agency activities and actions such as budgeting, program improvement, accountability, management, regulatory action and policy development,” the guidance states.
Rigor: Without trust, none of this work will matter, the document points out. To that end, evaluation officers must be able to demonstrate rigor in their work, including using solid methodologies and statistical models with “appropriate design and methods to answer key questions, while balancing its goals, scale, timeline, feasibility and available resources.” This also extends to the officers themselves, who must be “qualified evaluators with relevant education, skills and experience for the methods undertaken.”
Independence and Objectivity: As with the necessity for rigor in evaluation work, no one will trust an evaluator’s findings if they question the official’s independence. “Federal evaluations must be viewed as objective in order for stakeholders, experts, and the public to accept their findings,” the memo states. “While stakeholders have an important role in identifying evaluation priorities, the implementation of evaluation activities, including how evaluators are selected and operate, should be appropriately insulated from political and other undue influences that may affect their objectivity, impartiality and professional judgment.”
Transparency: While the previous standards establish a baseline for trust in evaluators’ work, transparency is key to ensuring stakeholders are aware of these efforts. This requirement applies both before the work begins and after it is complete. “Decisions about the evaluation's purpose and objectives—including internal versus public use—the range of stakeholders who will have access to details of the work and findings, the design and methods, and the timeline and strategy for releasing findings should be clearly documented before conducting the evaluation,” according to the guidance. “Once evaluations are complete, comprehensive reporting of the findings should be released in a timely manner and provide sufficient detail so that others can review, interpret or replicate/reproduce the work.”
Ethics: All of this will be for naught if the underlying work does not “safeguard the dignity, rights, safety and privacy of participants and other stakeholders and affected entities,” OMB officials wrote. “Evaluations should be equitable, fair, and just, and should take into account cultural and contextual factors that could influence the findings or their use.”
The guidance also provides 10 best practices for evaluation officers to follow to bolster evidence-based work across their agency. Those include:
- Building and maintaining evaluation capacity.
- Making effective use of expert consultations.
- Establishing, implementing and widely disseminating an agency evaluation policy.
- Pre-specifying evaluation design and methods.
- Engaging key stakeholders meaningfully.
- Strategically disseminating the plan.
- Taking steps to ensure ethical treatment of participants.
- Fostering and stewarding data management for evaluation.
- Making evaluation data available for secondary uses.
- Establishing and upholding policies and procedures to protect independence and objectivity.
The policy document goes into detail on each of the 10 practices, though the guidance is meant to be flexible enough to accommodate differences among agencies and missions, as well as changes over time.
“While these standards and practices will assist in establishing a more formal structure for federal evaluation, they should not be used to introduce administrative rigidity and complexity, which may detract from innovation in developing and maintaining agencies' evaluation capacity,” the guidance states. “The standards and practices should be implemented with recognition of the distinct circumstances and capacities of each agency.”