USAID wants help in crafting a playbook for global AI uses
The federal government’s latest artificial intelligence guidance will draw on recommendations from earlier risk management documents.
A cohort of U.S. government agencies is planning to release a playbook to help guide the use of artificial intelligence, based on best practices related to the ethical, socio-technical approach touted by the Biden administration.
According to a request for information published in the Federal Register, officials at the U.S. Agency for International Development, working in conjunction with the State Department and the National Institute of Standards and Technology, plan to publish an AI in Global Development Playbook. The playbook is intended to adopt elements of NIST’s AI Risk Management Framework, which hinges on a secure-by-design approach to AI and machine learning systems.
The AI in Global Development Playbook, called for in President Joe Biden’s landmark executive order on AI, is specifically meant to apply the AI Risk Management Framework’s principles to the larger international community.
“As part of this work, the Secretary of State and the Administrator of the United States Agency for International Development shall draw on lessons learned from programmatic uses of AI in global development,” the order reads.
The playbook will specifically set out to provide guidance for organizations and governments that intend to build or deploy AI software in their operations. The RFI acknowledges that while these environments are diverse, there are broader characteristics that can be distilled into context-agnostic guidance.
“Addressing the risks presented by AI technologies is essential to fully harnessing their benefits,” the RFI reads. “Understanding these risks across a range of geographic and cultural contexts requires the expertise of local communities, the private sector, civil society, governments, and other stakeholders.”
Responses to the RFI are open until March 1, 2024. Officials are looking for research, empirical data, studies and other reference materials relevant to writing the new playbook. Materials for inclusion can focus on subject areas like opportunities, risks and barriers to AI usage; enabling an ecosystem for responsible AI software; and policies that could help protect people from potential AI risks.