NTIA Asks Public How To Measure Safe AI Systems
The National Telecommunications and Information Administration is seeking answers on how to deepen its sociotechnical approach to deploying trustworthy AI/ML systems in its operations.
The federal agency tasked with shaping telecommunications policy and technology deployment is seeking feedback on its use of artificial intelligence systems, with the goal of ensuring the safe and ethical use of machine learning technologies.
In a request for comment, the National Telecommunications and Information Administration, an office housed within the U.S. Department of Commerce, is seeking public input on using AI systems in business operations while mitigating potential harms.
The NTIA’s request contributes to a growing federal push to measure and ensure the trustworthiness of AI systems used in both the public and private sectors. With the help of public feedback, the agency is particularly looking to develop effective auditing measures to gauge how safely a given AI system is functioning.
“Efforts to advance trustworthy AI are core to the work of the Department of Commerce,” the notice reads. “NTIA’s statutory authority, its role in advancing sound Internet, privacy, and digital equity policies, and its experience leading stakeholder engagement processes align with advancing sound policies for trustworthy AI generally and AI accountability policies in particular.”
Referencing recently released guidance from the National Institute of Standards and Technology, NTIA officials aim to receive feedback from individuals in the academic, technical, business, and legal sectors. Among the observations NTIA officials hope to see in submitted comments are a summary of the broad AI landscape and an account of the gaps and barriers to designing and deploying trustworthy AI.
The request leans heavily on the sociotechnical approach favored by federal regulators for AI and ML best practices. Specific questions ask commenters about the purpose of AI accountability mechanisms, namely audits and certifications; the goals a trustworthy AI system should strive to achieve; and the risk of systemic bias when handling sensitive human data, among several others.
“Our initiative will help build an ecosystem of AI audits, assessments and other mechanisms to help assure businesses and the public that AI systems can be trusted,” said NTIA Assistant Secretary of Communications Alan Davidson. “This, in turn, will feed into the broader Commerce Department and Biden administration work on AI.”
Davidson expressed optimism about incorporating AI into NTIA systems and working efforts, but noted that guardrails ensuring accountability for advanced AI technologies are the best defense against inadvertent, harmful outcomes.
“Good guardrails, implemented carefully, can actually promote innovation,” he said.