Lawmakers, Experts, Industry Highlight Need for Ethics After Defense Commission Releases Final AI Report
Experts lauded the report but said ethical issues around AI remain a concern.
Michèle Flournoy, former undersecretary of defense for policy, called the National Security Commission on Artificial Intelligence’s final report “probably the most important commission report since the 9/11 Commission” days after commissioners testified on the report before Congress.
“I think they nailed it in terms of analyzing the importance of the United States stepping up to compete in AI both for the commercial, economic applications and what that means for our competitiveness economically around the world, but also because of the potential military applications,” Flournoy said Tuesday during a Center for a New American Security webinar.
Other experts, industry representatives and lawmakers also applauded the effort as well as some of the report’s key recommendations, which were approved March 1. Commissioners testified on the report before Congress March 12. But critical questions around legal limits on government use of AI were left unanswered, according to a privacy expert who has followed the commission’s work since it was created.
Two industry experts—Booz Allen’s Steve Escaravage and Deloitte’s Ed Van Buren—emphasized the importance of the report in email statements shared with Nextgov.
“While the United States has enjoyed technology leadership in other areas, we’ll quickly fall behind if action is not taken to heed the call plainly made in [the] final report from the National Security Commission on Artificial Intelligence,” Escaravage, who leads Booz Allen’s analytics practice and AI services business, said. “I was most struck by the report’s sobering call that our military may lose ‘military-technical superiority’ in a relatively short time if we do not alter course.”
Van Buren, who is executive director of the newly established Deloitte Artificial Intelligence Institute for Government, said it is critical to commit resources and talent to AI.
“The recommendations in this report, including the establishment of a National Technology Foundation, serve as [a] guiding light for how to propel US innovation around leading-edge technologies,” Van Buren said.
Experts also made sure to voice support for strong ethical AI standards: Flournoy said the U.S. and its allies must ensure ethical principles and international standards are established, while Escaravage called for an “intense focus on an ethical approach.”
John Davisson, senior counsel with the Electronic Privacy Information Center, which successfully sued NSCAI to enforce compliance with Freedom of Information Act and Federal Advisory Committee Act transparency obligations, highlighted several of the commission’s recommendations as important safeguards in this area. In an interview with Nextgov, Davisson said the recommendation to mandate AI risk assessments, similar to privacy impact assessments, is one of the most important reforms in the report.
Davisson also agreed with the call to update privacy impact assessment standards, citing what he called the “wildly divergent” compliance with those requirements across federal agencies, and with the recommendation to strengthen the Privacy and Civil Liberties Oversight Board.
But Davisson called the way the commission handled the issues of limits on government use of AI and human rights disappointing.

Rep. James Langevin, D-R.I., chairman of the House Armed Services Subcommittee on Cyber, Innovative Technologies, and Information Systems, highlighted ethics during his opening statement at Friday’s hearing.
“Above all, the commission has crucial recommendations related to building and deploying AI in an ethical manner that is respectful of human rights,” Langevin said at the hearing, which was held jointly with House Oversight and Reform’s national security subcommittee. “Indeed, that last category is what sets our nation apart.”
But on a commission heavily featuring industry voices from companies like Oracle, Microsoft, Google, and Amazon Web Services, Davisson said more human rights voices and representatives of civil society should have been included. While private industry deserves a voice, NSCAI’s report goes beyond aligning Defense Department needs with the defense industrial base, he said.
“It's an issue of regulating AI use that will affect everyone, and so it's important for there to be representatives of the public arguing for strong limitations on AI use on a body like that,” Davisson said.
It’s not that industry and technologists don’t support the need for transparency, fairness, reliability and accuracy, Davisson said; it’s that there is a lot of disagreement about what those terms mean.
NSCAI also deferred taking a stance on a critical issue for establishing a framework for how the government uses AI, Davisson said: Rather than recommending legal limits on government use of AI, the commission called for the president or Congress to mandate the establishment of a separate task force to ensure “AI and associated data in U.S. government operations comport with U.S. law and values,” according to the report.
The commission recommends the task force assess the privacy, civil rights and civil liberties implications of AI and emerging technologies, and recommend legislation and regulations covering, among other areas, when the government should publish AI risk and privacy assessments, standards for the federal government’s use of biometric technologies, and government procurement of AI products.
“We certainly argue and have argued many times that binding restrictions on AI use are critical and that any AI framework that doesn’t include them really isn’t worth the paper it’s written on,” Davisson said. “So it’s in one sense good that they have recommended this task force; in another sense, this was really part of the AI Commission’s mandate. In many respects, they’re just sort of taking it down to the next advisory body, which was disappointing.”