National Security AI Commission at Odds Over U.S. Response to Malign AI
The National Security Commission on Artificial Intelligence voted on third-quarter recommendations in what was likely its last public meeting of 2020.
The National Security Commission on Artificial Intelligence met virtually Thursday to review recommendations for its third-quarter report in what will likely be its last public meeting of 2020.
During the meeting, commissioners voted on recommendations in six areas, from research and development to promoting collaboration with allies on AI. At the end of the meeting, commission staff also presented a special topic that divided members over how the U.S. should respond to malign AI and AI-powered information operations.
Commission staff recommended several provisions to “adopt an offensive approach to counter and compete against malign information,” including the creation of a Malign Information Detection and Analysis Center, or MIDAC, staffed by a team of intelligence analysts.
Some commissioners, such as chairman and former Google chief executive officer Eric Schmidt, offered hearty endorsements of these recommendations. But others were far less enthusiastic, expressing concerns that the commission was out of its depth when it comes to countering AI-enabled information operations.
“I think everything you’re recommending we should do, and it’s probably still not enough,” Schmidt said. “In other words, we’ve got to rethink how we’re going to deal with this because with the broad ability to do deepfakes, and fake texts and so forth, it’s only going to get worse.”
Commissioner Eric Horvitz, Microsoft’s chief scientific officer, echoed this point, adding that artificial intelligence may be expanding the horizons of psychological operations. He called the problem one of the key issues of our time.
The commissioners broadly agreed that AI tools that may be benign in some hands can turn into serious threats in the hands of bad actors, and that the problem is likely to get worse over time. Still, some commissioners argued the issue may be too big for the commission to take on.
Andrew Moore, head of cloud AI at Google, told the group he wasn’t sure the commission should jump in assuming it knew how to attack the problem of malign AI.
“The next bad things that are going to be done are not actually going to be things that we could prepare for,” Moore said.
Commissioner Katharina McFarland, former assistant secretary of Defense for acquisition, recommended the group take a step back. She noted there is no clear definition of malign AI and argued that its characteristics need to be mapped out before the threat can be managed.
In the end, NSCAI voted to move the special topic out of its third-quarter report and to return to the discussion in the next fiscal quarter. There won’t be a report for the fourth quarter; instead, the commissioners will produce a final report, due March 2021, according to the NSCAI website.
The commission provided an interim report on its third-quarter findings and recommendations to Congress last month, in which it focused on workforce initiatives to build up a reservoir of U.S. artificial intelligence talent.