White House to Hold ‘Frank’ Discussion With Top AI CEOs Thursday
Top administration officials, including Vice President Kamala Harris, will attend the meeting.
The White House will host four tech CEOs Thursday to discuss artificial intelligence—specifically, responsible innovation and mitigating risks the technology poses—according to an invitation obtained by Nextgov.
The dialogue comes as AI-related headlines reach a near fever pitch, ranging from recent Federal Trade Commission warnings on AI use to lawmakers introducing legislation that would bar AI from launching nuclear weapons. Thursday’s meeting also follows a request for information from the White House Office of Science and Technology Policy seeking feedback on companies’ use of AI to surveil employees and monitor productivity.
“We aim to have a frank discussion of the risks we each see in current and near-term AI development, actions to mitigate those risks, and other ways we can work together to ensure the American people benefit from advances in AI while being protected from its harms,” Arati Prabhakar, director of the White House Office of Science and Technology Policy, said in the invitation.
The invited CEOs are Sam Altman of OpenAI, Dario Amodei of Anthropic, Satya Nadella of Microsoft and Sundar Pichai of Google. Biden administration participants include Vice President Kamala Harris, Commerce Secretary Gina Raimondo, Chief of Staff Jeff Zients, Deputy Chief of Staff Bruce Reed, National Security Advisor Jake Sullivan, Domestic Policy Advisor Susan Rice, National Economic Council Director Lael Brainard, White House Counsel Stuart Delery and Prabhakar.
According to a White House official, the meeting is part of a “broader ongoing effort to engage with advocates, companies, researchers, civil society organizations and communities across the United States on critical AI issues.” The outreach follows two major Biden administration policy efforts regarding AI. Late last year, OSTP unveiled the AI Bill of Rights, which outlined five principles to be considered when developing AI technologies. The principles, though only recommendations and not legally enforceable, include safe and effective systems, data privacy, algorithmic discrimination protections, user notice and explanation, and human alternatives.
In January, the National Institute of Standards and Technology unveiled its AI Risk Management Framework, which serves as voluntary guidance for organizations building ethical, trustworthy AI systems.
“President Biden has been clear that in order to seize the opportunities AI presents, we must first mitigate its risks,” the White House official said. “This means both supporting responsible innovation that improves lives and serves the public good, and also ensuring appropriate safeguards to protect the American people, our society, national security and economy. He has also been clear that companies have a fundamental responsibility to make sure their products are safe before they are deployed or made public.”