White House leads public-private commitment to curb AI-based sexually abusive material


Leading private sector companies signed voluntary agreements with the White House to train and monitor their AI models to avoid such misuse.

The White House and members of the private sector have come together again to secure a new set of voluntary commitments to halt the proliferation of sexually abusive content aided by artificial intelligence.

In commitments announced Thursday, Adobe, Anthropic, Cohere, Common Crawl, Microsoft, and OpenAI promised to prevent the use of their generative AI systems to create sexually abusive content. The commitments include responsibly sourcing the datasets used to train models, incorporating feedback loops and stress-testing to keep AI systems from responding to sexually abusive prompts, and removing explicit content from AI training datasets. 

“Today’s commitments represent a step forward across industry to reduce the risk that AI tools will generate abusive images,” the release said. “They are part of a broader ecosystem of private sector, academic, and civil society organizations’ efforts to identify and reduce the harms of non-consensual intimate images and child sexual abuse material.”

The Biden administration has actively worked with private sector leaders to prevent widespread misuse of generative AI systems, debuting an earlier series of voluntary commitments in the summer of 2023 to promote the “Safe by Design” AI posture the administration is championing.

Leadership at the White House’s Office of Science and Technology Policy previously called for action to prevent the use of advanced AI systems to create sexually abusive material, in line with President Joe Biden’s October 2023 executive order on AI and the 2022 Violence Against Women Act Reauthorization.

Payment companies have also agreed to join the fight to stop sexually abusive synthetic content. Cash App and Square both agreed to monitor and curb payments related to producing or publishing image-based sexual abuse, as well as expand participation in initiatives to detect sextortion schemes. 

Google additionally agreed to begin adjusting its search engine results to combat non-consensual images, and Microsoft, GitHub, and Meta have all made individual commitments to remove content that contains sexually abusive material, as well as strengthen their internal reporting systems. 

Civil society groups, which signed commitments to help public and private sector entities monitor the results of such efforts, include the Center for Democracy and Technology, the Cyber Civil Rights Initiative, and the National Network to End Domestic Violence.

“Through a multi-stakeholder working group, they will continue to identify interventions to prevent and mitigate the harms caused by the creation, spread, and monetization of image-based sexual abuse,” the White House said.