Major tech companies pledge efforts to monitor AI-based content ahead of 2024 election

U.S. Sen. Mark Warner (D-VA) leaves the U.S. Capitol on July 11, 2024 in Washington, DC. Warner received commitments of AI monitoring practices from several major tech companies ahead of the 2024 election. Tierney L. Cross/Getty Images

Senate Intel Chairman Mark Warner, D-Va., received responses from 19 tech companies pledging actions like content moderation, tamper-resistant watermarking and strict licensing practices.

Leading tech companies have pledged to implement various practices to protect against the influence of artificial intelligence-generated content ahead of election season.

A total of 19 leading tech firms sent response letters to a call for replies that Sen. Mark Warner, D-Va., issued in May. Companies including X, Google, Anthropic, Meta, Microsoft and McAfee provided details about their internal commitments to monitoring their online platforms for AI-augmented content related to the 2024 presidential election. The respondents are among the 24 companies Warner contacted as signatories of the AI elections accord established in February at the Munich Security Conference.

“I appreciate the thoughtful engagement from the signatories of the Munich Tech Accord,” Warner said in a press release. “Their responses indicated promising avenues for collaboration, information-sharing, and standards development, but also illuminated areas for significant improvement.”

The content of each company’s letter varied. Leadership at X, formerly Twitter, said the platform’s internal Safety teams continue to monitor the authenticity of content published on the site.

“In times of elections and at all times, we believe that it is critical to maintain the authenticity of the conversation on X,” Wifredo Fernández, X’s head of U.S. and Canada global government affairs, wrote. “Our Safety teams remain alert to any attempt to manipulate the platform by bad actors and networks. We have a robust policy in place to prevent platform spam and manipulation, and we routinely take down accounts engaged in this type of behavior.”

Meta, the parent company of Facebook and Instagram, pointed to its new approaches to identifying and labeling AI-generated content, as well as its requirement that third-party advertisers disclose when they use AI to generate imagery.

“We remain focused on providing people reliable election information while combating misinformation across languages,” wrote Kevin Martin, Meta’s vice president of North America policy. “We know this work is bigger than any one company and will require a huge effort across industry, government, and civil society. We will continue to work collaboratively with others to develop common standards and guardrails.”

Martin also noted that the company employs strict licensing regimes for its proprietary Llama 2 and Llama 3 large language models, notably requiring that Meta retain auditing authority.

Michael Beckerman, TikTok’s vice president and head of public policy for the Americas, also sent a letter noting that the company does not allow paid political ads on the video platform and that TikTok focuses on actively removing “harmful” content, including AI-generated material. The platform will also label AI-generated content that contains realistic imagery.

Companies building AI and large language models, like Anthropic and OpenAI, also issued letters to Warner. Anthropic Chief Executive Officer Dario Amodei said the company not only warns end users against misusing its AI models, such as the generative interface Claude, but is also taking stronger steps to route user queries to factual information.

“In the United States, we are implementing an approach where we use our classifier and rules engine to identify election-related queries and redirect users to accurate, up-to-date authoritative voting information,” Amodei wrote. “While generative AI systems have a broad range of positive uses, our own research has shown that they can still be prone to hallucinations, where they produce incorrect information in response to some prompts.”

Because Anthropic’s models are not retrained frequently enough to verify every election-related answer, the company is guiding users away from queries where errors or hallucinations would be “unacceptable” and toward official websites.
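Anthropic’s letter does not detail how that routing works, but the general pattern it describes (a classifier or rules engine that intercepts election-related queries and returns a pointer to official resources instead of a model-generated answer) can be sketched. The minimal Python below is an illustrative assumption, not Anthropic’s implementation; the keyword patterns, the redirect URL and every function name are invented for this example.

```python
import re

# Toy stand-in for the "classifier and rules engine" described in the letter.
# A production system would use a trained classifier; keyword rules keep the sketch small.
ELECTION_PATTERNS = [
    r"\bvot(e|ing|er)\b",
    r"\bpolling (place|location|station)\b",
    r"\bballot\b",
    r"\belection day\b",
    r"\bregister(ed)? to vote\b",
]

# Hypothetical redirect target; Anthropic's letter does not name one.
AUTHORITATIVE_SOURCE = "https://www.vote.org"

def is_election_query(query: str) -> bool:
    """Flag queries that match any election-related pattern."""
    q = query.lower()
    return any(re.search(pattern, q) for pattern in ELECTION_PATTERNS)

def answer_with_model(query: str) -> str:
    """Placeholder for the normal generative-model response path."""
    return f"[model response to: {query}]"

def route(query: str) -> str:
    """Redirect election queries to authoritative information; answer everything else."""
    if is_election_query(query):
        return (f"For accurate, up-to-date voting information, "
                f"please visit {AUTHORITATIVE_SOURCE}.")
    return answer_with_model(query)

if __name__ == "__main__":
    print(route("Where is my polling place on election day?"))  # redirected
    print(route("Explain how photosynthesis works."))           # answered normally
```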

Similarly, OpenAI is redirecting users to “authoritative sources of information” when it detects a potentially sensitive query, and said it is developing new tools to label AI-generated content produced by its models.

“OpenAI is also developing a detection image classifier — a tool that uses artificial intelligence to assess the likelihood that an image was created using DALL·E 3,” wrote Anna Makanju, the vice president of global affairs at OpenAI, who also pointed to the company’s work on tamper-resistant watermarking.
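Makanju’s letter likewise stops short of implementation specifics. As a rough sketch of the interface such a detection classifier might expose (a likelihood score plus a labeling decision), consider the Python below; the threshold, the stubbed scoring function and the class names are assumptions made for illustration and do not reflect OpenAI’s actual tooling.

```python
from dataclasses import dataclass

@dataclass
class DetectionResult:
    likelihood_ai_generated: float  # classifier score in [0.0, 1.0]
    label: str                      # human-readable verdict

def _score_with_model(image_bytes: bytes) -> float:
    """Stub for a trained classifier; returns a fixed score so the sketch runs."""
    return 0.93

def classify_image(image_bytes: bytes, threshold: float = 0.8) -> DetectionResult:
    """Assess the likelihood that an image came from a generative model."""
    score = _score_with_model(image_bytes)
    label = "likely AI-generated" if score >= threshold else "no AI generation detected"
    return DetectionResult(likelihood_ai_generated=score, label=label)

if __name__ == "__main__":
    result = classify_image(b"\x89PNG...")  # stand-in image bytes
    print(f"{result.label} (score={result.likelihood_ai_generated:.2f})")
```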

Despite the detail in the responses, Warner said he was “disappointed” by how little the letters said about specific company reporting structures for countering digital impersonation of election and political officials.

“Lastly — and perhaps most relevant ahead of the 2024 Presidential Election — I am deeply concerned by the lack of robust and standardized information-sharing mechanisms within the ecosystem,” Warner said in the press release. “With the election less than 100 days away, we must prioritize real action and robust communication to systematically catalogue harmful AI-generated content.”

A common issue mentioned by responding companies was that the AI landscape is changing and innovating rapidly, making swift responses to fraudulent content, online impersonation and other misinformation particularly challenging.

“The history of AI development has been characterized by rapid advancements and novel applications,” Amodei wrote on behalf of Anthropic. “We expect that 2024 will bring forth new uses of AI systems, which is why we are proactively building methods to identify and monitor novel uses of our systems as they emerge. We will communicate openly and frankly about what we discover.”

As the 2024 U.S. presidential election looms, multiple experts and agencies, including the Cybersecurity and Infrastructure Security Agency, have cited synthetic AI-generated content as a notable threat to election security operations.

Testifying at a Senate hearing in May, CISA Director Jen Easterly said that, against a complex threat landscape, her agency is partnering with private tech companies to establish misinformation mitigation tactics.

Speaking to reporters at the Black Hat cybersecurity conference in Las Vegas on Wednesday, Easterly said she had not yet reviewed the companies’ responses, but that a meeting with them is expected in the coming weeks.

Nextgov/FCW reporter David DiMolfetta contributed to this report.