FTC Issues Warning About Generative AI Misuse
The agency said firms should not use the new tools to harm or deceive consumers, nor cut the staff in charge of artificial intelligence ethics.
Amid growing concerns about artificial intelligence and generative AI, the Federal Trade Commission urged companies building or deploying new AI tools to retain personnel dedicated to AI ethics and responsibility, according to a blog post published Monday.
Without those personnel, companies adopting generative AI tools could ultimately deploy harmful technologies. The FTC warned that if companies remove or fire such workers and the agency “comes calling and you want to convince us that you adequately assessed risks and mitigated harms, these reductions might not be a good look.”
The Washington Post reported in March that major companies including Microsoft, Twitch and Twitter had laid off their AI ethics personnel.
According to the blog post, the agency is focused on firms’ use of AI and generative AI and the impact those tools can have on consumers. Of particular concern to the FTC is the use of AI or generative AI tools to persuade people and change their behavior. The agency noted it has previously focused on AI deception, such as exaggerated or unsubstantiated claims and the use of generative AI for fraud, as well as on AI tools that can be biased or discriminatory.
The FTC noted that businesses are starting to use generative AI tools in ways that can influence people’s beliefs, emotions and behaviors, such as chatbots that provide information, advice, support and companionship. According to the FTC, “many of these chatbots are effectively built to persuade and are designed to answer queries in confident language even when those answers are fictional.” The agency stated that people may be more likely to trust machines, believing them to be impartial or neutral, even though bias can be built into their design.
A primary concern for the agency is companies using generative AI to steer people unfairly or deceptively into harmful decisions about, for example, finances, health, education, housing and employment. The FTC added that such harmful uses may or may not be deliberate; the concern is the same either way.
For example, the FTC warned that companies using generative AI to customize ads “should know that design elements that trick people into making harmful choices are a common element in FTC cases, such as recent actions relating to financial offers, in-game purchases and attempts to cancel services. Manipulation can be a deceptive or unfair practice when it causes people to take actions contrary to their intended goals.” The FTC added that placing ads within generative AI results can also be deceptive; it must be clear what is an ad and what is a search result.
The agency offered some guidance for companies using generative AI: risk assessments and mitigations should factor in likely downstream uses; staff and contractors should be trained and monitored; and companies should monitor and address the actual use and impact of any tools they deploy.
The FTC also warned consumers: “for people interacting with a chatbot or other AI-generated content, mind Prince’s warning from 1999: ‘It’s cool to use the computer. Don’t let the computer use you.’”