How AI will help defeat AI-powered identity fraud
COMMENTARY | Government agencies tasked with combating public benefits scams have been slow to adopt AI as a fraud prevention method.
Fraudsters can now fabricate or steal a digital identity for less than the cost of having take-out delivered to your house.
Realistic deepfakes can be created for free, in seconds, and combined with stolen personally identifiable information bought on the dark web for as little as $10. These fakes are used to commit digital identity fraud, tricking individuals, organizations, and governments into relinquishing millions of dollars with virtually no hope of recovery.
In the last year, GenAI has exploded onto the scene as a convenient, low-cost tool for committing fraud. It is being used to create fake driver's licenses, clone voices from just 10 seconds of audio, automate the stitching together of synthetic identities, and scale email phishing scams.
Yet, those tasked with combating fraud have been slower to adopt this powerful technology.
The reality is that we are in a race, and with each passing moment fraudsters pull further ahead. We are now at a crossroads: do we continue to fall behind, or do we embrace the pace of change and adopt AI to go faster?
I liken the current moment to a defining time in the history of transportation. The very first cars were unreliable and prohibitively expensive. As they evolved, they overtook horses as the most reliable and convenient form of transportation, changing the world as we know it.
Fast forward to the present day. AI has quickly advanced and ushered in a new era of fraud. Failure to leverage AI to fight back will leave us in the dust of fraudsters, like a Clydesdale chasing a Ferrari.
In the public sector, government agencies have experienced a barrage of AI-driven fraud. This is because they are responsible for distributing billions of dollars and they often rely on outdated technology to do so.
Now more than ever, we must employ the right technologies to protect people and safeguard the critical services that government provides.
The public sector has taken some promising steps toward using AI to combat fraud. For example, the Treasury Department announced the recovery of over $375 million through a new AI-enhanced fraud detection process. But we need to do more, and quickly.
There are three important steps policymakers and government agency leaders must take to fast-track the use of AI to fight fraud.
Policymakers must encourage agencies to adopt AI technologies that can better protect the American public. The White House Office of Management and Budget recently put a stake in the ground for the use and governance of AI technologies. This is a laudable step toward innovating the delivery of government services, but it must be met with a sense of urgency to achieve alignment in execution. Otherwise, we may see a chilling effect on agency AI adoption.
Lawmakers also have a role to play. Congress should move to fully fund the National Institute of Standards and Technology as it launches the U.S. AI Safety Institute and develops standards and measurement frameworks for AI in government. Knowing what is working and what is not will empower us to be as nimble as our opponents.
We also need to prioritize a robust digital identity verification system equipped with AI as part of our defenses. In this new era of fraud, basic identification checks aren't enough. We need a complete picture of who someone claims to be online. New approaches that leverage AI and machine learning analyze additional data points, such as operating system, IP geolocation, network type, and online behavior, to verify digital identity.
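To make the idea concrete, here is a minimal sketch of how such signals might be combined into a fraud-risk score. The signal names, weights, and threshold below are illustrative assumptions for this sketch, not any agency's or vendor's actual model; a real system would learn its weights from labeled data and use far richer features.

```python
# Illustrative sketch only: combining device and network signals into a
# fraud-risk score. All signal names, weights, and the threshold are
# hypothetical assumptions, not a real verification system's model.
from dataclasses import dataclass

@dataclass
class SessionSignals:
    ip_geo_matches_claimed_address: bool  # IP geolocation vs. stated address
    network_is_anonymizing: bool          # VPN, Tor, or proxy detected
    device_seen_before: bool              # OS/device consistent with prior sessions
    behavior_resembles_automation: bool   # e.g., inhumanly fast form completion

# Hypothetical weights; a production system would learn these from data.
WEIGHTS = {
    "geo_mismatch": 0.35,
    "anonymizing_network": 0.25,
    "new_device": 0.15,
    "automation": 0.25,
}

def risk_score(s: SessionSignals) -> float:
    """Return a 0-1 risk score; higher means more likely fraudulent."""
    score = 0.0
    if not s.ip_geo_matches_claimed_address:
        score += WEIGHTS["geo_mismatch"]
    if s.network_is_anonymizing:
        score += WEIGHTS["anonymizing_network"]
    if not s.device_seen_before:
        score += WEIGHTS["new_device"]
    if s.behavior_resembles_automation:
        score += WEIGHTS["automation"]
    return score

def should_escalate(s: SessionSignals, threshold: float = 0.5) -> bool:
    """Flag the session for manual review rather than auto-denying it."""
    return risk_score(s) >= threshold
```

Note the design choice in the sketch: a high score triggers escalation to a human reviewer, not an automatic denial, since benefits systems must balance fraud prevention against wrongly blocking legitimate applicants.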
The reality is that the fraud landscape we see today will not match the one we must combat tomorrow. It is evolving too quickly for legacy technologies to keep up.
AI can help us win the race against AI-enabled fraud. But only if we choose to use it.