Fighting Deepfakes Will Require More Than Technology
When technology is part of the problem, can you count on it to be part of the solution?
Deepfakes—fake but incredibly realistic photos, videos, or audio created with machine learning—are receiving increased attention in the national security arena due to their ability to manipulate or deceive. Pervasive in discussions of countering deepfake technology is the idea that machine learning can be used to detect fraudulent content made with these systems. Machine learning is an important countermeasure, but the notion that “good” technology is necessarily the best and only solution needed to counter “bad” technology is a fantasy.
To counter the threats posed by this emerging technology, strategies must be broad, nationally coordinated, and built on solutions that are technical, human, and societal writ large. There is perhaps no better demonstration of this idea than the evolution of how government, industry, and individuals respond to cyberattacks.
In the early days of the internet, security was left to the individual. If you got hacked, that was largely your problem; make a stronger password and move on. Soon, though, there were enough users and corporations online—both increasingly dealing with sensitive information—and enough security threats that mitigating cybersecurity risk became a cottage industry filled with products and services offered by private corporations. Firewalls and antivirus systems were marketed to consumers, IT managers, and executives alike in easy-to-download, easy-to-use packages. Install our software, the logic went, and our technical countermeasures will protect you from viruses and other cyber threats. This approach, too, proved inadequate.
Human beings, as the oft-recited line goes, are frequently the “weakest link” in cybersecurity. Your company, government agency, or university may have spent millions on sophisticated intrusion detection systems, but it only takes one employee clicking on a phishing link to expose your digital assets to a remote attacker. Perhaps some cybersecurity experts identified the need for a broad, societal cyber defense strategy decades ago, but it’s only now, late in the game, that many policymakers are slowly waking up to the realization that fancy software is still not enough.
The lesson here is clear: Leaving everything up to the end user didn’t work. Neither did leaving everything to technology. Indeed, it takes a combination of user education and technical countermeasures to effectively combat these threats—in addition to societal “resilience” mechanisms like cyber “hygiene” education programs, public-private partnerships to share threat intelligence, bootcamps and workshops to educate key players, exercises to simulate crisis events, and strong deterrent measures taken by government leadership.
Despite these lessons from countering cybersecurity threats, the same fallacy—that technology will be enough—is all too prevalent in discussions of countering the potentially harmful effects of deepfake photo, video, and audio. It makes sense in context: Social media giants largely left the identification of fake news up to end users. That changed, suddenly, when end users realized they were being manipulated for profit, to distort public discourse, and to sway election results. In response, online platforms ramped up their technical efforts, building algorithms to “contextualize” news with other sources on the issue. They changed their rules around fake accounts and disinformation. They hired more staff to deal with the issue. Again, bad press, user dissatisfaction, and government pressure played a role in these changes.
Yet, with the threat of deepfakes on the horizon, policymakers (and social media giants, unwilling to admit how their platforms might inherently propagate fake news) once again talk too much of “good” AI that can fight “bad” AI, of digital systems to fight emerging fake news threats, of building algorithms to identify and stop content before it goes viral.
While important, the technology is only a piece of the puzzle—one that remains scarily incomplete. The United States has no broad, national strategy on how to counter fake news, including deepfake videos. The President of the United States, in defiance of American intelligence agencies and all other evidence before him, dismissed election interference until it appeared electoral outcomes might not be to his satisfaction in the 2018 midterms (and then, “election hacking” became a useful preemptive claim—although, to be clear, he still hasn’t budged on the 2016 election). The list goes on.
In a report for the Carnegie Endowment for International Peace, Tim Maurer and Erik Brattberg describe ways in which Europe deals with fake news and election interference: educating political candidates and parties; hosting media-government discussions to encourage precautionary measures on tech platforms; and issuing statements to educate the general public about fake news, among other strategies. These all build societal resilience against fake news.
Schools, colleges, and universities should be teaching students how to assess the trustworthiness of online information and how to process it. Journalists, in addition to social media platforms and government entities, should incorporate technical countermeasures (e.g., deepfake detection software) into their day-to-day work. The federal government should launch a broad, national awareness campaign about fake news and deepfakes. Government entities should hold dialogues with media entities and platform owners. And, as Maurer and Brattberg recommended with regard to fake news and election interference, more attention is needed to explain deepfake technology and its serious implications. Additionally, key United States officials should publicly and explicitly address election interference and take deterrent measures against the use of deepfake technology for malicious ends.
The United States must learn from the trajectory of countering cybersecurity threats: to address the threat posed by deepfake photo, video, and audio, it will take technical defenses, human defenses, and societal defenses writ large.
This is by no means a comprehensive list, and perhaps a national commission should be appointed to identify other solutions that could and should be part of a societal counter to deepfake technology. But that’s exactly the point. If policymakers think we can solely rely on 1s and 0s to fight the effects of 1s and 0s, we’re in for some serious trouble.
Justin Sherman is studying computer science and political science at Duke University. He is a fellow at the Duke Center on Law & Technology and a Cybersecurity Policy Fellow at New America. The views expressed here are his own.