What to Watch for When America’s Tech Giants Testify on Russian Hacking Today
Until now, the companies have been tight-lipped about political manipulation on their platforms.
This week Twitter, Facebook, and Google will testify publicly before the US Congress about how the Russian government manipulated public opinion during the 2016 US election. Today (Oct. 31) they will be grilled by the Senate Judiciary subcommittee on crime and terrorism, and tomorrow (Nov. 1) they will testify in front of the Senate and House Intelligence Committees.
Until now, the companies have been tight-lipped about political manipulation on their platforms, but it’s all about to come out. As a preview of what is to come, it was revealed ahead of today’s hearings that over 126 million Facebook users in the US were exposed to Russian-created political content.
In the past, both Facebook and Twitter have tried to focus on the particular mechanisms Russia used to manipulate public opinion, such as paid advertisements. But they haven’t clearly shown how these efforts are linked to other issues, such as automated political bots, government-built event pages on Facebook, or manufactured trends on Twitter. Nor have they revealed whether Russia was the only government launching influence campaigns during the election.
Why should we care? People need to know the truth about political manipulation over social media so they can accurately evaluate whether these platforms warrant trust as news sources. Failing to provide transparent reports on these issues is almost tantamount to treating computational propaganda as a permissible mode of political communication. And that is something to be fearful of.
Here are the key elements missing from their story that you should watch for this week.
The backstory
Several weeks ago, Facebook and Twitter revealed that Kremlin-linked groups bought and placed advertisements on the social media platforms during the 2016 presidential contest. In an apparent attempt to push voters away from Hillary Clinton, many of the ads supported Donald Trump, Jill Stein, or Bernie Sanders. These ads were used to target specific subpopulations in the US, including racial minorities.
Twitter has said it deleted over 200 accounts it believed to be related to thousands of Russian government ads, but previous research suggests that thousands of other covert accounts are likely being operated by Russia’s “Internet Research Agency.” According to several teams of university researchers, the reported accounts are negligible compared with the deluge of computational propaganda being spread on these platforms.
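How do outside researchers arrive at such estimates without access to Twitter’s internal data? One common approach in academic work is a simple frequency heuristic: accounts that post about political topics at a sustained rate no human could plausibly match by hand get flagged as likely automated. The sketch below illustrates that idea; the 50-posts-per-day threshold, the data structure, and the sample accounts are illustrative assumptions for this example, not any research team’s actual pipeline.

```python
from dataclasses import dataclass

# Illustrative threshold: some academic studies have treated accounts that
# post 50+ times per day on tracked political hashtags as "highly automated."
HIGH_AUTOMATION_THRESHOLD = 50

@dataclass
class AccountActivity:
    handle: str
    political_posts: int   # posts matching tracked election hashtags
    days_observed: int

def likely_automated(account: AccountActivity) -> bool:
    """Flag accounts whose sustained posting rate exceeds what a person
    could plausibly produce by hand."""
    rate = account.political_posts / max(account.days_observed, 1)
    return rate >= HIGH_AUTOMATION_THRESHOLD

# Hypothetical sample data for the sketch.
accounts = [
    AccountActivity("@heavy_poster", political_posts=1200, days_observed=10),
    AccountActivity("@casual_user", political_posts=40, days_observed=10),
]
print([a.handle for a in accounts if likely_automated(a)])  # ['@heavy_poster']
```

The obvious limitation of this approach is also the point: the platforms can see signals that outside researchers cannot, such as login patterns, IP addresses, and device fingerprints. That is why external estimates remain rough, and why disclosure from the companies matters.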
However, social media ads are only one part of the problem.
Both platforms, but especially Twitter, have a problematic history in dealing with politically oriented social-bot accounts. Neither site has been particularly forthcoming about the extent to which countries such as Russia have used this technology to push their narratives, nor has either openly discussed the myriad additional ways in which its platform can be used to influence public opinion.
For example, evidence suggests that Russia used Facebook’s event and group pages to provoke both online and offline political altercations. Similarly, Twitter’s private-messaging function was used to coordinate semi-organic, non-automated gaming of political conversations to drive up the visibility of far-right trends and searches.
The companies’ efforts to obfuscate these interventions have been met with frustration by politicians and third-party experts. Senator Mark Warner called Twitter’s private attempts to share intel with Congress “deeply disappointing” and “inadequate on almost every level.” Zeynep Tufekci, an associate professor of information science at the University of North Carolina, directed similar criticism at Facebook founder Mark Zuckerberg’s recent public defense of his company. She said that “the unfortunate truth is that by design…Facebook has made it easy for it to be weaponized to spread misinformation and fraudulent content.”
And it’s not just Russia doing the spreading, and not just during the 2016 US election. Russian efforts represent only a fraction of a gargantuan problem. Online manipulation has occurred during a number of critical elections and security crises around the world over the last five years. For example, research has exposed orchestrated bot campaigns in the lead-up to the Brexit referendum and the French presidential election, among numerous other influence operations detailed by our collaborators at the Oxford Internet Institute. Various digital influence operations during the 2016 presidential election have also been linked to political groups inside the US, including efforts to disenfranchise voters and to spread misinformation targeting the opposition.
What they’re not telling us
Social media companies maintain that they cannot reveal their methods for detecting and removing computational propaganda because doing so could compromise their platforms. This stance asks the public to take a great deal on trust, and that trust is wearing thin.
In the past, it has often taken attentive journalists to expose online manipulation. Without this reporting, users might never have known that they were being manipulated. And when these companies have released statements about bots, the public is not told how the accounts were detected or why they were removed. For example, Twitter’s latest dismissal of external research, coupled with its general lack of transparency, amounts to an attempt to retain ownership over information that potentially influenced the US election. Its recent response to the complexity of its bot problem suggests a lack of openness, a lack of understanding, or both.
Furthermore, the companies to date have only really discussed disinformation or political harassment spread from person to person (say, from government-paid troll to citizen user) or from computer to person (say, from software-driven political bot to citizen user). But a potentially more worrying and less understood kind of manipulative communication is also occurring on the sites: computer to computer.
This kind of influence occurs on the back end of the sites, manipulating the trends that Twitter prioritizes on users’ pages and the content that Facebook ranks in its News Feed. These tactics, which deliver distorted or harmful information under the “unbiased” guise of platform recommendation, are perhaps even more damning for the companies, because this information is actively presented to users as important by the companies themselves.
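To see why this back-end layer is so attractive to manipulators, consider a deliberately simplified model of trend ranking: score each hashtag by how many mentions it received in a recent time window. The sketch below is a toy under that assumption; the window size, scoring rule, and sample numbers are invented for illustration and do not describe any platform’s real algorithm.

```python
import time
from collections import Counter, deque

# Toy trending model: score each hashtag by its mention count inside a
# recent time window. Window size and scoring rule are invented for this
# sketch, not taken from any platform.
WINDOW_SECONDS = 3600

class TrendTracker:
    def __init__(self):
        self.mentions = {}  # hashtag -> deque of timestamps (oldest first)

    def record(self, hashtag, timestamp):
        # In a live stream, mentions arrive in time order, keeping each
        # deque sorted from oldest to newest.
        self.mentions.setdefault(hashtag, deque()).append(timestamp)

    def trending(self, now, top_n=10):
        scores = Counter()
        for tag, times in self.mentions.items():
            while times and now - times[0] > WINDOW_SECONDS:
                times.popleft()  # drop mentions outside the window
            scores[tag] = len(times)  # score = recent velocity
        return scores.most_common(top_n)

tracker = TrendTracker()
now = time.time()
for i in range(500):  # a coordinated burst from automated accounts
    tracker.record("#manufactured", now - 500 + i)
for i in range(120):  # a slower, organic conversation
    tracker.record("#organic", now - 3600 + i * 30)
print(tracker.trending(now))  # the burst wins: [('#manufactured', 500), ('#organic', 120)]
```

Because the score is driven purely by short-term volume, a burst of posts from a few hundred coordinated accounts outranks a larger but slower organic conversation, with no individual post looking suspicious. Real trending systems are far more sophisticated, but any ranking built on aggregate behavior inherits this class of vulnerability: whoever controls enough accounts controls the input.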
Blaming the algorithms
When social media companies have been challenged about this type of infrastructural subversion, their first response has been to place responsibility on the algorithms themselves. When false information about the Las Vegas shooter’s identity from online message board 4chan made its way onto Google’s Top Stories and Facebook’s Crisis Response page, both companies responded by blaming their algorithms. “We are working to fix the issue that allowed this to happen,” a Facebook representative told CNN.
Krishna Bharat, one of the engineers who helped create Google News, recently called the product “shameful and irresponsible.” Bharat argued that it is “time to go back to the tried and tested practice of showing vetted sources.” Google News, he maintains, “doesn’t feature 4chan for a reason.”
These companies are implying that their “errors” were made by a machine. But while algorithms are complex and can behave in unpredictable ways, they are nevertheless designed and made by people; they are the secret sauce that the companies use to make their product special and to generate millions upon millions of dollars a month.
Twitter has also offloaded ethical responsibility onto its algorithms and platform. Its memo about bot regulation states that “Twitter’s open and real-time nature is a powerful antidote to spreading all types of false information.” But exactly the opposite is the case: Twitter’s openness and its real-time features, like the trending algorithm, create structural vulnerabilities that allow manipulative automation to thrive. And if Twitter is going to claim that its “real-time” interface protects against the spread of misinformation, where is the proof?
This question is exactly what we should keep in mind during Twitter, Facebook, and Google’s testimony.
What to look out for
During the testimony, we need to ask how much useful and factual information these companies actually share, and when information is withheld, we need to ask why.
Twitter, Facebook, and Google have worked to disclaim responsibility for media manipulation: in a June 2017 statement, Twitter wrote, “We, as a company, should not be the arbiter of truth.” Ironically, dismissing scientific research and withholding vital security information does precisely that: it arbitrates truth based on what the company does and does not want outsiders to know.
Social media companies are failing to demonstrate a strong commitment to the democratic principles that made them possible. As global cyberpolitics grows more unstable, the lack of cooperation from major social media platforms adds to the turbulence. While these companies profit from marketing their platforms as tools for free speech and global community, behind the scenes they are obscuring how that connectivity works and concealing knowledge about our political system and its vulnerabilities.
The public should watch with open eyes as these companies frame their responsibility for—and potential ownership over—the public impact of political algorithms.