Silicon Valley’s Internet of the Future is Just as Censored as China’s Firewall
Will the internet’s town square be like Tiananmen Square?
The internet is often known as our “digital public square.” But like any shared space, it is never neutral: It must have politics. As we stand at a crossroads of contention around internet governance, these politics are being called into question and reshaped.
Will the internet’s town square be like Tiananmen Square, locked down behind barriers, checkpoints, and under heavy surveillance, where any hint of public dissent gets swept away? Will the internet be more like Times Square, a mashup of private and public spaces competing for your attention with advertisements, technically open but with heavily armed police officers and surveillance cameras? Or maybe it will be more like the wide-open space of Washington DC’s National Mall, negotiated and controlled by government and private groups with competing political agendas.
We’ve long known that the public square of the internet is more than just virtual space: It’s where we go to hear the town crier tell us what the politicians won’t, where we buy our groceries, and where we socialize with other people from our communities. Whoever controls the town square also controls how we experience day-to-day reality—and that power dynamic is shifting.
The politics of the internet’s shared spaces are increasingly subject to negotiation among states, users, and institutions. When it comes to the dominant players building the infrastructure of our digital spaces, we’re currently living between a highly censored internet (the Chinese model) and an ostensibly open and privatized one (the Silicon Valley model). While many tend to think of the Chinese internet as the boogeyman, tracking its citizens and controlling what information they can access, both models are proving censorious to speech and expression in different ways.
China’s model of the internet combines private platforms, public ownership of the wires, and heavy regulations on speech and data. Years of censorship, state-sponsored messaging and distraction efforts, and the development of a national internet ecosystem have enabled the Chinese government to foster many of the benefits of the internet—economic growth, access to information, increased productivity, and innovation—while maintaining more granular control of what circulates, from words to images.
In contrast, Silicon Valley’s model of the internet is by and large painted more positively. Its combination of proprietary and open-source platforms and components has given us many upsides, such as helping foster movements that challenge dictators and giving under-resourced communities access to information and job networks. Under the hood, it’s often a different story: Many technology companies have murky relationships with the NSA, and others recently lobbied to end net neutrality rules.
But it turns out this view of the open internet has a range of challenges that can be just as censorious to speech. The megaphone that has enabled marginalized and oppressed voices to organize, spread information, and draw attention to their issues is the same one that lets government propagandists and hate groups do likewise. This has had devastating consequences, ranging from public health crises to decreased trust in journalism to the facilitation of ethnic cleansing. In this way, the so-called open internet is more like the open ocean: exciting and expansive until we remember the endless dangers that lie below its glistening surface.
So what does the future of the internet hold? There’s no denying that something is broken—but how do we fix it? What would an internet that truly supports democracies look like?
Censorship is a complex issue. Protecting the right to free expression implies protecting expression of beliefs that we may find distasteful or morally offensive. At the same time, an internet rife with hate, misinformation, violence, and propaganda is itself a censorious one, limiting speech by vulnerable and targeted groups through intimidation and confusion. So-called neutral platforms must reckon with the considerable resources and motivations of state-sponsored actors, hate groups, and other agents of disinformation to take over these spaces.
In the open ocean of the internet, many groups will seek safe harbor in private chat apps and smaller networks, not to mention blockchain-based and federated networks. Free from the above-mentioned dangers of the open internet, users will form smaller communities that don’t interact with the larger whole or engage in the shared discourse that democracies need.
At the international level, we are seeing a steady fracturing of the once-imagined global village into global factions, each with its own idea of what speech on the internet should look like. If things continue apace, our global internet will continue to balkanize, with each country developing its own technical and legal infrastructure to enforce norms on speech and information. We are already seeing countries like the UK and Papua New Guinea propose alternative national social media platforms, and the Chinese government’s systems of control are now expanding globally, from fiber optic cables to software platforms. While regional efforts like the GDPR and Brazil’s General Data Protection Law reflect legal infrastructures designed to enforce local norms, some regulations under consideration in Europe may apply worldwide to platforms like Google. As Stanford lawyer Daphne Keller has observed, this could set a precedent for nations to police content on platforms based on local norms.
But there is another future: Moments of crisis are also moments of opportunity to create new standards. Instead of fracturing, the world might once again find a way to work together to develop a common set of norms. After the global devastation of World War II, the newly formed United Nations made systematic efforts to develop international standards around human rights and free expression. This wasn’t easy: It took years of negotiation and consensus building, and it did not end human rights abuses. But it allowed an international community to have concrete conversations about violations and discuss possible modes of recourse.
We already have models for global accountability and standards bodies that might help us toward this goal. The Global Network Initiative is a multi-stakeholder organization that engages platforms on key human rights issues, and the Internet Governance Forum provides a venue for multi-stakeholder discussion of digital rights. Press councils the world over negotiate both international norms and regional frameworks to protect media freedom and advance media standards. Stanford fellow Sean McDonald has argued for civic trusts as a model for governance that builds on existing legal frameworks. At an initiative I co-founded, the Credibility Coalition, we are working to develop standards for assessing content credibility on the web. This includes working with the W3C Credible Web Community Group (which I co-chair), the Trust Project, and Reporters Without Borders’ Journalism Trust Initiative.
Private companies will adapt, too. Under pressure from their users, platforms such as Facebook and Google may also create their own new systems of rights and responsibilities. This needs to be an international effort, though, not just one based in Silicon Valley.
A recent report to the United Nations Human Rights Council argued for grounding those standards in existing international human rights law and highlighted the need to involve international social media companies such as Russia’s VKontakte and China’s Tencent in developing content moderation standards. In response to pressure from staff and the public, Google has released principles for developing AI that aim to protect human rights (though at present they fall short in key ways). In a recent tweet thread, UN special rapporteur on freedom of opinion and expression David Kaye outlined the potential of this approach, and Twitter CEO Jack Dorsey gave it a boost. Facebook CEO Mark Zuckerberg has also discussed the possibility of new governance structures for Facebook, which might include systems of transparency and an independent appeals process. How exactly platforms follow through remains an open question, but the pressure continues from many different directions.
“lots of sharp observations/reporting in this @karaswisher piece, but i have a few quick reactions —> Rules Won’t Save Twitter. Values Will. https://t.co/g7qL6CfVum” — David Kaye (@davidakaye), August 9, 2018
In the end, while technical solutions can help us navigate the often competing and contradictory realities of expression on the internet, the challenges we face are deeply sociopolitical. Around the world, we’ll see states, platforms, and individuals continue to negotiate truth and reality online through both language and power.
We have at least two futures ahead of us. In one, our world will become more rigid, creating experiences that hew closely to regional and national boundaries. In another, the global community will band together to define, agree upon, and implement new norms for internet speech and information. The internet was never truly global, and this inflection point is revealing just how fractured it really is. Neither future is inevitable, but if we don’t act now, those cracks will turn into chasms, dividing our digital society for good.