For decades, many in the democratic world thought of the internet as an inherently “free and open” place, one where information flows unrestricted.[1] Cyberspace, the popular refrain goes, does not respect national borders.
But that idealized vision of the global internet is increasingly clashing with reality, as many countries exert control over internet architecture within their borders.[2] While such control is often thought of as confined to autocracies like China and Russia, questions of content regulation — prompted by online harms like child pornography and copyright violations, Russian disinformation, and terrorist propaganda — are cropping up with greater frequency in democratic nations. In the U.S. and elsewhere, the question now lingers: should governments regulate online content, or intervene in some way to force private entities to regulate the content on their platforms? The issue is too complex to capture fully in one article, but a few key points should center this debate in the United States.
First, it’s important to lay out the broader landscape of online content regulation as it stands today, because contrary to popular belief, many democracies already have some regulations on internet content. The United States, through the Children’s Internet Protection Act, conditions certain federal funding on schools filtering out obscene or harmful online material.[3] Section 230 of the U.S. Communications Decency Act exempts online platforms from the liability borne by traditional publishers, though disputes persist over whether that exemption should remain in place.[4] Countries from Canada to Israel have rules against pirating copyrighted content over the internet. South Korea has implemented internet filtering mechanisms to block online gambling.[5]
Generally speaking, clearly articulated and evenly enforced rules on online content such as child pornography and intellectual property violations are widely accepted in democratic countries. They’re often seen as serving the public interest and, depending on the execution, as in line with democratic principles. Further, in nations with the rule of law and with laws that more or less reflect the will of the people, there is also general acceptance of taking laws established offline and enforcing them online, as with South Korea’s online gambling block.
That said, countries are looking to pass content laws in new areas. Content regulation proposals are now expanding to restrict the creation and distribution of everything from harmful content writ large (e.g., self-harm videos) to terrorist content to deliberately misleading or outright fabricated news stories.
Australia, for instance, recently passed a law on online terrorist content in the wake of the massacre in Christchurch, New Zealand, which left 50 dead and was live-streamed on social media.[6] The United Kingdom, to use another example, recently put forth a proposal to regulate online harms, such as restricting the sharing of violent content. In both cases, critics expressed concerns about over-censorship on the part of companies.[7] And in Australia’s case, citizens and tech watchdog groups worry that the government, in the wake of a national security crisis, is engaging in all-too-familiar knee-jerk legislative overreach.[8]
Following evidence of Russian interference in the 2016 U.S. presidential election through social media bots and disinformation[9] — and again in the 2018 U.S. midterms[10] — American policymakers and the public are calling on the federal government to regulate online content to prevent the spread of fake news. Terrorist content and other harms, too, have contributed to public concern in the U.S. over the negative impacts of free content flows. Nonetheless, even for those who want strong protection of the American electoral system and strong mitigation of Russian influence operations, notable risks attend the actual implementation of online content regulation.
For one, the United States obviously has the First Amendment. This constitutional protection of free speech and free assembly introduces a number of complexities when it comes to regulating online content. Citizens may share Russian disinformation online, yes, but is that not their right? How do you differentiate one person’s extremist speech from another person’s revolutionary content? What about sharing nudity? In 2016, for instance, Facebook came under fire for censoring a famous Vietnam War-era photo of a naked Vietnamese girl running from a napalm attack, before it reversed its decision.[11]
In a similar vein, some already criticize private social media companies for enforcing their own regulations on online content in what many see as public spheres (e.g., Twitter and Facebook). Strengthening private regulation by enabling a company to self-determine and censor what it deems political fake news may be undesirable from a First Amendment perspective. Section 230 of the Communications Decency Act, in exempting online platforms from traditional publisher liability, was arguably meant to encourage this kind of platform self-management.[12]
Firms like Facebook and Twitter already struggle to quickly remove potentially offensive content, and they face criticism for what they do and do not remove. The threat of government punishment may only encourage over-censorship by online platforms seeking to avoid any chance of a regulatory violation. Similarly, requiring deletion of certain content within narrow time frames may only encourage poor, rushed content reviews.[13]
Requiring companies to build out legal and technical capabilities for content filtering at scale also means constructing systems that can be expanded, repurposed, or redirected in the future. Shifts in political winds, or calls for information control in the wake of a national security crisis, could produce broad proposals that capitalize on existing legal regimes and technical architectures to implement even tighter, or more arbitrary, forms of online content regulation. This is one way digital speech differs from speech in physical space: once built, content regulation systems are far easier to scale up. And in the United States, which has already seen its fair share of overly sweeping national security legislation, the desire to block Russian fake news and other online content must be balanced against the protection of civil liberties.
Contrary to longstanding narratives in certain policy circles, there are many forms of online content that democracies should carefully regulate; the creation and distribution of child pornography and terrorist recruiting materials, for example, are rightly criminalized and restricted. The harms that can occur on and through the internet continue to grow in volume and severity.
Yet the risk of government overreach and other negative consequences of content regulation is great, underscoring that this is not just about election interference or ISIS propaganda on the internet, but about whether and how the U.S. Constitution can be effectively applied to this 21st-century problem. The United States has built its online brand on free flows of information; how can we manage the discourse space when that content violates rights or undermines democracy? The answer isn’t a straightforward one, but these points should help center the debate.
Justin Sherman is a Cybersecurity Policy Fellow at New America.
[1] https://www.newamerica.org/cybersecurity-initiative/reports/idealized-internet-vs-internet-realities/
[2] https://www.newamerica.org/cybersecurity-initiative/reports/digital-deciders/
[3] https://www.fcc.gov/consumers/guides/childrens-internet-protection-act
[4] https://slate.com/technology/2019/02/cda-section-230-trump-congress.html
[5] https://www.newamerica.org/cybersecurity-initiative/c2b/c2b-log/analysis-south-koreas-sni-monitoring/
[6] https://www.aljazeera.com/news/2019/03/australia-plans-tough-social-media-law-christchurch-attacks-190330095345474.html
[7] https://www.worldpoliticsreview.com/articles/27766/with-new-laws-to-filter-online-content-will-the-internet-remain-free-and-open
[8] https://www.lawfareblog.com/australias-new-social-media-law-mess
[9] https://www.npr.org/2017/11/05/562058208/how-russia-weaponized-social-media-with-social-bots
[10] https://arxiv.org/pdf/1902.00043.pdf
[11] https://www.theguardian.com/technology/2016/sep/09/facebook-reinstates-napalm-girl-photo
[12] https://slate.com/technology/2019/02/cda-section-230-trump-congress.html
[13] https://globalnetworkinitiative.org/gni-statement-draft-eu-regulation-terrorist-content/