
Platform Regulation Should Focus on Transparency, Not Content

Dec. 2, 2020


Despite efforts by digital platforms to curb the tsunami of disinformation surrounding U.S. elections, cyberspace remains awash in conspiracy theories and democracy-damaging disinformation. Meanwhile, terrorist attacks in France and Austria have spurred European efforts to clamp down on hatred and incitement to violence online. On Dec. 15, the European Commission is slated to release a draft set of comprehensive platform regulations. These European rules could become the standard for the global net—leaving the U.S. behind.

We have seen this before. American policymakers sat on the sidelines while the EU enacted its General Data Protection Regulation, which has become the de facto global standard. If America wants to help shape the rules of the road governing online discourse, it must step up and engage now.

What if, instead of pursuing conflicting paths, Europeans and Americans collaborated on a good governance framework for online platforms, adaptable for different legal systems and societal norms?

Right now, President-elect Joe Biden is setting his administration’s policy agenda and selecting personnel. Platform governance undoubtedly is on the table. By expressly endorsing trans-Atlantic collaboration on a digital framework, Biden would underscore his commitment to the trans-Atlantic alliance and ensure that American voices are heard. Similarly, Europeans could signal that they would welcome American engagement in developing the rules of the road for digital networks.

Government regulation of online harms is a daunting challenge. Not all toxic content is illegal, and lawmakers must tread carefully to avoid infringing on free expression and due process. And while the platforms enjoy their own free speech rights to set and enforce standards for their online communities, they also must respect widely recognized free expression exclusions for illegal content such as child pornography and incitement to violence.

There are two regulatory methods to curtail online harms that might constrain freedom of expression. The first is eliminating the platform’s safe harbor from liability for user content, which is exactly what multiple proposals currently before Congress would do. Section 230 of the Communications Decency Act protects platforms from lawsuits over third-party posts, and it has become a target of both the left and the right. The bills take polar opposite stands on the problem and the solution, whipsawing platforms between demands from the left that they remove blatantly false or manipulated speech and allegations from the right that conservative voices are deliberately censored. (On Tuesday evening, President Trump tweeted that he would veto the National Defense Authorization Act if Congress doesn’t repeal Section 230.) If online companies become liable for content that users post, platforms could well choose to eliminate popular services featuring user-generated content.

The second is requiring platforms to remove specific kinds of content immediately or face stiff penalties, as with Germany’s NetzDG, which incentivizes platforms to delete questionable yet legal content rather than risk fines. This approach deputizes companies to adjudicate the legality of content without affording users judicial redress. (Indeed, the French Constitutional Council struck down a similar law for violating free expression.) Ominously, such laws also provide cover to authoritarian regimes to expand the categories of speech subject to censorship. As a Chinese academic once proudly intoned, “there is no hate speech on our internet.”

But if the U.S. and Europe work together, it is possible to tackle hate speech and disinformation without trampling on free expression: mandate transparency, backed by accountability, instead of regulating content. Require social media companies to disclose their content moderation rules and procedures, including how their algorithms influence what users see, and enforce those disclosures through robust oversight. (Such a regulatory approach would complement, not replace, platform competition and privacy laws.)

Internet platforms are a black box. Mandating transparency would increase public pressure on platforms to improve users’ online experience. Researchers and regulators would gain access to essential information, resulting in better rules and oversight based on evidence, not assumptions. And online companies would be prodded to examine problems like algorithmic bias that they might prefer to ignore.

Legislation on both sides of the Atlantic should accomplish three things:

First, it should require platforms to publish clear terms of service and community standards and explain how they enforce them, how they resolve complaints of erroneous content moderation, and how users can appeal decisions. Posting moderation activity reports with standardized definitions and formats would facilitate cross-platform analysis.

Second, it should mandate that platforms disclose the impact (but not the source code) of their algorithms, including ones that flag items for review and ones that push content into a user’s newsfeed or video playlist. Algorithms can perpetuate biases inherent in the databases used to train them and amplify extremist content that drives engagement and ad views. Algorithms that fuel the virality of toxic material would warrant immediate attention.

And finally, it should authorize a government agency or an independent board made up of industry and public members to oversee accountability. That body should draft and enforce disclosure rules, set procedures for researchers to access data, and establish protocols for independent compliance audits.

Naturally, the details of these laws would vary by jurisdiction. But with this approach, Americans and Europeans can align their platform transparency and accountability obligations, including research and auditing protocols, adjusting for different legal systems and societal norms. Such cooperation would improve platform oversight, facilitate independent research, minimize costly and conflicting rules that fracture the global internet and—vitally—protect citizens’ rights.

By collaborating on a good platform governance framework, trans-Atlantic governments, internet companies, and civil society can fight toxic speech and disinformation—and offer democracies a vibrant alternative to authoritarian models of internet control.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
