
Big tech needs new, powerful government watchdogs to enforce real transparency
09/23/20
| Ashley Boyd

Here’s an unsettling exercise: Imagine if the food manufacturing industry could abandon all transparency. In this world, the Food and Drug Administration wouldn’t exist to monitor and label food. Essential regulation such as the Federal Food, Drug, and Cosmetic Act and the Food Safety Modernization Act would be absent. And the work of crusaders such as Alice Lakey and Upton Sinclair would have fallen on deaf ears.
Without mandated transparency and the interventions it enables, consumers would shoulder grave consequences: filthy factories, routine food poisoning, never-ending listeria outbreaks, and worse.
Thankfully, this is as absurd as it is frightening: Food manufacturing transparency isn’t voluntary, it’s required in the public interest. But the exercise is useful when considering the transparency—or lack of it—in other industries. Industries such as consumer technology.
Facebook, YouTube, and other platforms can have as great an influence on us as the food we eat. We use these platforms for hours each day, and the information they curate shapes how we live in the world: whether we wear a mask, how we cast our vote, whether we trust the safety of current and future vaccines. But there’s no mandated transparency into how these influential platforms work. Regulators and essential watchdogs such as civil society researchers and journalists have little insight into why one YouTube video is recommended over another, or which demographics a Facebook ad campaign is targeting.
This transparency is absent when we need it most. Amid a pandemic and on the edge of a presidential election, health disinformation and misleading political ads are crowding our feeds. Until regulators and other watchdogs can better understand platforms’ recommendation and advertising systems—in short, their AI—we can’t fully understand what’s wrong with them, never mind start finding solutions. But is this sort of transparency attainable?
Facebook is perhaps the platform that has struggled most with disinformation and other harmful content. Unsurprisingly, it’s also one of the most opaque. What appears in Facebook feeds influences billions. And yet the only people who truly understand the chemistry of the News Feed work for Facebook, and they are disincentivized from revealing information that might call the product’s features and company policies into question. Watchdogs instead must rely on incomplete data or imprecise tools. The problem was crystallized recently when Kevin Roose, a New York Times reporter, tweeted an analysis of what content was trending on Facebook. Facebook’s head of News Feed replied, chastising Roose for using incomplete data. Roose later shared a trenchant summary of the Twitter quarrel: “Most of the pushback I’m getting amounts to ‘your take would be more accurate if it included [secret data I don’t have and can’t get].’”
YouTube is another platform where transparency is necessary but absent. YouTube’s recommendation engine is one of the most influential AIs on the planet: It is responsible for 70% of total viewing time on the site. This AI has also recommended extreme, violent, and hate-filled content to users. Many outlets have reported on the recommendation engine’s history of sending users down hazardous rabbit holes. Mozilla, where I work, revealed similar examples in research we conducted in 2019.
YouTube claims to be addressing this problem. But because YouTube’s recommendation data is locked down, watchdogs can’t confirm whether things are getting better or make suggestions about how to accelerate the pace of improvements. As on Facebook, watchdogs must rely on incomplete data or build their own tools. It’s a highly imperfect solution to a dangerous problem.
It’s clear that transparency is essential. But how do we get there? Regular, comprehensive data sharing from internet platforms won’t happen voluntarily. Platforms are opaque for a reason: While more transparency is in the public interest, it might also disrupt a company’s bottom line. So we need to focus corporate accountability efforts squarely on transparency.
In the U.S., emerging ideas such as the German Marshall Fund’s Digital New Deal would mandate platform data sharing and cross-platform codes of conduct. The Digital New Deal also urges platforms to adopt virality circuit breakers—mechanisms that halt the spread of viral content until it can be scrutinized for falsehoods or hate. “A big part of [disinformation] could be handled with transparency, which is very free-speech friendly,” Karen Kornbluh, a director at the fund and a former Mozilla Fellow, explained to Marketplace. Similarly, the proposed Honest Ads Act would bring to tech platforms some of the same transparency we expect in traditional media, such as political ad disclosures.
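To make the circuit-breaker idea concrete, here is a minimal sketch of how such a mechanism might work. It is purely illustrative: the Digital New Deal does not prescribe an implementation, and every class name, threshold, and window size below is hypothetical. The logic tracks each post’s shares over a sliding window and pauses algorithmic amplification once sharing velocity crosses a threshold, until a reviewer clears the post.

```python
from collections import deque
import time

class ViralityCircuitBreaker:
    """Hypothetical sketch of a virality circuit breaker. If a post's
    share rate exceeds a threshold, algorithmic amplification is paused
    until a human review clears it. Thresholds are illustrative only."""

    def __init__(self, max_shares_per_window=10_000, window_seconds=3600):
        self.max_shares_per_window = max_shares_per_window
        self.window_seconds = window_seconds
        self.share_times = {}   # post_id -> deque of share timestamps
        self.paused = set()     # posts awaiting review

    def record_share(self, post_id, now=None):
        if now is None:
            now = time.time()
        shares = self.share_times.setdefault(post_id, deque())
        shares.append(now)
        # Drop shares that have aged out of the sliding window.
        while shares and now - shares[0] > self.window_seconds:
            shares.popleft()
        # Trip the breaker once share velocity crosses the threshold.
        if len(shares) > self.max_shares_per_window:
            self.paused.add(post_id)

    def may_amplify(self, post_id):
        """Feed and recommendation systems check this before boosting a post."""
        return post_id not in self.paused

    def clear_after_review(self, post_id):
        """A reviewer found no falsehoods or hate; resume amplification."""
        self.paused.discard(post_id)
```

Note that nothing is deleted in this scheme: a post that trips the breaker remains viewable, but the platform stops actively boosting it while it is under review. That pause-rather-than-remove design is what makes the mechanism, in Kornbluh’s words, “very free-speech friendly.”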
There are bright spots elsewhere in the world. In Europe, the EU’s evolving Code of Practice on Disinformation urges more transparency: “A more effective scrutiny of ad placements on platforms’ own services would require a better integration of . . . research activities,” it reads. And cities such as Amsterdam and Helsinki are asking the European Commission to adopt strict AI procurement policies—in other words, to work only with companies and services that are meaningfully transparent. Says Mikko Rusama, chief digital officer of Helsinki: “Without transparency there is no trust. Without trust there is no need for AI.”
In addition to meaningful, industry-wide regulation, consumers can drive action through advocacy. Consumers need to be vocal about what they expect from platforms. If Twitter’s “trending topics” feature is spreading disinformation, consumers should (and did) call on the company to turn it off pending independent examination. And if a platform emerges as more transparent than its competitors—as TikTok increasingly appears to be doing via its Transparency Center—then consumers can sign up for that platform over others. At scale, this turns transparency into a trend, and then a competitive advantage.
Transparency isn’t a panacea, but it’s an integral first step toward understanding what’s wrong with tech platforms and then proposing fixes. When regulators, independent watchdogs, and consumers can better scrutinize what makes these platforms tick, platforms’ AI can be held to a much higher standard—just like the food we eat.