
How Well Did Twitter, Facebook, and YouTube Handle Election Misinformation?

11/10/20 | Ellen P. Goodman | Karen Kornbluh


In the early hours of the Wednesday after Election Day, as President Donald Trump inaccurately claimed victory in several states and leveled charges that his opponents were “trying to steal the election,” Twitter took the kind of action against disinformation that many had been urging for years. It labeled and obscured tweets, prevented retweets and likes, and stopped recommending false content. Facebook also applied labels to similar posts and shut down a “Stop the Steal” Facebook group organized around armed opposition to made-up voter fraud that had started accumulating new members at the unprecedented rate of 242 per minute.

There has been plenty of misinformation before and after the election. Facebook posts falsely asserting that thousands of dead Pennsylvanians were voting reached up to 11.3 million people, and Spanish-language disinformation may have played a substantial role in Florida results. At the same time, the platforms did adopt and enforce election integrity procedures, showing they could at least sometimes put out disinformation flares before they blazed out of control. How did those procedures work out? Let’s walk through them.

Platform interventions fall into three groups of risk-reduction tools that we and others have been urging: adding “friction” to content spread, boosting the “signal” of authoritative news sources, and enforcing platform policies announced in advance. Each of the major social media platforms—Facebook, YouTube, and Twitter—deployed these tools better than ever, but still not consistently, quickly, or transparently enough.

By adding friction and slowing transmission, the platforms can move toward a media environment optimized for quality. To varying degrees, the platforms experimented with frictive responses to the spread of misinformation, including labels, click-through prompts to promote reflection, limits on sharing, circuit breakers to cut viral transmission (something Facebook reportedly decided to try in August around COVID-19 misinformation), and algorithmic demotion.

Labels got quite a workout over the past week. Twitter and Facebook had pre-committed to label posts with premature declarations of victory. Facebook included links to the Bipartisan Policy Center, without actually contradicting the posts. It often appended the same links to President-elect Joe Biden’s truthful posts. Using the same label on truths, half-truths, and lies is what media critic Jay Rosen calls the “view from nowhere.” It’s a bothsidesism in labeling that saps the intervention of any meaning.

[Screenshot: Joe Biden post that reads "I ask people to stay calm. The process is working. The count is being completed," with a Facebook label underneath that says votes are being counted.]

Facebook was slightly more direct, though still not confrontational, with false statements about the electoral process.

[Screenshot: Trump post that says “ANY VOTE THAT CAME IN AFTER ELECTION DAY WILL NOT BE COUNTED!” with a Facebook label underneath that says “Differences between final results and initial vote counts are due to it taking several days after polls closed to ensure all votes are counted.”]

Twitter took a more aggressive approach to labeling and limiting distribution. Depending on the claim, it appended a corrective label at the bottom of a tweet.

[Screenshot: Trump tweet claiming victory in Pennsylvania, Georgia, and North Carolina, with a Twitter label underneath that says “Official sources may not have called the race when this was Tweeted.”]

Twitter wrapped other falsehoods in a warning label—a more frictive solution that requires the user to dig deeper to see the content.

[Screenshot: Trump tweet hidden by a Twitter label that says “Some or all of the content shared in this Tweet is disputed and might be misleading about an election or other civic process.”]

YouTube, like Facebook, was fairly noncommittal in its labels, often appending a “See the latest on Google” link for more information.

[Screenshot: CNBC election broadcast on YouTube, with a label under the video that says "Results may not be final. See the latest on Google."]

Before the election, Twitter changed its defaults—for example, prompting users to quote-tweet with commentary instead of simply retweeting. The nudge to quote and comment could in theory promote what Daniel Kahneman calls System 2 thinking and replace excited rage-tweeting with considered reflection. Twitter took this bit of friction one step further with special policies for misinformation coming from influential accounts (including U.S. political figures and U.S.-based accounts with more than 100,000 followers). For those, it turned off the ability to reply or retweet and made the default quote-tweet function stickier. And when people attempted to retweet content labeled as misleading, Twitter pointed them to credible information. The problem with setting a follower threshold for more stringent action is that influence does not necessarily track follower counts. There was no label on a One America News anchor’s tweet sharing the network’s YouTube video announcing Trump’s “win” because the anchor had fewer than 100,000 followers; her post was retweeted by OAN’s account with more than 1 million followers.
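
To make that threshold problem concrete, here is a minimal Python sketch of a follower-based friction rule of the sort described above. The function, the field names, and the anchor's exact follower count are hypothetical illustrations, not Twitter's actual code or data; the point is that the check looks only at the original author, so amplification by a much larger retweeter never enters the decision.

```python
# Hypothetical sketch of a follower-threshold friction rule (not Twitter's code).
INFLUENCE_THRESHOLD = 100_000  # threshold reported for U.S.-based accounts

def friction_for(post):
    """Return the extra friction applied to a post labeled as misleading."""
    if not post["labeled_misleading"]:
        return []
    if post["author_followers"] >= INFLUENCE_THRESHOLD or post["author_is_political_figure"]:
        # Stricter handling for influential accounts.
        return ["disable_replies", "disable_retweet", "force_quote_tweet"]
    # Below the threshold, only the label applies -- even if the post is later
    # retweeted by an account with millions of followers.
    return ["label_only"]

# Illustrative example mirroring the OAN case described above (follower count invented):
anchor_post = {
    "labeled_misleading": True,
    "author_followers": 60_000,           # anchor is under the threshold
    "author_is_political_figure": False,
}
print(friction_for(anchor_post))  # ['label_only'] -- the 1M-follower retweet is never checked
```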

Neither Facebook nor YouTube took steps as aggressive as Twitter’s to slow the circulation of misinformation, although after the election Facebook said it would move to reduce the circulation of misinformation and is adopting new frictive policies for groups with histories of misinformation.

The platforms also experimented with limiting sharing. Facebook capped at five the number of chats to which a person can forward a Messenger message (following an approach that successfully reduced viral spread on WhatsApp) and suspended recommendations for political and social issue groups. Also to limit reach, Twitter turned off recommendations in the timeline, so users see only posts and retweets from accounts they follow.
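
As an illustration of that kind of fan-out cap, here is a minimal sketch of a five-chat forwarding limit. The API is hypothetical; the idea is simply that capping how many chats a single message can be forwarded to bounds the branching factor of any viral cascade.

```python
# Hypothetical sketch of a forwarding cap, not Messenger's actual API.
MAX_FORWARD_CHATS = 5

def forward_message(message_id, target_chats):
    """Forward a message, refusing fan-outs larger than the cap."""
    if len(target_chats) > MAX_FORWARD_CHATS:
        raise ValueError(
            f"Can forward to at most {MAX_FORWARD_CHATS} chats at once "
            f"(requested {len(target_chats)})."
        )
    return [("queued", chat) for chat in target_chats]

print(forward_message("msg-123", ["chat-a", "chat-b", "chat-c"]))
```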

Both Twitter and Facebook tried out the circuit breaker concept we have recommended. What Facebook calls its viral content review system flags fast-moving content for review. It would be good to get some data on this system’s effectiveness. We know of only one case where the circuit breaker was tripped and Facebook limited circulation while an article was under review: the New York Post Hunter Biden story, which was spreading fast in violation of policies about hacked personal information. All the platforms say they demote misinformation in feeds, but demotion is difficult to verify and, at least for Facebook, seemingly of little utility given that the top-performing posts during election week bore misinformation warning labels. The most popular post the day after the election was the president’s false claim of fraud.
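
For readers wondering what a viral circuit breaker might look like mechanically, here is a rough sketch: count shares in a sliding window and, when the rate crosses a threshold, pause further amplification until a reviewer clears the content. The class, threshold, and window below are illustrative assumptions, not Facebook's or Twitter's actual system.

```python
# Illustrative circuit-breaker sketch; thresholds and window are assumed values.
from collections import deque
import time

SHARE_VELOCITY_THRESHOLD = 1_000   # shares per window that trip the breaker (assumed)
WINDOW_SECONDS = 60                # sliding window length (assumed)

class ViralCircuitBreaker:
    def __init__(self):
        self.share_times = deque()
        self.paused_for_review = False

    def record_share(self, now=None):
        """Record one share; return False once amplification should pause for review."""
        now = time.time() if now is None else now
        self.share_times.append(now)
        # Drop shares that have fallen out of the sliding window.
        while self.share_times and now - self.share_times[0] > WINDOW_SECONDS:
            self.share_times.popleft()
        if len(self.share_times) > SHARE_VELOCITY_THRESHOLD:
            self.paused_for_review = True  # breaker trips: stop amplification, queue review
        return not self.paused_for_review
```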

In the months leading up to the election, in addition to updating content moderation policies, all three major social media platforms announced changes in advertising. Twitter announced it would no longer run political ads at all, while Google announced it would not allow microtargeting and would temporarily suspend election-related ads after polls closed. Facebook instead said it would stop accepting new political ads on Oct. 20, a policy that crashed into campaigns trying to tee up ads before the blackout period, and follow Google in not allowing political ads immediately after the election.

YouTube’s content moderation approach was itself centered on advertising. It prohibited content encouraging others to interfere with democratic processes, but its policy dealing with false declarations of victory or fraud was limited to removing ads from videos with false information, demonetizing but not de-platforming them. As a result, problematic videos remained up, accumulating views. Kevin Roose of the New York Times found a YouTube livestream promoting voter fraud claims with 3.5 million views. The company did remove livestreams with fake election results, but not before one stream had 26,000 viewers. The video on OAN claiming Trump won racked up hundreds of thousands of views. YouTube said its recommendation systems limited the spread of misinformation, but we lack the data to know how much those limits achieved. What independent research has found, however, is that between Nov. 3 and Nov. 5, YouTube channels with a minimum of 10,000 subscribers were getting nearly 100 million views of videos keyed to “election fraud.”

As we argue in our Digital New Deal initiative, it’s not enough for platforms to reduce noise—they have to signal-boost credible information as well. Scholars Yael Eisenstat and Daniel Kreiss issued a paper before the election urging platforms to “flood the zone” and not expect citizens to click on links. All the platforms appear to have done this to some extent. Facebook launched its Voting Information Center to push information sourced from state election officials and other nonpartisan civic organizations. This information supposedly followed all posts that mentioned voting or elections. Earlier in the fall, Twitter had established its election hub, which relied on users to pull authoritative information about the election. But during the election itself, Twitter started to push public service announcements—context in Trends and authoritative local information about voting and vote tabulations.

[Screenshot: Twitter timeline of election results stories, with a note at the top from Twitter explaining how and why the platform is "Showing Context on Trends."]

YouTube also worked to boost signal to some degree. It elevated authoritative sources, including news publishers like CNN and Fox News (news division), for election-related news and information queries in search results and “watch next” panels. The availability of clearly authoritative local information about voting and elections made the platforms’ task easier; it becomes harder in other areas of civic integrity where authority is more contested.

Overall, Facebook, YouTube, and Twitter owned up to their civic responsibilities heading into a national election and conducted a natural experiment in reducing the risks of disinformation. What we can tell so far is that although they were far more assertive than ever in addressing disinformation, the sludge still dominated the debate. Between Nov. 4 and Nov. 6, nine of the top 10 Facebook posts (all by Donald Trump and evangelist Franklin Graham) bore warning labels for misinformation. Twitter’s actions to limit sharing by prominent accounts did seem to slow the circulation of misinformation, according to the Election Integrity Partnership; certainly, none of the top Twitter posts from the same time period had a warning label. But false claims gathered steam as they crossed networks. Most famously, the claim that Sharpie pens vitiated Arizona ballots circulated widely even after being debunked by state officials.

If we look at the circulation of content from information laundering sites, the conclusion is the same. We compared interactions with content from 10 outlets that repeatedly publish false election-related content (such as OAN) and from nine high-credibility outlets (such as AP News) during election week with interactions from the months prior, and we found that false-content outlets had slightly higher average Facebook interactions (likes, comments, shares) per article during election week than during the prior three months. In fact, they increased interactions per article by 6 percent. High-credibility outlets had slightly lower interactions per article during election week, with a 6 percent drop. On Twitter, the patterns were directionally similar, and even more pronounced. False-content outlets saw a 68 percent jump in shares (original posts or retweets) per article from verified accounts. High-credibility outlets saw a 5 percent decrease. For YouTube, both false-content and high-credibility outlets increased their likes per video, but false-content outlets increased theirs by 98 percent, while high-credibility ones increased theirs by 16 percent.
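
The comparison above boils down to a percent change in average interactions per article between two periods. Here is a minimal sketch of that calculation, using made-up numbers in place of the real interaction data:

```python
# Illustrative calculation only; the figures below are invented, not the study's data.
def pct_change(baseline_avg, election_week_avg):
    """Percent change in average interactions per article between two periods."""
    return (election_week_avg - baseline_avg) / baseline_avg * 100

baseline = sum([900, 1100, 1000]) / 3          # avg interactions per article, prior three months
election_week = sum([1040, 1080, 1060]) / 3    # avg interactions per article, election week
print(f"{pct_change(baseline, election_week):+.0f}%")  # prints +6%, the pattern seen for false-content outlets
```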

It’s still too soon to truly assess the platforms’ performance, and we could do more if they gave researchers more access to data. But these early results point to the need for platforms to level up. Twitter, Facebook, and YouTube—in that order—seem to have accepted the fact that they are media companies with responsibilities and experimented with promising injections of friction, signal boost, and policies. Only they know what fires they put out. But the fires that were allowed to spread—and continue to blaze—suggest more needs to be done as a pandemic continues and civic trust sags after a bruising election.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
