Since the 2016 U.S. presidential election, an enormous amount of attention has been focused on the need to investigate “coordinated inauthentic behavior” online. This type of counterfeit digital behavior includes the creation of fake pages and identities on Facebook as well as the use of social bots (automated social media accounts) to circulate disinformation on a large scale. Policymakers, technologists, journalists, academics, and others have identified disinformation and other forms of propaganda online as major causes for global concern.
The targets of “information operations” or computational propaganda campaigns in the United States are often minority social groups, protected religious groups, and issue-focused voters. During the 2018 U.S. midterm elections, much of the online content directed at these groups was spread over social media by real people living in the United States, rather than by foreign actors or automated bot accounts.
Muslim Americans have been particularly subjected to disinformation and politically motivated attacks over social media. In some circumstances, online hate and spin campaigns have led to offline violence against this community. It is important to know who spreads this anti-Muslim propaganda and how Islamophobic disinformation circulates across and within different types of digital platforms.
Focusing on Gab—a fringe platform perhaps best known for its far-right user base—we tracked and analyzed disinformative and divisive content to better understand the complex dynamics involved in such activities. We found that posts on such fringe platforms often serve as a sign of what is to come in terms of divisive or deceitful content on mainstream social media platforms. (The full results are published in our GMF paper, Incubating Hate: Islamophobia and Gab.)
Fringe Actors and Islamophobic Content
Anti-Muslim conspiracy theories and memes have been widely shared through mainstream platforms like Reddit, Twitter, and Facebook. These platforms have recently established increasingly strict guidelines against white-supremacist speech. Although these policies were a long time coming and remain insufficient, they do hamper the creation and sharing of hate speech on large sites. Smaller, more fringe platforms such as Gab have sprung up to fill the void. They provide a space for those seeking to attack Muslims and other minority groups. Here, disinformation agents and Islamophobic actors can develop and share content with impunity, as well as start to coordinate attacks on vulnerable populations.
This content then often makes it into the mainstream. Some political candidates, pundits, and elected officials re-circulate disinformative, violent, and racist content originating on fringe websites. President Donald Trump has repeatedly retweeted known far-right figures and spread conspiracy theories produced on white-nationalist websites and platforms including Gab. State-run operations, including Russia’s Internet Research Agency, have leveraged this state of affairs, using Islamophobia to further polarize communities in the United States, the United Kingdom, and elsewhere in Europe.
But foreign states and domestic political leaders are not the only actors who have found it advantageous to encourage the spread of “inauthentic” Islamophobic content. Increasingly, other domestic groups are using similar tactics. A recent study showed that the same practices were leveraged by radical evangelical Christians (the Kullberg network) on Facebook. In this case, a network of Facebook pages was repurposed to spread conspiracies regarding Muslim immigrants. According to Harvard’s Shorenstein Center on Media, Politics, and Public Policy, conspiratorial allegations against Muslim candidates circulated on fringe news sites and between right-wing activist networks during the 2018 election season. This type of content also flowed on more mainstream platforms. In one instance, a political ad from Congressman Duncan Hunter’s campaign falsely alleged that his opponent, Ammar Campa-Najjar, had ties to Muslim terrorist groups.
Facebook’s recent decision not to vet political ads from politicians leaves the gates wide open for similar activities in the future. Importantly, though, the proliferation of Islamophobic content is not only happening on platforms like Facebook or in the West. Anti-Muslim disinformation has also been spread on platforms popular among Chinese users such as Weibo and WeChat.
Islamophobic Content on Gab
We used the Pushshift Gab archive to analyze posts made between July and October 2018 (the four months prior to the U.S. midterm elections). Our research shows clear evidence of derogatory and highly inflammatory anti-Muslim content on Gab. Additionally, the Gab posts we examined contained a significant number of links to web outlets known to spread disinformation and hate regarding Islam and Muslims. Our analyses also reveal the crucial role that inter-platform communication—that is, links on Gab to YouTube, for instance—plays in spreading this problematic, often violent, information.
We assembled a list of 123 keywords meant to identify posts about Muslims and Islam—this included 30 derogatory terms garnered from hatebase.org. Most of the terms searched were neutral in nature: for example, “Muslim” or “Sharia.” We found and analyzed 188,764 posts made during the time period that contained these terms.
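To illustrate the filtering step described above, here is a minimal sketch of keyword matching over archived posts. The field names, sample posts, and placeholder terms are hypothetical, not the actual Pushshift schema or study keyword list:

```python
import re

# Hypothetical keyword lists; the study used 123 terms in total,
# including 30 derogatory terms from hatebase.org (omitted here)
NEUTRAL_TERMS = ["muslim", "islam", "sharia"]
DEROGATORY_TERMS = ["slur_placeholder"]  # stand-in, not a real term from the study

# One case-insensitive pattern with word boundaries, so "islam"
# does not match inside unrelated longer words
ALL_TERMS = NEUTRAL_TERMS + DEROGATORY_TERMS
PATTERN = re.compile(
    r"\b(" + "|".join(map(re.escape, ALL_TERMS)) + r")\b", re.IGNORECASE
)

def matched_terms(body):
    """Return the set of tracked terms appearing in a post body, lowercased."""
    return {m.group(0).lower() for m in PATTERN.finditer(body)}

# Toy posts standing in for the archived Gab data
posts = [
    {"id": 1, "body": "Discussion about Islam and politics"},
    {"id": 2, "body": "Nothing relevant here"},
]
hits = [p for p in posts if matched_terms(p["body"])]
```

A real pipeline would stream the archive rather than hold posts in memory, but the matching logic is the same.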
Our major findings included the following:
Four of the ten most-cited domains in our Gab dataset of Muslim-related content were anti-Muslim hate groups or sites that have documented records of disseminating disinformation. Two of these domains belonged to active anti-Muslim groups designated as hate groups by the Southern Poverty Law Center. Also among the top ten were hyper-partisan “news” sites including Breitbart, the Daily Mail, and Voice of Europe.
A quarter of users in the dataset used terms coded as derogatory.
The second-most used hashtag in the dataset was #BanIslam.
YouTube was a key dissemination vector for conspiratorial, Islamophobic disinformation on Gab. Twitter was the second-most cited domain in our dataset. While both have policies against hate speech and disinformation, this study highlights that much content at odds with these policies remains on these platforms and also spreads to other parts of the internet.
The fifteenth-most cited domain in the Gab dataset, streetnews.one, aggregated sensationalist news about Muslims, and was active only in the months leading up to the midterm elections. The website was promoted by a bot on Gab, and appears to have been a disinformation-campaign effort tailor-made for election season.
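The domain rankings above come from tallying the links cited in keyword-matched posts. A simple sketch of that tally, on toy data (the URL-extraction details and sample bodies are assumptions, not the study's exact code):

```python
import re
from collections import Counter
from urllib.parse import urlparse

URL_RE = re.compile(r"https?://\S+")

def cited_domains(body):
    """Extract host names from URLs appearing in a post body."""
    domains = []
    for url in URL_RE.findall(body):
        host = urlparse(url).netloc.lower()
        if host.startswith("www."):
            host = host[4:]  # normalize www.example.com -> example.com
        if host:
            domains.append(host)
    return domains

# Toy post bodies standing in for keyword-matched Gab posts
bodies = [
    "see https://www.example.com/a and https://news.example.org/b",
    "more at https://example.com/c",
]
counts = Counter(d for b in bodies for d in cited_domains(b))
top_domains = counts.most_common(2)  # most-cited domains first
```

Ranking `counts.most_common()` over the full dataset is what surfaces the hate-group and hyper-partisan domains reported above.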
From July to October 2018, 27 percent of Gab posts containing keywords related to Islam or Muslims used derogatory and demeaning terms. Even when messages in this set did not contain specific derogatory terms, instead referring simply to “Muslim” or “Islam,” they were often demonizing. For example, a message from one user reads:
Why do the French Canadians Swedes and the Germans continue to accept the muslims when the muslims refuse to accept them? Mosques are hotbeds of caliphate propaganda weapons transfers & human trafficking. The French Canadians Democrats deny all that in spite of the volumes of evidence of jihadi strategy of spreading Islamic mental illness.
A significant number of the posts linked to domains (including infowars.com and barenakedislam.com) that actively spread disinformation, including links to known purveyors of purposely false content dressed up as “news.” The content we examined, and the sites linked to, reveal how anti-Muslim rhetoric and Islamophobia incubates on Gab. Known white supremacists and other extremists use Gab not only to communicate with one another but also to coordinate the spread of hate and political manipulation on mainstream social media sites like YouTube, Facebook, and Twitter. In other words, Gab serves as a launch pad for disinformation and extremist content that then spreads across the wider web.
Fringe platforms like Gab play a key, and often overlooked, role in creating and disseminating disinformation and hateful messaging during major political events. This was certainly the case with regard to anti-Islamic content being spread on (and via) Gab during the 2018 midterms.
As we continue to grapple with the effects of disinformation on democracy, and as the 2020 presidential election approaches, researchers, civil society groups, and others should monitor content on Gab and other fringe sites to help generate early warnings about forthcoming information operations and computational propaganda campaigns that may migrate to Facebook, Twitter, and YouTube. False information and hate speech harm public discourse and, as a result, the ability of democracies to function. It is also important to note that these malicious trends often target minority communities, such as Muslims, that are already more vulnerable. In view of this, we should pay close attention to the ways in which demographic groups—including minority and religious groups—are targeted with disinformation and political trolling.
The informational flows between sites like Gab and YouTube (or a somewhat similar link between 8kun/8chan and Reddit) are of crucial importance to the broader communication of hate and disinformation online. Researchers have only recently begun rigorously examining how disinformative content spreads from one platform to another. Sites like Gab do not stand alone and must not be treated as isolated outlets of hate. In order to fully combat online Islamophobia and the larger problem of computational propaganda, policymakers and technology firms must address not just the ways that disinformation spreads within a given platform, but the dynamics of its spread across social media platforms.
Finally, the findings above were derived from datasets of an “open” social media platform, where posts are public. But there is a huge amount of similar discourse taking place on closed platforms, which are less accessible.
As discourse moves from open platforms to closed ones, and as Facebook considers encrypting all of its messaging services, the question of vetting and tracing disinformation on closed platforms such as WeChat, Telegram, and WhatsApp takes on greater urgency. According to research from Columbia’s Tow Center for Digital Journalism, misinformative and disinformative content involving keywords such as “Muslim,” “Islam,” and “Terrorism” topped communications among Chinese American users of WeChat. In another example, from India’s recent election, an investigative journalist found that almost 24 percent of the content disseminated among India’s Bharatiya Janata Party’s WhatsApp groups was Islamophobic, aiming to amplify hate and division between Hindus and Muslims.