On a morning in May, 2019, forty-three lawyers, academics, and media experts gathered in the windowless basement of the NoMad New York hotel for a private meeting. The room was laid out a bit like a technologist’s wedding, with a nametag and an iPad at each seat, and large succulents as centerpieces. There were also party favors: Facebook-branded notebooks and pens. The company had convened the group to discuss the Oversight Board, a sort of private Supreme Court that it was creating to help govern speech on its platforms. The participants had all signed nondisclosure agreements. I sneaked in late and settled near the front. “Clap if you can hear me,” the moderator, a woman dressed in a black jumpsuit, said.
Since its founding, in 2004, Facebook has modelled itself as a haven of free expression on the Internet. But in the past few years, as conspiracy theories, hate speech, and disinformation have spread on the platform, critics have come to worry that the company poses a danger to democracy. Facebook promised to change that with the Oversight Board: it would assemble a council of sage advisers—the group eventually included humanitarian activists, a former Prime Minister, and a Nobel laureate—who would hear appeals over what kind of speech should be allowed on the site. Its decisions would be binding, overruling even those of Mark Zuckerberg, the company’s founder. Zuckerberg said he had come to believe that a C.E.O. shouldn’t have complete control over the limits of our political discourse. “Maybe there are some calls that just aren’t good for the company to make by itself,” he told me.
In 2019, Facebook agreed to let me report on the process, and I spent eighteen months following its development. Last month, the board ruled on its first slate of cases, which dealt with, among other topics, the glorification of Nazis and misinformation about the coronavirus pandemic. In the next few months, it will decide an even larger question: whether Donald Trump should be cut off indefinitely from his millions of followers for his role in inciting the insurrection at the Capitol, on January 6th. Nathaniel Persily, a law professor at Stanford, told me, “How the board considers the issues and acts in that case will have dramatic implications for the future of the board, and perhaps for online speech in general.”
In the beginning, Facebook had no idea how the board would work. To come up with ideas, the company held workshops with experts in Singapore, New Delhi, Nairobi, Mexico City, Berlin, and New York. “My job was to go all over the world and get as much feedback as possible,” Zoe Darmé, who oversaw the consultation process, told me. At the workshop in New York, in the hotel basement, participants sat at tables of eight or nine and ran simulations of cases. I sat between Jeff Jarvis, a journalism professor, and Ben Ginsberg, a Republican lawyer who represented George W. Bush in Bush v. Gore.
For our first case, the moderator projected a picture of a smiling girl in a yearbook photo, with a cartoon thought bubble that read “Kill All Men.” Facebook had removed the post for violating its hate-speech rules, which ban attacks based on “sex, gender identity.” To many, this seemed simplistic. “It’s a joke,” one woman said. “There has to be an exception for humor.” Facebook’s rules did include a humor exception, for instances in which the user’s intent was clear, but it was difficult to discern this person’s motivation, and attendees worried that a broad carve-out for jokes could easily provide cover for hate speech. Carmen Scurato, who works at Free Press, an Internet-advocacy organization, pointed out the historical disadvantage of women, and argued that hate-speech policies ought to take power dynamics into account. In the end, the group voted to restore the photo, though no one knew exactly how to write that into a rule.
This kind of muddy uncertainty seemed inevitable. The board has jurisdiction over every Facebook user in the world, but intuitions about freedom of speech vary dramatically across political and cultural divides. In Hong Kong, where the pro-democracy movement has used social media to organize protests, activists rely on Facebook’s free-expression principles for protection against the state. In Myanmar, where hate speech has contributed to a genocide against the Rohingya, advocates have begged for stricter enforcement. Facebook had hoped, through the workshops, to crowdsource beliefs about speech, but the results were more contradictory than anticipated. In New York, for example, sixty per cent of people voted to reinstate the “Kill All Men” post, but only forty per cent did so in Nairobi. Darmé offered one theory: “Where countries are perhaps more concerned about safety, because they live in an area with less rule of law—and therefore there’s a chance of a group actually maybe trying to kill all men—there’s less concern about free speech.” The full explanation is likely more complex; regardless, the divergent results underscored the difficulty of creating a global court for the Internet.
Some of the workshops devolved into disarray. In Singapore, Nairobi, and New Delhi, a few participants refused to sign the nondisclosure agreements, protesting Facebook’s lack of transparency; in Germany, someone commandeered the microphone and berated the company for killing democracy. “We had to learn to put on our body armor,” Darmé said. In New York, the session remained civil, but just barely. Some participants thought that the board would be ineffectual. “The whole thing seemed destined for failure,” Sarah T. Roberts, a professor of information studies at U.C.L.A., told me. “Skeptics will think it’s captured by the corporate interest of Facebook. Others will think it doesn’t do enough, it’s a pseudo-institution.” Some predicted the board would come to have grand ambitions. Tim Wu, a law professor at Columbia, said, “If the board is anything like the people invited to New York, I wouldn’t be surprised if it got out of control and became its own little beast that tried to change the world one Facebook decision at a time.”
Participants had been instructed to use an app called Slido to submit questions for group discussion, which could be voted up or down on the agenda. The results were projected on a screen at the front of the room. The app had worked well abroad, but in New York it became a meta-commentary on content moderation. Sophisticated questions about “automatic tools for takedowns” and the “equity principle with diverse communities” were soon overtaken by a joke about “Game of Thrones.” Posts were initially anonymous, but users quickly found a way around the system; “Harold” wrote, “I figured out how to identify self,” which provoked laughter. The moderator shouted to regain control of the room. In the midst of the chaos, someone posted, “Can we abandon Slido and talk?,” which quickly accumulated likes.
The idea that Facebook, like a fledgling republic, would need to institute democratic reforms might have seemed silly a decade ago. In 2009, shortly after the company was criticized for quietly changing its terms of service to allow it to keep users’ data even after they deleted their accounts, it released a video of Zuckerberg, clad in an uncharacteristic button-up shirt and a tie, announcing a “new approach to site governance.” People would be able to vote on Facebook’s policies; the company called it “a bold step toward transparency.” In the first referendum, on whether to change the terms of service, only 0.32 per cent of users voted. “In its own eyes, Facebook has become more than merely a recreational website where users share photos and wish each other a happy birthday,” a columnist for the Los Angeles Times wrote. “It is now a global body of citizens that should be united and protected under a popularly ratified constitution. But it’s hard to have a democracy, a constitution or a government if nobody shows up.” In 2012, the project was quietly shuttered, and, as with Crystal Pepsi, Google Wave, and the Microsoft Zune, no one remembers that it existed.
This was still a hazily optimistic time for Facebook. The company promised to “give people the power to share and make the world more open and connected.” As more users joined tech platforms, companies instituted rules to sanitize content and keep the experience pleasant. Airbnb removed housing ads that displayed Nazi flags; Kickstarter disallowed crowdfunding for “energy food and drinks”; Etsy told users to be “helpful, constructive, and encouraging” when expressing criticism. Facebook hired content moderators to filter out pornography and terrorist propaganda, among other things. But, because it saw itself as a “neutral platform,” it tended not to censor political speech. The dangers of this approach soon became apparent. Facebook now has some three billion users—more than a third of humanity—many of whom get their news from the site. In 2016, Russian agents used the platform in an attempt to sway the U.S. Presidential election. Three years later, a white supremacist in New Zealand live-streamed a mass shooting. Millions of people joined groups and followed pages related to QAnon, a conspiracy theory holding that the world is controlled by a cabal of Satan-worshipping, pedophilic Democrats. The First Amendment has made it difficult for the U.S. government to stop toxic ideas from spreading online. Germany passed a law attempting to curb the dissemination of hate speech, but it is enforceable only within the country’s borders. As a result, Facebook has been left to make difficult decisions about speech largely on its own.
Over time, the company has developed a set of rules and practices in the ad-hoc manner of common law, and scholars have long argued that the system needed more transparency, accountability, and due process. The idea for the Oversight Board came from Noah Feldman, a fifty-year-old professor at Harvard Law School, who has written a biography of James Madison and helped draft the interim Iraqi constitution. In 2018, Feldman was staying with his college friend Sheryl Sandberg, the chief operating officer of Facebook, at her home in Menlo Park, California. One day, Feldman was riding a bike in the neighboring hills when, he said, “it suddenly hit me: Facebook needs a Supreme Court.” He raced home and wrote up the idea, arguing that social-media companies should create “quasi-legal systems” to weigh difficult questions around freedom of speech. “They could cite judicial opinions from different countries,” he wrote. “It’s easy to imagine that if they do their job right, real courts would eventually cite Facebook and Google opinions in return.” Such a corporate tribunal had no modern equivalent, but Feldman noted that people need not worry: “It’s worth recalling that national legal systems themselves evolved from more private courts administered by notables or religious authorities.” He gave the memo to Sandberg, who showed it to Zuckerberg. For a few years, Zuckerberg had been thinking about establishing a “legislative model” of content moderation in which users might elect representatives to Facebook, like members of Congress. A court seemed like a better first step.
In November, 2018, Feldman gave a short presentation to Facebook’s corporate board, at Zuckerberg’s invitation. “I didn’t feel like I was convincing my audience,” he told me. Feldman recalled that some members felt such a body wouldn’t sufficiently improve the company’s legitimacy; others worried that it could make decisions that would contradict Facebook’s business interests. A few minutes in, Zuckerberg defended the proposal. He noted that a huge proportion of his time was devoted to deliberating on whether individual, high-profile posts should be taken down; wouldn’t experts be better at making those decisions? The idea remained controversial, but Facebook’s corporate structure allows Zuckerberg to make unilateral decisions. Soon after, he ordered the project to begin. “I was kind of stunned,” Feldman told me. “Like, holy shit, this is actually going to happen.”
One day in June, 2019, an Uber dropped me off at Facebook’s campus, in a parking lot full of Teslas. For the past couple of years, while working as a law professor, I had been researching how tech companies govern speech. That morning, I headed into MPK 21, a five-hundred-thousand-square-foot building designed by Frank Gehry, with a rooftop garden inhabited by wild foxes. (Signs discourage interaction with them.) Walls are plastered with giant posters bearing motivational phrases like “Nothing at Facebook Is Somebody Else’s Problem” and “The Best Way to Complain Is to Make Things.” When you visit, you register at a touch-screen kiosk and sign a nondisclosure agreement pledging that you won’t divulge anything you see. The company knew that I was coming as a reporter, but the woman at the desk didn’t know how to print a pass without a signed agreement; eventually, another employee handed me a lanyard marked “N.D.A.” and said, “We’ll just know that you’re not under one.”
I began by shadowing Facebook’s Governance and Strategic Initiatives Team, which was tasked with creating the board. The core group was made up of a dozen employees, mostly in their thirties, who had come from the United Nations, the Obama White House, and the Justice Department, among other places. It was led by Brent Harris, a former consultant to nonprofits who frequently arrived at our meetings eating a granola bar. The employees spent much of their time drafting the board’s charter, which some called its “constitution,” and its bylaws, which some called its “rules of the court.” During one meeting, they used pens, topped with a feather, to evoke the quills used by the Founding Fathers.
The group was young and highly qualified, but it was surrounded by tech executives who sometimes became actively involved. Early drafts of the charter included a lot of dry, careful legal language, but in later versions some of it had been stripped out. “Feedback is coming from people high in the company, who are not lawyers,” Harris told me, during one meeting. I noted that someone had changed all references to “users” in the charter to “people,” which seemed to imply that the board governed not only Facebook’s customers but everyone in the world. Harris exchanged glances with another employee. “Feedback is coming from people very high in the company,” he said. I later learned from the team that Zuckerberg had been editing the charter to make it “more approachable.”
Employees on the governance team sometimes referred to themselves as “true believers” in the board. Kristen Murdock, who was an intelligence officer in the Navy before coming to Facebook, told me, “This is going to change the face of social justice on the Internet.” But some executives did not hold it in the same regard. Elliot Schrage, then the head of global policy and communications, told people involved that he was skeptical of the project and did not think it could be improved. (Schrage claimed, through a spokesperson, that he was “fully supportive of efforts to improve governance” but that he “did have concerns about how to build a credible program.”) Nick Clegg, a former Deputy Prime Minister of the U.K. who was supervising the governance team, told me, in 2019, that he was reluctant to let the board weigh in on sensitive topics, at least early on. “I would love to think that we’d have a relatively uncontroversial period of time,” he said. At one point, a director of policy joked about ways to make the board seem independent, asking, “How many decisions do we have to let the Oversight Board win to make it legit?”
In time, the workings of the court came together. The board originally included twenty members, who were paid six-figure salaries for putting in about fifteen hours a week; it is managed by an independent trust, which Facebook gave a hundred and thirty million dollars. (“That’s real money,” a tech reporter texted me. “Is this thing actually for real?”) According to Facebook, as many as two hundred thousand posts become eligible for appeal every day. “We are preparing for a fire hose,” Milancy Harris, who came to the governance team from the National Counterterrorism Center, said. The board chooses the most “representative” cases and hears each in a panel of five members, who remain anonymous to the public. Unlike in the Supreme Court, there are no oral arguments. The user submits a written brief arguing her case; a representative for the company—“Facebook’s solicitor general,” one employee joked—files a brief explaining the company’s rationale. The panel’s decision, if ratified by the rest of the members, is binding for Facebook.
The “most controversial issue by far,” Darmé told me, was how powerful the board should be. “People outside the company wanted the board to have as much authority as possible, to tie Facebook’s hands,” she said. Some wanted it to write all of the company’s policies. (“We actually tested that in simulation,” Darmé said. “People never actually wrote a policy.”) On the other hand, many employees wondered whether the board would make a decision that killed Facebook. I sometimes heard them ask one another, in nervous tones, “What if they get rid of the newsfeed?”
As a result, the board’s powers were limited. Currently, users can appeal cases in which Facebook has removed a post, called “take-downs,” but not those in which it has left one up, or “keep-ups.” The problem is that many of Facebook’s most pressing issues—conspiracy theories, disinformation, hate speech—involve keep-ups. As it stands, the board could become a forum for trolls and extremists who are angry about being censored. But if a user believes that the company should crack down on certain kinds of speech, she has no recourse. “This is a big change from what you promised,” Evelyn Douek, a Harvard graduate student who consulted with the team, fumed, during one meeting. “This is the opposite of what was promised.” Users also currently can’t appeal cases on such issues as political advertising, the company’s algorithms, or the deplatforming of users or group pages. The board can take cases on these matters, including keep-ups, only if they are referred by Facebook, a system that, Douek told me, “stacks the deck” in Facebook’s favor. (Facebook claims that it will be ready to allow user appeals of keep-ups by mid-2021, and hopes eventually to allow appeals on profiles, groups, and advertising as well.)
Perhaps most important, the board’s rulings do not become Facebook policy in the way that a Supreme Court precedent becomes the law of the land. If the board decides that the company should remove a piece of content, Facebook is obligated to take down only that post; similar posts are taken down at Facebook’s discretion. (The company states that it will remove “identical posts with parallel context” based on its “technical and operational capacity.”) Policy recommendations are only advisory. This significantly narrows the board’s influence. Some hope that the recommendations will at least exert public pressure on the company. “Facebook undermines its goals and its own experiment if it restricts the impact of the board’s decisions or just ignores them,” Douek told me. Others felt let down. “It’s not what people told us they wanted,” Darmé said. “They wanted the board to have real power over the company.”
In August, 2019, the governance team met with advisers, over snacks and seltzer, and discussed who should sit on the board. A security guard stood outside, making sure that no one explored the offices unattended. (He stopped me on my way out and told me that I couldn’t leave without an escort.) The people selected for the board would determine its legitimacy, and how it ruled, but the experts had trouble agreeing on who could be trusted with this responsibility. One attendee suggested letting the first board members choose the rest, to preserve their independence from the company. Lauren Rivera, a professor at Northwestern’s business school, cautioned against this approach: “It’s empirically proven that when you have a group self-select, in the absence of any kind of guidance, they just pick more people that look like them.” The experts then began giving their own ideas. Journalists said that the board should be mostly journalists. International human-rights lawyers said that it should be all international human-rights lawyers. Information scientists said that it should be “anyone but lawyers.” A white man at a think tank said that it should be populated with “regular people.”
Ultimately, to select its would-be judges, Facebook opened a public portal, which received thousands of nominations. It also solicited suggestions of candidates from political groups and civil-rights organizations, and used its initial workshops to scout potential candidates and observe their behavior. “The thing about the global consultancy process is that it was also maybe Facebook’s first true global recruiting process,” Brent Harris told me later. Jarvis, the journalism professor at the New York workshop, said, “It’s so Facebook of them.” He added, “They never called me. I wonder what I said.”
The number of people that Facebook planned to have on the board kept changing. I imagined the team sweating over a “Law & Order”-style corkboard of photographs. At one point, Kara Swisher, a tech journalist who has been critical of Facebook, nominated herself. “I would like to formally apply to be judge and jury over Mark Zuckerberg,” she wrote in the Times. Facebook didn’t take her up on it. A reporter sent me an encrypted text saying he had two sources telling him that Barack Obama would be on the board. When I asked Fariba Yassaee, who oversaw the search for members, about high-profile candidates, she smiled. “The people we’re looking at are incredibly impressive, but they also are able to do the hard work that being on the board will entail,” she said. “They need to be team players.” In May, the first board members were announced. They included Helle Thorning-Schmidt, a former Prime Minister of Denmark; Catalina Botero Marino, a former special rapporteur for freedom of expression to the Inter-American Commission on Human Rights; Alan Rusbridger, the former editor of the Guardian; and Tawakkol Karman, an activist who won the Nobel Peace Prize, in 2011, for her role in Yemen’s Arab Spring protests.
The slate was immediately controversial. Some employees were angry about the appointment of Michael McConnell, a retired federal judge appointed by George W. Bush. In 2000, McConnell argued before the Supreme Court that the Boy Scouts should be allowed to exclude gay people. (This year, during a Zoom class at Stanford Law School, he recited a quote that included the N-word. He defended this as a “pedagogical choice,” but pledged not to use the word again.) “We all knew what people outside and inside the company were expecting: board members who respect all people and all cultures, including respect for L.G.B.T.Q. rights,” Darmé, who had since left Facebook, told me. “Can you really have someone on the board who’s argued something like this all the way to the highest court in the land?” Others believed that, considering that half of the country is Republican, disregarding such views would be undemocratic. “It is not a thing you can really say right now, but the vast majority of the world is much more ideologically conservative than Menlo Park,” Harris said. “How do you reflect that on the board? Or do you decide, No, we’re just not going to have that?”
People familiar with the process told me that some Republicans were upset about what they perceived to be the board’s liberal slant. In the months leading up to the appointments, conservative groups pushed the company to make the board more sympathetic to Trump. They suggested their own lists of candidates, which sometimes included members of the President’s family, most notably Ivanka and his sons. “The idea was, either fill this board with Trump-supporting conservatives or kill it,” one person familiar with the process said. In early May, shortly after the board members were announced, Trump personally called Zuckerberg to say that he was unhappy with the makeup of the board. He was especially angry about the selection of Pamela Karlan, a Stanford Law professor who had testified against him during his first impeachment. “He used Pam as an example of how the board was this deeply offensive thing to him,” the person familiar with the process said. Zuckerberg listened, and then told Trump that the members had been chosen based on their qualifications. Despite the pressure from Trump, Facebook did not change the composition of the board. (Trump declined to comment.)
Several candidates declined to be considered. Jameel Jaffer, the director of the Knight First Amendment Institute, told me, “I was worried, and still am, that Facebook will use membership on the board as a way of co-opting advocates and academics who would otherwise be more critical of the company.” But others saw it as a way to push Facebook in the right direction. Julie Owono, a board member and the head of Internet Sans Frontières, told me, “I had expressed interest in joining the board because I feel, and still do, that Facebook is doing a terrible job on hate speech in environments that are already very tense.” Thorning-Schmidt, the former Prime Minister of Denmark, told me, “I needed to know this would be independent from Facebook and that Facebook would commit to following our decisions.” She met with Zuckerberg and asked that he give his word: “I had to hear it from Mark Zuckerberg myself. And he said yes.”
Critics of the board believe that it will prove to be little more than a distraction. “I think it’s a gigantic waste of time and money,” Julie Cohen, a law professor at Georgetown, said. She believes that its star-studded panel and lavish funding will prevent regulation while allowing the company to outsource controversial decisions. And, since it can currently rule only on individual posts, the board can’t address Facebook’s most fundamental problems. In mid-May, for example, a video called “Plandemic,” which claimed that vaccine companies had created COVID-19 in order to profit from the pandemic, went viral on the platform. It was taken down within a few days, but by that time it had already been seen by 1.8 million people. Ellen P. Goodman, a law professor at Rutgers, believes that Facebook needs to add more friction to the circulation of content; anything catching fire, she said, should be subject to a “virality disruptor” that stops further spread until the content has been reviewed. Zephyr Teachout, a law professor at Fordham, says that the company should do away with targeted advertising, which incentivizes the promotion of incendiary, attention-grabbing posts. “If the core of our communications infrastructure is driven by targeted ads, we will have a toxic, conflict-driven communications sphere,” she said. She also argues that the company is too big and needs to be broken up through antitrust litigation.
This summer, I spoke with Zuckerberg over Zoom. He wore a Patagonia fleece and sat in a wood-panelled room in front of a large marble fireplace. He had been heavily involved in the board’s creation: editing documents, reading memos, reviewing possible members. “I don’t see any path for the company ever getting out of the business of having to make these judgments,” he told me. “But I do think that we can have additional oversight and additional institutions involved.” He hoped, he said, that the board would “hold us accountable for making sure that we actually get the decisions right and have a mechanism for overturning them when we don’t.”
He looked tired. He seemed more at ease talking about “product” or “building tools” than he did discussing ethics or politics. It struck me that he was essentially a coder who had found himself managing the world’s marketplace of ideas. “The core job of what we do is building products that help people connect and communicate,” he said. “It’s actually quite different from the work of governing a community.” He hoped to separate these jobs: there would be groups of people who built apps and products, and others—including Facebook’s policy team and now the board—who deliberated the thorny questions that came along with them. I brought up a speech he gave at Georgetown, in 2019, in which he noted that the board was personally important to him, because it helped him feel that, when he eventually left, he would be leaving the company in safe hands. “One day, I’m not going to be running the company,” he told me. “I would like to not be in the position, long term, of choosing between someone who either is more aligned with my moral view and values, or actually is more aligned with being able to build high-quality products.”
I asked what kinds of cases he hopes the board will take. “If I was them, I’d be wary of choosing something that was so charged right off the bat that it was immediately going to polarize the whole board, and people’s perception of the board, and society,” he told me. He knew that critics wished the board had more power: “This is certainly a big experiment. It’s certainly not as broad as everyone would like it to be, upfront, but I think there’s a path for getting there.” But he rejected the notion that it was a fig leaf. “I’m not setting this up to take pressure off me or the company in the near term,” he said. “The reason that I’m doing this is that I think, over the long term, if we build up a structure that people can trust, then that can help create legitimacy and create real oversight. But I think there is a real risk, if it gets too polarized too quickly, that it will never be able to blossom into that.”
In April, 2020, the board members met for the first time, over Zoom. Facebook employees cried and took a screenshot. “It was such a profound experience to see this thing take on a life of its own,” Heather Moore, who came to Facebook from a U.S. Attorney’s office, said. After that, board members attended training sessions, which included icebreakers and trust exercises; in one, they brought pictures that represented pivotal moments in their lives. “Whether we can get along well enough to disagree and stay on mission is crucial and quite unknown,” John Samples, a member who works at the Cato Institute, a libertarian think tank, told me. The group quickly came under intense public pressure to stand up to the company. In June, a nonprofit called Accountable Tech began targeting board members on Facebook with ads that included their photos and addressed them by name: “Pam Karlan: speak up or step down”; “Tell Michael McConnell: don’t be complicit.” Members often felt the need to assert their independence. The company assigned readings, some of which were, according to a board member, “just P.R. crap from Facebook,” and employees sat in on early meetings and mock deliberations. “We’re out of our mind if we’re in an oversight position and the people who are teaching us about what we’re overseeing are the people we’re meant to oversee,” the board member said. After complaints, Facebook employees stopped being invited to the meetings.
In October, Facebook began allowing appeals from a random five per cent of users, and the board’s jurisdiction was rolled out over the next month, the way a new Instagram feature might be. Its docket included a post from an American user about Joseph Goebbels, the Nazi minister of propaganda, and one from a user in Myanmar claiming that there is “something wrong with Muslims psychologically.” Owono told me, “I never imagined I’d have to ask myself these kinds of hard questions so rapidly.” The members reviewed the company’s Community Standards, a ten-thousand-word document that codifies Facebook’s speech policies, and consulted precedents in international human-rights law. One debate that has arisen among board members mirrors the division on the Supreme Court between “textualist” and “living” interpretations of the Constitution. Some believe that their job is to hew closely to Facebook’s policies. “Our job is to ask, ‘What does the text mean?’ ” one member told me. “We don’t have much legitimacy if we just start making stuff up.” Others believe that they should use their power to push back against Facebook’s policies when they are harmful. Nicolas Suzor, a law professor from Australia, and a board member, told me, “I was worried we’d end up with decisions that were limited to the facts, but people are brave.”
In one of the board’s first cases, a user had posted photos and described them as showing churches in Baku, Azerbaijan, that had been razed as part of the ongoing persecution of Armenians in the region. He complained about “Azerbaijani aggression” and “vandalism,” and referred to Azerbaijanis using the word “taziki,” which literally means “washbowls” but is a play on a Russian slur. Facebook had taken down the post as hate speech, but some board members felt that it was strange to apply this rule to a complaint against a dominant group. The panel asked for a report from UNESCO, received a comment from the U.N. special rapporteur on minority issues, and another from a think tank in Ukraine, which told them that persecuted groups often used offensive language in their struggle for equality. “We learned that, during a conflict, it’s usually accepted that people would use harsh words, so there’s this idea that, especially when minority rights are at risk, there’s a custom to allow more harsh discourse,” a board member told me. “I’d never heard of that before, and I found it compelling.” In the end, they voted to uphold the removal, though not everyone agreed. The opinion suggested that a minority of the members “believed that Facebook’s action did not meet international standards and was not proportionate,” and that the company “should have considered other enforcement measures besides removal.”
In another case, someone in France had posted a video and accompanying text complaining that the government had refused to authorize a combination of azithromycin and hydroxychloroquine, an anti-malarial drug, as a treatment for COVID-19. Trump, the French microbiologist Didier Raoult, and many on the right have claimed that hydroxychloroquine cures the illness, though the claim has been debunked, and scientists have warned that the medication can cause dangerous side effects. The user claimed that “Raoult’s cure” was being used elsewhere to save lives and posted the video in a public group with five hundred thousand members. Facebook worried that it might cause people to self-medicate, and removed it. According to one person on the board, members of the panel “who have lived in places that have had a lot of disinformation in terms of COVID-19” agreed with this decision, believing that, “in the midst of this huge pandemic affecting the entire world population, decisive measures may be adopted.” But others noted that the post was pressing for a policy change, and worried about censoring political discussions. “No matter how controversial it would seem to us, those questions and challenges are what helps scientific knowledge advance,” the board member said. They found that Facebook’s standard for censoring such speech, interpreted under international human-rights law, involved determining whether the speech was likely to incite direct harm. Because the combination of medicines was not available over the counter in France, they decided that the risk of causing people to self-administer was low. They voted to restore the post but encouraged the company to append a link to more reliable scientific information.
When the board was just three weeks old, Black Lives Matter protests were sweeping the country, and Trump posted on both Facebook and Twitter threatening to send in the military to subdue them, writing, “When the looting starts, the shooting starts.” His language echoed that of the segregationist George Wallace, who threatened civil-rights protesters in similar terms. Twitter flagged the tweet as violating its rules against “glorifying violence,” but Facebook left it unmarked. Zuckerberg released a statement saying, “I disagree strongly with how the President spoke about this, but I believe people should be able to see this for themselves.” In an interview on Fox News, he noted that he didn’t think the company should be the “arbiter of truth” on political issues. Angry employees staged a virtual walkout and raised the idea, in a leaked Q. & A., of letting the board hear the case. A few days after the incident, Suzor, the Australian law professor, suggested a full-board meeting. Users couldn’t appeal Facebook’s decision to the board—it hadn’t yet started taking cases, and, at first, it could review only posts that Facebook had removed, not ones it had left up—but the board debated the issue nonetheless.
Several members were shocked by Trump’s threats and initially wanted to meet with Zuckerberg or release a statement condemning the platform’s decision. “I was furious about Zuck’s ‘arbiter of truth’ double-down,” one board member told me. Others felt that taking a partisan stand would alienate half the country and cost the board its legitimacy. “Seventy-five million people voted for Trump,” Samples said. “What are you going to do about it?” The group discussed whether it should weigh in on matters outside its remit that are nevertheless of public importance. Jamal Greene, one of the co-chairs, told me, “The general sentiment was ‘no’ for right now, and maybe ‘no’ ever, but certainly not before we’re even doing the thing that we’re supposed to be doing.” After two hours of discussion, the members decided to stay mum. “Moralistic ranting is not going to make a difference,” Samples said. “Building up an institution that can slowly answer the hard questions? That might.”
They didn’t have much time for institution-building. On January 6th, a group of Trump supporters who disputed the results of the Presidential election stormed the Capitol, taking selfies, making threats, and attempting to disrupt the peaceful transition of power. Trump had urged on the mob by repeatedly claiming, on Facebook and elsewhere, that the election had been stolen from him. Hundreds of thousands of people had used the site to spread the claim, and to organize the rally at the Capitol. Afterward, Trump released a video tepidly disavowing violence and reiterating his claims of a fraudulent election. He tweeted, “These are the things and events that happen when a sacred landslide election victory is so unceremoniously & viciously stripped away from great patriots who have been badly & unfairly treated for so long.” Facebook removed two of Trump’s posts. The next morning, in a statement posted to his own Facebook page, Zuckerberg announced an indefinite suspension of Trump’s account. “In this moment, the risk to our democracy was too big,” Sheryl Sandberg said, in an interview. “We felt we had to take the unprecedented step of what is an indefinite ban, and I’m glad we did.” The next day, Twitter permanently banned him.
Many felt that the decision was an important step. “The platforms failed in regulating the accounts, of course, since he was inciting violence, and they banned him for that only after egregious violence resulted,” Susan Benesch, the founding director of the Dangerous Speech Project, told me. “But banning him did lower his megaphone. It disrupted his ties with his large audience.” Others expressed concern that Facebook had wielded its power to silence a democratically elected leader. “The fifth most valuable corporation in the U.S., worth over seven hundred billion dollars, a near monopoly in its market niche, has restricted a political figure’s speech to his thirty million followers,” Eugene Volokh, a law professor at U.C.L.A., said. “Maybe that’s just fine. Maybe it’s even a public service. But it’s a remarkable power for any entity, public or private, to have.” Angela Merkel, the Chancellor of Germany, described Trump’s removal from Twitter as “problematic,” and Alexey Navalny, the Russian opposition leader, tweeted, “I think that the ban of Donald Trump on Twitter is an unacceptable act of censorship.”
In an interview, Sandberg noted that Trump could appeal the removal of his posts. But only Facebook had the power to refer his suspension to the board. In conversations with Facebook’s leadership, members of the governance team and the board’s trustees argued that failing to bring the case before the board would undermine its legitimacy. Brent Harris, who led Facebook’s governance team, likened the board to Tinker Bell: “At the end of the day you can build all the things, but you just have to have enough people that believe in order to make it real.” Members seemed eager to take it on. One texted me, “If not us, who? And if not now, when?” Even Nick Clegg, Facebook’s vice-president of global affairs, who had initially favored a slower rollout of the board, wanted it to hear the case. “As far as I’m concerned, this was a no-brainer,” Clegg told me. “Why wouldn’t you send it to the Oversight Board? If you didn’t, you’d be hobbling it right from the beginning.” The day after Joe Biden’s Inauguration, Facebook sent the case, asking the board to rule on whether Trump should remain indefinitely banned from the platform. When we spoke, Clegg added that if the board could “answer also about political leaders in analogous situations, we’d be keen to hear.”
Board members found out that they were getting the case only a half hour before the public did. Members eagerly watched the board’s internal Web site to see if they had been selected for the panel. They will now have two more months to deliberate. Civil-society groups like the Center for Democracy & Technology and R Street, a conservative think tank, are submitting comments on the case—the equivalent of filing amicus briefs—arguing for or against Trump’s reinstatement. Trump himself has the opportunity to submit a brief arguing why he should be reinstated. “The board’s Trump decision may affect the liberties and, yes, lives of hundreds of millions,” Samples told me. “Few U.S. Supreme Court cases ever hold such potential for good or for ill.” Ronaldo Lemos, a board member and law professor in Rio de Janeiro, told me that he believes the board has a lot of work ahead.
“The Oversight Board is not going away anytime soon,” he said. “We’re not going anywhere.”