Facebook’s moderation is of public interest. It should be public knowledge.
When Facebook announced two weeks ago that it would hire an extra 3,000 moderators to review livestreamed video, many wondered what moderating the vast platform might actually look like. This week, The Guardian exclusively obtained documents that allow a more detailed look inside Facebook’s moderation processes than ever before. It is clear from the documents that Facebook, like all digital publishers, struggles with the quality of discourse published through its platform.
The most compelling part of _The Guardian_’s report highlights how tortured the decisions about deleting or leaving a post can be. A section of a slide deck that appeared to be aimed at training moderators in one of Facebook’s moderation hubs in the Philippines described the difference between credible threats of violence, which should be deleted, and non-credible threats, which can be left alone. “Someone shoot Trump” is not okay, while “Kick a person with red hair” can stay. The training materials themselves are unfortunately designed: red crosses denote posts that ought to be deleted, while green check marks highlight acceptable material, so a cheery tick sits beside a call to kick someone. This visual cue makes one instantly doubt Facebook’s judgment about how best to edit and present content.
News organizations that host their own communities or comment boards are as immersed in the problem of achieving the right balance as Facebook. In recent years, many have in fact given up hosting comment boards, or passed the task over to social media companies, principally Facebook.
The Coral Project, a collaborative effort that works with news organizations to improve community interactions such as commenting, studies what makes for better standards. As Project Lead Andrew Losowsky points out, the analogy with Facebook only stretches so far: “Hosting a smaller community on a particular topic in a space you control is a very different case from the world’s largest social network,” he says. “The keys are to make clear what the goal of the community is, what the boundaries are (which can still be tricky to police), what the consequences are for breaking them, and make sure it is enforced, with a clearly defined appeals process. If you run your own community, your obligations are different.”
Facebook does not have the luxury of the “our house, our rules” approach of smaller communities. The sweeping objectives common in Silicon Valley mission statements actively militate against consistent moderation: “This is one rare area where, by claiming to be an open platform for expression, social media has it harder than everyone else,” says Losowsky. The roots of moderation at Facebook lie with the policy and legal departments, and therefore are heavily inflected by the American ideal of free speech as defined by First Amendment law. As a result, Facebook, perhaps even more than news organizations, is wedded to free speech principles.
David Levesley, who worked as a content curator for Facebook and is now the social media editor at i, a British newspaper, does not see Facebook as the principal problem when it comes to graphic and difficult material. Nevertheless, Facebook’s ambition to be “the printing press and the book that’s printed and the library the books are held in and the houses of the people who go to the library,” as Levesley puts it, creates a problem not just of scale but of governance. “I don’t think Facebook can solely arbitrate,” Levesley says. “I don’t want companies developing sets of laws for them and themselves alone.”
Legally, Facebook is not obliged to moderate what appears on its platform at all; in fact, it had no policy or moderation team looking at content until 2009. A recent paper from legal scholar Kate Klonick is perhaps the best guide to how Facebook built its moderation teams and what that process means. In “The New Governors: The People, Rules, and Processes Governing Online Speech,” Klonick describes how, despite technology platforms being shielded from publishing liability by Section 230 of the Communications Decency Act in the US, many of them, including Facebook, built internal moderation systems to protect their businesses. Over time, those internal rules have also flexed in response to external pressures.
“Platforms create rules and systems to curate speech out of a sense of corporate social responsibility, but also, more importantly, because their economic viability depends on meeting users’ speech and community norms,” writes Klonick.
As Klonick notes, Facebook progressed from a company that employed 12 moderators in 2009 to one that outsources thousands of moderation jobs around the world to review over a million pieces of content a day. This reflects the speed of growth in content on the platform, but also the fluid nature of public expectations and social norms.
The desire of technology companies to keep their processes, algorithms, staffing, and policies private is one of the stumbling blocks to understanding how moderation at Facebook actually works. Equally, that lack of transparency has often made Facebook’s public relations problems worse than they need to be.
Margaret Sullivan, media columnist at The Washington Post and former public editor of The New York Times, has more personal experience than most of the trials and benefits of being open about how content is made and disseminated. “A good start would be to admit there’s a problem, to admit that they are a media company, and to stop using the bland language of denial like ‘building better tools to keep our community safe,’” says Sullivan. “Transparency is so often the beginning of the right answer on the internet. But we don’t see much of that with Facebook.”
Klonick’s conclusion is similar: “There is much work to be done in continuing to accurately understanding [sic] the infrastructure and motivations of these platforms, but recognizing the realities and practicalities of the governance system in which they are moderating speech is our best hope for protecting it.”
We are fascinated by _The Guardian_’s disclosures and what they say about Facebook’s moderation policies because, rightly or wrongly, the company is identified as one of the key arbiters of public speech. Academic lawyers such as David Post and Yochai Benkler have written about the idea of competing networks, marketplaces of ideas, and even competing sets of rules that users can pick between. It is certainly the case that free expression on Reddit is markedly different from the experience on Facebook, which in turn is different from Twitter or YouTube. But with two billion users and very few other players of comparable size, the marketplace of rules and ideas envisioned by Post, Benkler, and others appears to be failing.
Facebook knows far more about the development of social norms in speech than any other entity, including governments. Its mechanisms for deciding how to arbitrate standards of speech are of general public interest, and the shaping of those policies might be more effective if done through collective knowledge and debate. Just as Facebook has found it hard to hold the line on the idea that it does not have routine publishing responsibilities, it will find it equally hard to convince the public that its moderation standards and practices are not better off developed in the open.
Emily Bell is a frequent CJR contributor and the director of Columbia’s Tow Center for Digital Journalism. Previously, she oversaw digital publishing at The Guardian.