Elon Musk Has Got Content Moderation All Figured Out: Delete The “Wrong” And “Bad” Content, But Leave The Rest (And Reinstate Trump)
from the i-can't-believe-we're-all-doing-this dept
Look, we’ve tried to explain over and over again that Elon Musk doesn’t understand free speech or content moderation. He also seems entirely clueless about the incredible lengths that Twitter has gone to in order to actually protect free speech online (including fighting in court over it) and what it has done to deal with the impossible complexities of running an online platform. Every time he opens his mouth on the subject, he seems to make things worse, or further demonstrate his ridiculous, embarrassing levels of ignorance on the topic — such as endorsing the EU’s approach to platform regulation (something that Twitter has been fighting back against, because of its negative impact on speech).
The latest is that Musk continued his trend of speaking nonsense at a Financial Times conference, where he said that he would reinstate Donald Trump’s account.
“I do think it was not correct to ban Donald Trump, I think that was a mistake, because it alienated a large part of the country, and did not ultimately result in Donald Trump not having a voice,” Mr. Musk said at a Financial Times conference on Tuesday.
If you’re into that sort of punishment, you can watch the whole thing here. I just warn you that it’s an hour and twenty minutes of your life that you will never, ever get back.
Now, there are plenty of principled reasons to argue for why Trump should be reinstated to the platform. And there are plenty of principled reasons to argue for why he should be kept off of it. When the ban first happened, I wrote a long piece analyzing the decision, noting that it’s not, in any way, an easy call, but there are reasons you can argue both sides.
Later in the talk, Musk clarified his point, repeating something he’s said before: he does not like permanent “bans” but does support other forms of moderation, including deleting content or making it “invisible.” And, again, there is an argument for that as well — in fact, Jack Dorsey has said much the same, though in slightly different framing, noting that getting to the point where the company felt Trump needed to be banned represented a failure for Twitter, and reiterating why Twitter should be an implementation of a social media protocol rather than a centralized hub. Similarly, Facebook’s own Oversight Board questioned the permanent nature of the ban on that platform, and Facebook responded by saying that the ban would be reviewed every two years (though, I’m realizing that two years passed earlier this year, and I don’t recall any commentary on that…).
So, again, there is some level of reasoning behind moving away from bans. But Musk’s position, again, appears to be based not on any principled argument or understanding of what actually happened, but on random thoughts firing through his head. He continues to (falsely) claim that Twitter’s moderation is biased in favor of “leftists” (evidence points in the other direction, but details, details…). His complaint that banning Trump “alienated a large part of the country” leaves out the fact that Trump himself alienated a large part of the country, and returning him to Twitter would do the same. But, oddly, Musk doesn’t seem to care about alienating those people.
His other point, that it “did not ultimately result in Donald Trump not having a voice,” is just… weird? No one ever argued that Twitter removed Trump to stop him from “having a voice.” Indeed, part of the argument many of us made was that one reason the removal wasn’t so bad was that he still had the ability to speak out in lots of other places, including (these days) on his own Twitter wannabe. All the removal said was that Twitter did not want him directly using its site to cause more havoc.
Even more ridiculous, though, is that Musk then went on to talk about, hell, let’s call it, his content moderation “philosophy.”
“If there are tweets that are wrong and bad, those should be either deleted or made invisible, and a suspension, a temporary suspension is appropriate but not a permanent ban.”
Wrong and bad, huh. I am reminded of what Facebook’s earliest content moderators said was the initial policy at that company, back when it was much smaller: “does this make us feel icky?” But they learned, almost immediately, that such a setup does not scale, not even slightly.
It’s also just inherently and obviously ridiculous. “Wrong” and “bad” are just fundamentally subjective terms. Again, this is a point that we’ve raised before: lots of social media companies start off with this kind of simplistic view of content moderation. They say they want free speech to be the touchstone, and that they will only have to push back on the most extreme cases. But what they (and Elon) don’t seem to grasp is that there are way more challenging cases than you can predict, and there is no easy standard that you can set up for “wrong” or “bad.”
Then, as you’re (in theory) trying to scale, you realize that you need to set policies with standards for what constitutes “wrong” and “bad.” It can’t be left up to Elon to decide every case. And from there you learn that for every policy you write, you’ll quickly find way more “edge” cases than you can imagine. And, on top of that, you’ll find that if you have ten different people comparing an edge case to the policy, you may get ten different answers about how to apply it.
And, again, this is actually one thing that Twitter has spent years thinking about: how do you operationalize a set of policies and a set of enforcement practices to make them as consistent and as reasonable as possible? You can’t simply look at a piece of content and say “bad stuff goes, good stuff stays,” because that’s just nonsense and not any way to set up an actual policy.
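To make the consistency problem concrete, here’s a deliberately toy sketch (the posts, reviewer names, and keyword rules are all made up for illustration; real reviewers are humans interpreting written policy, not keyword checks, and nothing here reflects Twitter’s actual systems). It shows how three reviewers applying the same vague “wrong and bad” standard can reach different verdicts on the same borderline content, and how you might measure that disagreement.

```python
from itertools import combinations

# Hypothetical borderline posts (illustrative only, not real policy examples).
posts = [
    "This politician is a total disaster and should be gone.",
    "Here's a home remedy that cures everything, trust me.",
    "You people always ruin everything.",
]

# Three hypothetical reviewers, each applying the same vague "wrong and bad"
# standard a little differently. Keyword checks stand in for human judgment.
reviewers = {
    "reviewer_a": lambda text: "remove" if "cures everything" in text else "keep",
    "reviewer_b": lambda text: "remove" if "you people" in text.lower() else "keep",
    "reviewer_c": lambda text: "remove" if "disaster" in text else "keep",
}

# Collect each reviewer's verdicts for every post.
verdicts = {name: [rule(p) for p in posts] for name, rule in reviewers.items()}

# Simple pairwise agreement: on what fraction of posts do two reviewers match?
for (name1, v1), (name2, v2) in combinations(verdicts.items(), 2):
    matches = sum(a == b for a, b in zip(v1, v2))
    print(f"{name1} vs {name2}: agree on {matches}/{len(posts)} posts")
```

Run it and every pair of reviewers agrees on only one of the three posts. The general point: an operation that cares about consistency has to write down standards, measure how often reviewers diverge on edge cases, and revise the policy when agreement drops, which is exactly the work that “delete the wrong and bad stuff” hand-waves away.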
If he wants to bring back Trump, that’s certainly his call. Trump has claimed he wouldn’t come back even if Elon lets him, but then again, he’s technically still suing Twitter to force the company to reinstate him (the judge just dismissed the case, but left it open for Trump to file an amended complaint, so the case is not yet officially closed).
But Musk is being ridiculously unfair to pretend (as a bunch of Trumpist propagandists have for years) that the decision to ban Trump was driven by some “leftist ideology” and an attempt to silence his voice. It was the culmination of a very long series of events and multiple other interventions, including attempts to fact-check his false claims and limit their spread (things you’d think Musk would appreciate), none of which stopped Trump from seeking to use the platform to egg on violence as part of an effort to overturn the results of a free and fair election.
Given how often Musk insists that democratic values are so important (he has said elsewhere that he’d want to follow speech laws, since they represent the will of the people), you’d think he’d recognize that efforts to overturn an election might, well, raise some questions. They did for the people inside Twitter, who thought deeply about the issue and argued back and forth over how to handle it. That discussion and debate was far more serious than Musk acknowledges, and it deserves more credit than he gives it.
At this point, though, it’s clear that Musk’s view of the world is simplistic and child-like. And that seems unlikely to change. Given how we’ve seen this play out on other websites, I don’t imagine it will be good for long-term business, but it’s not my billions on the line.
Filed Under: bans, content moderation, content policy, donald trump, elon musk
Companies: twitter
The Need For A Robust Critical Community In Content Policy
from the it's-coming-one-way-or-the-other dept
Over this series of policy posts, I’m exploring the evolution of internet regulation from my perspective as an advocate for constructive reform. It is my goal in these posts to unpack the movement towards regulatory change and to offer some creative ideas that may help to catalyze further substantive discussion. In that vein, this post focuses on the need for “critical community” in content policy — a loose network of civil society organizations, industry professionals, and policymakers with subject matter expertise and independence to opine on the policies and practices of platforms that serve as intermediaries for user communications and content online. And to feed and vitalize that community, we need better and more consistent transparency into those policies and practices, particularly intentional harm mitigation efforts.
The techlash dynamic is seen in both political parties in the United States, as well as across a broad range of political viewpoints globally. One reason for the robustness of the response is that so much of the internet ecosystem feels like a black box, undermining trust and agency. One of my persistent refrains in the context of artificial intelligence, where the “black box” feeling is particularly strong, is that trust can’t be restored by any law or improved corporate practice operating in isolation. The idea of critical community itself has roots in community psychology and social justice contexts. For example, this talk by Professor Silvia Bettez offers a specific definition of critical community as “interconnected, porously bordered, shifting webs of people who through dialogue, active listening, and critical question posing, assist each other in critically thinking through issues of power, oppression, and privilege.” While the issues in the field of internet policy are different, the themes of power, oppression, and privilege strike me as resonant in the context of social media platform practices.
I wrote an early version of this community-centric theory of change in a piece last year focused specifically on recommendation engines. In that piece, I looked at the world of privacy, where, over the past few decades, a seed of transparency offered voluntarily in the form of privacy policies helped to fuel the growth of a professional community of privacy specialists who are now able to provide meaningful feedback to companies, both positive and critical. We have a rich ecosystem in privacy with institutions ranging from IAPP to the Future of Privacy Forum to EPIC.
The tech industry has a nascent ecosystem built specifically around content moderation practices, which I tend to think of as a (large) subset of content policy: the policies regarding the permissible use of a platform, and the actions taken to enforce those policies against specific users or pieces of content. (The biggest part of content policy not included within my framing of content moderation is the work of recommendation engines to filter information and present users with an intentional experience.) The Santa Clara Principles and extensive academic research have helped to advance norms around moderation. The new Trust & Safety Professionals Association could evolve into an IAPP or FPF equivalent. Content moderation was the second Techdirt Greenhouse topic after privacy, reflecting the diversity of voices in this space. And plenty of interesting work is being done beyond the moderation space as well, such as Mozilla’s “YouTube Regrets” campaign, which illustrates the online harm that arises when recommendation engines steer permissible and legal content to poorly chosen audiences.
As the critical community around content policy grows, regulation races ahead. The Digital Services Act consultation submissions closed this month; here’s my former team’s post about that. The regulatory posture of the European Commission has advanced a great deal over the past couple of years, shifting toward a paradigm of accountability and a focus on processes and procedures. The DSA will prove to be a turning point on a global scale, just as the GDPR was for privacy. Going forward, platforms will expect to be held accountable. Just as it’s increasingly untenable to assume that an internet company can collect data and monetize it at will, so, too, will it be untenable to dismiss harms online through tropes like “more speech is a solution to bad speech.” While the First Amendment court challenges in the U.S. legal context will be serious and difficult to navigate, the normative reality will more and more be set: tech companies must confront and respond to the real harms of hate speech, as Brandi Collins-Dexter’s Greenhouse post so well illustrates.
The DSA has a few years left in its process. The European Commission must adopt a draft law, the Parliament will table hundreds of amendments and put together a final package for a vote, the Council will produce its own version, trialogue will hash out a single document, and then, finally, Parliament will vote again — a vote that might not succeed, restarting some portions of the process. Yet, even at this early stage, it seems virtually certain that the DSA legislative process will produce a strong set of principles-based requirements without specific guidance for implementing practices. To many, such an outcome seems vague and hard to work with. But it’s preferable in many ways to specifying technical or business practices in law, which can easily result in outdated and insufficient guidance to address evolving harm, not to mention restrictions that are easier for large companies to comply with, at least facially, than for smaller firms.
So, there’s a gap here. It’s the same gap seen in the PACT Act. Both as a practical consideration in the context of American constitutional law and given the state of collective understanding of policy best practices, the PACT Act doesn’t specify exactly what practices need to be adopted. Rather, it requires transparency about, and accountability to, the practices a platform asserts for itself. The internet polity needs something broader than just a statute to determine what “good” means in the context of intermediary management of user-generated content.
Ultimately, that gap will be filled by the critical community in content policy, working collectively to develop norms and provide answers to questions that often seem impossible to answer. Trust will be strongest, and the norms and decisions that emerge most robust and sustainable, if that community is diverse, well resourced, and equipped with broad and deep expertise.
The impact of critical community on platform behavior will depend on two factors: first, the receptivity of powerful tech companies to outside pressure, and second, sufficient transparency into platform practices to enable timely and informed substantive criticism. Neither of these should be assumed, particularly with respect to harm occurring outside the United States. Two Techdirt Greenhouse pieces (by Aye Min Thant and Michael Karanicolas) and the recent BuzzFeed Facebook exposé illustrate the limits of both transparency and influence in shaping international platform practices.
I expect legal developments to help strengthen both of these. Transparency is a key component of the developing frameworks for both the DSA and thoughtful Section 230 reform efforts like the PACT Act. While it may seem like low-hanging fruit, the ability of transparency to support critical community is of great long-term strategic importance. And the legal act of empowering a governmental agency to adopt and enforce rules going forward will, hopefully, help create incentives for companies to take outside input very seriously (the popular metaphor here is the “sword of Damocles”).
We built an effective critical community around privacy long ago. We’ve been building it on cybersecurity for 20+ years. We built it in telecom around net neutrality over the past ~15 years. The pieces of a critical community for content policy are there, and what seems most needed right now to complete the puzzle is regulatory ambition driving greater transparency by platforms along with sufficient funding for coordinated, constructive, and sustained engagement.
Filed Under: civil society, content policy, critical community, internet regulation, policy, reform