Substack Realizes Maybe It Doesn’t Want To Help Literal Nazis Make Money After All (But Only Literal Nazis)
from the you-don't-have-to-hand-it-to-the-nazis dept
Last year, soon after Elon completed his purchase of (then) Twitter, I wrote up a 20 level “speed run” of the content moderation learning curve. It seems like maybe some of the folks at Substack should be reading it these days?
As you’ll recall, last April, Substack CEO Chris Best basically made it clear that his site would not moderate Nazis. As I noted at the time, any site (in the US) is free to make that decision, but those making it shouldn’t pretend that it’s based on any principles, because the end result is likely to be a site full of Nazis, and that tends not to be good for business: other people you might want to do business with may not want to be on a site that welcomes Nazis.
Thus, it should not have been shocking when, by the end of the year, Substack had a site with a bunch of literal Nazis. And, no, we’re not just talking about people with strong political viewpoints that lead people who oppose them to call them Nazis. We’re talking about people who are literally embracing Nazism and Nazi symbols.
And Substack was helping them make money.
Even worse, Substack co-founder Hamish McKenzie put out a ridiculous self-serving statement pretending that the decision to help monetize Nazis was about civil liberties, even as the site regularly deplatformed anything about sex. At that point, you’re admitting that you moderate, and then it’s just a question of which values you moderate for. McKenzie was claiming, directly, that they were cool with Nazis, but sex was bad.
The point of the content moderation learning curve is not to say that there’s a right way or a wrong way to handle moderation. It’s just noting that if you run a platform that allows users to speak, you have to make certain calls on what speech you’re going to allow and what you’re not going to allow — and you should understand that some of those choices have consequences.
In the case of Substack, some of those consequences were that some large Substack sites decided to jump ship. Rusty Foster’s always excellent “Today in Tabs” switched over to Beehiiv. And then, last week, Platformer News, Casey Newton’s widely respected newsletter with over 170,000 subscribers, announced that if Substack refused to remove the Nazi sites, it would leave.
Content moderation often involves difficult trade-offs, but this is not one of those cases. Rolling out a welcome mat for Nazis is, to put it mildly, inconsistent with our values here at Platformer. We have shared this in private discussions with Substack and are scheduled to meet with the company later this week to advocate for change.
Meanwhile, we’re now building a database of extremist Substacks. Katz kindly agreed to share with us a full list of the extremist publications he reviewed prior to publishing his article, most of which were not named in the piece. We’re currently reviewing them to get a sense of how many accounts are active, monetized, display Nazi imagery, or use genocidal rhetoric.
We plan to share our findings both with Substack and, if necessary, its payments processor, Stripe. Stripe’s terms prohibit its service from being used by “any business or organization that a. engages in, encourages, promotes or celebrates unlawful violence or physical harm to persons or property, or b. engages in, encourages, promotes or celebrates unlawful violence toward any group based on race, religion, disability, gender, sexual orientation, national origin, or any other immutable characteristic.”
It is our hope that Substack will reverse course and remove all pro-Nazi material under its existing anti-hate policies. If it chooses not to, we will plan to leave the platform.
As a result of those meetings, Substack has now admitted that some of the outright Nazis actually do violate “existing” rules, and will be removed.
Substack is removing some publications that express support for Nazis, the company said today. The company said this did not represent a reversal of its previous stance, but rather the result of reconsidering how it interprets its existing policies.
As part of the move, the company is also terminating the accounts of several publications that endorse Nazi ideology and that Platformer flagged to the company for review last week.
The company will not change the text of its content policy, it says, and its new policy interpretation will not include proactively removing content related to neo-Nazis and far-right extremism. But Substack will continue to remove any material that includes “credible threats of physical harm,” it said.
As law professor James Grimmelmann writes in response: “As content moderation strategies go, ‘We didn’t realize until now that the Nazis on our platform were inciting violence’ perhaps raises more questions than it answers.”
Molly White, who remains one of the best critics of tech-boosterism, also noted that Substack’s decisions seemed likely to piss off the most people possible: first coddling the Nazis (pissing off most people, who hate Nazis), and then, by reversing course, pissing off the people who had cheered on the “we don’t moderate Nazis” stance.
In the end, Substack is apparently removing five Nazi newsletters. As White notes, this will piss off the most people possible. The people who want Substack to do more won’t be satisfied and will be annoyed it took pointing out the literal support for genocide for Substack to realize that maybe they don’t want literal Nazis. And the people who supported Substack will be annoyed that Substack was “pressured” into removing these accounts.
Again, there are important points in all of this, and it’s why I started this post off by pointing to the speed run post at the beginning. You can create a site and say you’ll host whatever kinds of content you want. You can create a site and say that you won’t do any moderation at all. Those are valid decisions to make.
But they’re not decisions that are in support of “free speech.” Because a site that caters to Nazis is not a site that caters to free speech. Because (as we’ve seen time and time again), such sites drive away people who don’t like being on a site associated with Nazis. And, so you’re left in a situation where you’re really just supporting Nazis and not much else.
Furthermore, for all of McKenzie’s pretend high-minded talk about “civil liberties” and “freedom,” it’s now come out that he had no problem at all putting his fingers on the scale, pulling together a list of (mostly) nonsense peddlers to sign a letter in support of his own views. McKenzie literally organized the “we support Substack supporting Nazis” letter signing campaign. Which, again, he’s totally allowed to do, but it calls into question his claimed neutrality in all of this. He’s not setting up a “neutral” site to host speech. He’s created a site that hosts some speech and doesn’t host other speech. It promotes some speech, and doesn’t promote other speech.
Those are all choices, and they have nothing to do with supporting free speech.
Running a private website is all about tradeoffs. You have to make lots of choices, and those choices are difficult and are guaranteed to piss off many, many people (no matter what you do). For what it’s worth, this is still why I think a protocol-based solution should beat a centralized solution every time, because with protocols you can set up a variety of approaches and let people figure out what works best, rather than relying on one centralized system.
Substack is apparently realizing that there were some tradeoffs to openly supporting Nazism, and will finally take some action on that. It won’t satisfy most people, and now it’s likely to piss off the people who were excited about Nazis on Substack. But, hey, it’s one more level up on the content moderation speed run.
Filed Under: content moderation, nazi bar, nazis
Companies: platformer, substack
Substack Turns On Its ‘Nazis Welcome!’ Sign
from the your-reputation-is-what-you-allow dept
Back in April, Substack founder/CEO Chris Best gave an interview to Nilay Patel in which he refused to answer some fairly basic questions about how the company planned to handle trust & safety issues on its new Substack Notes microblogging service. As I noted at the time, Best seemed somewhat confused about how all this worked, and by refusing to be explicit about the company’s policies he was implicitly saying that Substack welcomed Nazis. As we noted, this was the classic “Nazi bar” scenario: if you’re not kicking out Nazis, you get the reputation as “the Nazi bar” even if you, yourself, don’t like Nazis.
What I tried to make clear in that post (which some people misread) was that the main issue I had was Best trying to act as if his refusal to make a statement wasn’t a statement. As I noted, if you’re going to welcome Nazis to a private platform, don’t pretend you’re not doing that. Be explicit about it. Here’s what I said at the time:
If you’re not going to moderate, and you don’t care that the biggest draws on your platform are pure nonsense peddlers preying on the most gullible people to get their subscriptions, fucking own it, Chris.
Say it. Say that you’re the Nazi bar and you’re proud of it.
Say “we believe that writers on our platform can publish anything they want, no matter how ridiculous, or hateful, or wrong.” Don’t hide from the question. You claim you’re enabling free speech, so own it. Don’t hide behind some lofty goals about “freedom of the press” when you’re really enabling “freedom of the grifters.”
You have every right to allow that on your platform. But the whole point of everyone eventually coming to terms with the content moderation learning curve, and the fact that private businesses are private and not the government, is that what you allow on your platform is what sticks to you. It’s your reputation at play.
And your reputation when you refuse to moderate is not “the grand enabler of free speech.” Because it’s the internet itself that is the grand enabler of free speech. When you’re a private centralized company and you don’t deal with hateful content on your site, you’re the Nazi bar.
Most companies that want to get large enough recognize that playing to the grifters and the nonsense peddlers works for a limited amount of time, before you get the Nazi bar reputation, and your growth is limited. And, in the US, you’re legally allowed to become the Nazi bar, but you should at least embrace that, and not pretend you have some grand principled strategy.
The key point: your reputation as a private site is what you allow. If you allow garbage, you’re a garbage site. If you allow Nazis, you’re a Nazi site. You’re absolutely allowed to do that, but you shouldn’t pretend to be something that you’re not. You should own it, and say “these are our policies, and we realize what our reputation is.”
Substack has finally, sorta, done that. But, again, in the dumbest way possible.
A few weeks back, the Atlantic ran an article by Jonathan Katz with the headline “Substack Has a Nazi Problem.” In what should be no surprise given what happened earlier this year with Best’s interview, the Nazis very quickly realized that Substack was a welcome home for them:
An informal search of the Substack website and of extremist Telegram channels that circulate Substack posts turns up scores of white-supremacist, neo-Confederate, and explicitly Nazi newsletters on Substack—many of them apparently started in the past year. These are, to be sure, a tiny fraction of the newsletters on a site that had more than 17,000 paid writers as of March, according to Axios, and has many other writers who do not charge for their work. But to overlook white-nationalist newsletters on Substack as marginal or harmless would be a mistake.
At least 16 of the newsletters that I reviewed have overt Nazi symbols, including the swastika and the sonnenrad, in their logos or in prominent graphics. Andkon’s Reich Press, for example, calls itself “a National Socialist newsletter”; its logo shows Nazi banners on Berlin’s Brandenburg Gate, and one recent post features a racist caricature of a Chinese person. A Substack called White-Papers, bearing the tagline “Your pro-White policy destination,” is one of several that openly promote the “Great Replacement” conspiracy theory that inspired deadly mass shootings at a Pittsburgh, Pennsylvania, synagogue; two Christchurch, New Zealand, mosques; an El Paso, Texas, Walmart; and a Buffalo, New York, supermarket. Other newsletters make prominent references to the “Jewish Question.” Several are run by nationally prominent white nationalists; at least four are run by organizers of the 2017 “Unite the Right” rally in Charlottesville, Virginia—including the rally’s most notorious organizer, Richard Spencer.
Some Substack newsletters by Nazis and white nationalists have thousands or tens of thousands of subscribers, making the platform a new and valuable tool for creating mailing lists for the far right. And many accept paid subscriptions through Substack, seemingly flouting terms of service that ban attempts to “publish content or fund initiatives that incite violence based on protected classes.” Several, including Spencer’s, sport official Substack “bestseller” badges, indicating that they have at a minimum hundreds of paying subscribers. A subscription to the newsletter that Spencer edits and writes for costs $9 a month or $90 a year, which suggests that he and his co-writers are grossing at least $9,000 a year and potentially many times that. Substack, which takes a 10 percent cut of subscription revenue, makes money when readers pay for Nazi newsletters.
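To spell out the arithmetic behind that last point: the subscriber floor here is an assumption based on the “bestseller” badge (hundreds of paying subscribers); the prices and the 10 percent cut come from the quoted piece.

# Working through the figures in the quoted passage. The subscriber count is
# an assumed floor implied by the "bestseller" badge; real numbers could be
# much higher.
subscribers = 100            # assumed minimum, per the badge
annual_price = 90            # the $90/year tier mentioned in the article
gross = subscribers * annual_price
substack_cut = gross * 0.10  # Substack's 10 percent of subscription revenue
print(f"gross: ${gross:,.0f}/year, Substack's share: ${substack_cut:,.0f}/year")
# prints: gross: $9,000/year, Substack's share: $900/year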
Again, none of this should be surprising. If you signal publicly that you allow Nazis (and allow them to make money), don’t be surprised when the Nazis arrive. In droves. Your reputation is what you allow.
And, of course, once that happens some other users might realize they don’t want to support the platform that supports Nazis. So a bunch of Substackers got together and sent a group letter saying they didn’t want to be on a site supporting Nazis and wanted to know what the Substack founders had to say for themselves.
From our perspective as Substack publishers, it is unfathomable that someone with a swastika avatar, who writes about “The Jewish question,” or who promotes Great Replacement Theory, could be given the tools to succeed on your platform. And yet you’ve been unable to adequately explain your position.
In the past you have defended your decision to platform bigotry by saying you “make decisions based on principles not PR” and “will stick to our hands-off approach to content moderation.” But there’s a difference between a hands-off approach and putting your thumb on the scale. We know you moderate some content, including spam sites and newsletters written by sex workers. Why do you choose to promote and allow the monetization of sites that traffic in white nationalism?
Eventually, the Substack founders had to respond. They couldn’t stare off into the distance like Best did during the Nilay Patel interview in April. So another founder, Hamish McKenzie, finally published a Note saying “yes, we allow Nazis and we’re not going to stop.” Of course, as is too often the case on these things, he tried to couch it as a principled stance:
I just want to make it clear that we don’t like Nazis either—we wish no-one held those views. But some people do hold those and other extreme views. Given that, we don’t think that censorship (including through demonetizing publications) makes the problem go away—in fact, it makes it worse.
We believe that supporting individual rights and civil liberties while subjecting ideas to open discourse is the best way to strip bad ideas of their power. We are committed to upholding and protecting freedom of expression, even when it hurts. As @Ted Gioia has noted, history shows that censorship is most potently used by the powerful to silence the powerless. (Ted’s note: substack.com/profile/4937458-ted-gioia/…)
Our content guidelines do have narrowly defined proscriptions, including a clause that prohibits incitements to violence. We will continue to actively enforce those rules while offering tools that let readers curate their own experiences and opt in to their preferred communities. Beyond that, we will stick to our decentralized approach to content moderation, which gives power to readers and writers.
So this is, more or less, what I had asked them to do back in April. If you’re going to host Nazis just say “yes, we host Nazis.” And, I even think it’s fair to say that you’re doing that because you don’t think that moderation does anything valuable, and certainly doesn’t stop people from being Nazis. And, furthermore, I also think Substack is correct that its platform is slightly more decentralized than systems like ExTwitter or Facebook, where content mixes around and gets promoted. Since most of Substack is individual newsletters and their underlying communities, it’s more equivalent to Reddit, where the “moderation” questions are pushed further to the edges: you have some moderation that is centralized from the company, some that is just handled by people deciding whether or not to subscribe to certain Substacks (or subreddits), and some that is decided by the owner of each Substack (or moderators of each subreddit).
And Hamish and crew are also not wrong that censorship is frequently used by the powerful to silence the powerless. This is why we are constantly fighting for free speech rights here, and against attempts to weaken them, because we know how frequently the power to censor is abused.
But the Substack team is mixing up “free speech rights” — which involve what the government can limit — with their own expressive rights and their own reputation. I don’t support laws that stop Nazis from saying what they want to say, but that doesn’t mean I allow Nazis to put signs on my front lawn. This is the key fundamental issue anyone discussing free speech has to understand. There is a vast and important difference between (1) the government passing laws that stifle speech and (2) private property owners deciding whether or not they wish to help others, including terrible people, speak.
Because, as private property owners, you have your own free speech and association rights. So while I support the rights of Nazis to speak, that does not mean I’m going to assist them in using my property to speak, or assist them in making money.
Substack has chosen otherwise. They are saying that they will not just allow Nazis to use their property, but they will help fund those Nazis.
That’s a choice. And it’s a choice that should impact Substack’s own reputation.
Ken “Popehat” White explained it well in his own (yes, Substack) post on all of this.
First, McKenzie’s post consistently blurs the roles and functions of the state and the individual. For instance, he pushes the hoary trope that censoring Nazis just drives them underground where they are more dangerous: “But some people do hold those and other extreme views. Given that, we don’t think that censorship (including through demonetizing publications) makes the problem go away—in fact, it makes it worse.” That may be true for the state, but is it really true for private actors? Do I make the Nazi problem worse by blocking Nazis who appear in my comments? Does a particular social media platform make Nazis worse by deciding that they, personally, are not going to host Nazis? How do you argue that, when there are a vast array of places for Nazis to post on the internet? Has Gab fallen? Is Truth Social no more?
McKenzie continues the blurring by suggesting that being platformed by private actors is a civil right: “We believe that supporting individual rights and civil liberties while subjecting ideas to open discourse is the best way to strip bad ideas of their power. We are committed to upholding and protecting freedom of expression, even when it hurts.” That’s fine, but nobody has the individual right, civil liberty, or freedom of expression to be on Substack if Substack doesn’t want them there. In fact that’s part of Substack’s freedom of expression and civil liberties — to build the type of community it wants, that expresses its values. If Substack’s values is “we publish everybody” (sort of, as noted below) that’s their right, but a different approach doesn’t reflect a lack of support for freedom of expression. McKenzie is begging the question — assuming his premise that support of freedom of expression requires Substack to accept Nazis, not just for the government to refrain from suppressing Nazis.
As Ken further notes, Substack’s own terms of service, and the moderation it already does, block plenty of 1st Amendment protected speech, including hate speech, sexually explicit content, doxxing, and spam. There are good reasons that a site might block any of that speech, but it stands out when you then say “but, whoa whoa whoa, removing Nazis, that’s a step too far, and an offense to free speech.” It’s all about choices.
Your reputation is what you allow. And Substack has decided that its reputation is “sex is bad, but Nazis are great.”
Or, as White notes:
My point is not that any of these policies is objectionable. But, like the old joke goes, we’ve established what Substack is, now we’re just haggling over the price. Substack is engaging in transparent puffery when it brands itself as permitting offensive speech because the best way to handle offensive speech is to put it all out there to discuss. It’s simply not true. Substack has made a series of value judgments about which speech to permit and which speech not to permit. Substack would like you to believe that making judgments about content “for the sole purpose of sexual gratification,” or content promoting anorexia, is different than making judgments about Nazi content. In fact, that’s not a neutral, value-free choice. It’s a value judgment by a platform that brands itself as not making value judgments. Substack has decided that Nazis are okay and porn and doxxing isn’t. The fact that Substack is engaging in a common form of free-speech puffery offered by platforms doesn’t make it true.
And this is exactly the argument that we keep trying to make and have been trying to make for years about content moderation questions. Supporting free speech has to mean supporting free speech against government attempts at suppression and also supporting the right of private platforms to make their own decisions about what to allow and what not to allow. Because if you say that private platforms must allow all speech, then you don’t actually get more speech. You get a lot less. Because most platforms will decide they don’t want to be enabling Nazis, and only the ones who eagerly cater to Nazis survive. That leaves fewer places to speak, and fewer people willing to speak in places adjacent to Nazis.
Substack has every right to make the choices it has made, but it shouldn’t pretend that it’s standing up for civil rights or freedoms, because it’s not. It’s making value judgments that everyone can see, and its value judgment is “Nazis are welcome, sex workers aren’t.”
Your reputation is what you allow. Substack has hung out its shingle saying “Nazis welcome.”
Everyone else who uses the platform now gets to decide whether or not they wish to support the site that facilitates the funding of Nazis. Some will. Some will find the tradeoffs acceptable. But others won’t. I’ve already seen a few prominent Substack writers announce that they have moved or that they’re intending to do so.
These are all free speech decisions as well. Substack has made its decision. Substack has declared what its reputation is going to be. I support the company’s free speech rights to make that choice. But that does not mean I need to support the platform personally.
Your reputation is what you allow and Substack has chosen to support Nazis.
Filed Under: chris best, content moderation, free speech, hamish mckenzie, hate speech, nazi bar, nazis, reputation, trust and safety
Companies: substack
Elon Musk, Once Again, Tries To Throttle Links To Sites He Dislikes
Elon Musk’s commitment to free speech and the free exchange of ideas has always been a joke. Despite his repeated claims to being a “free speech absolutist,” and promising that his critics and rivals alike would be encouraged to remain on exTwitter, he has consistently shown that he has a ridiculously thin skin and a hair-trigger willingness to remove, suppress, or silence those he dislikes.
In the past, we’ve talked about his efforts to ban links to platforms he was scared of, or the banning of links to platforms which he felt were unfairly competing with exTwitter, or how he would ban journalists if they annoyed him, or the banning of accounts of critics he had promised just weeks earlier to leave on the platform.
Basically, Musk has made it clear that he views content moderation as a tool to get back at whoever displeases him. The latest, as first revealed by the Washington Post, is that exTwitter is using the t.co link shortener that Twitter controls (and through which it routes every link posted on the platform) to throttle links to certain sites, including the NY Times and Reuters, as well as social media operations he’s scared of, including Instagram, Facebook, Substack and Bluesky.
The company formerly known as Twitter has begun slowing the speed with which users can access links to the New York Times, Facebook and other news organizations and online competitors, a move that appears targeted at companies that have drawn the ire of owner Elon Musk.
It’s a weird kind of throttling, first noticed by someone on Hacker News, who observed that clicking on any of the disfavored URLs produced a roughly five-second delay. As that user explained:
Twitter won’t ban domains they don’t like but will waste your time if you visit them.
I’ve been tracking the NYT delay ever since it was added (8/4, roughly noon Pacific time), and the delay is so consistent it’s obviously deliberate.
The NY Times itself confirmed this as well. However, that report also noted that after the Washington Post story started making the rounds, the throttle suddenly started to disappear.
The slowness, known in tech parlance as “throttling,” initially affected rival social networks including Facebook, Bluesky and Instagram, as well as the newsletter site Substack and news outlets including Reuters and The New York Times, according to The Times’s analysis. The delay to load links from X was relatively minor — about 4.5 seconds — but still noticeable, according to the analysis. Several of the services that were throttled have faced the ire of X’s owner, Elon Musk.
By Tuesday afternoon, the delay to reaching the news sites appeared to have lifted, according to The Times’s analysis.
My own spot test found that the throttling appears to be gone as well.
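For those curious what such a spot test looks like, here is a minimal sketch of one way to measure it. This is just an illustration, not the methodology the Times or the Hacker News user actually used, and the t.co URLs below are placeholders you would replace with real shortened links copied out of tweets. The idea is simply to time how long the t.co redirector takes to answer for links pointing at different destinations.

import time
import urllib.request


class _NoRedirect(urllib.request.HTTPRedirectHandler):
    # Don't follow the redirect: we only care how long t.co itself takes to
    # answer, not how fast the destination site loads afterward.
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None


_opener = urllib.request.build_opener(_NoRedirect)


def time_tco_response(url: str, timeout: float = 30.0) -> float:
    """Return seconds until t.co sends back any response for the given link."""
    start = time.monotonic()
    try:
        _opener.open(url, timeout=timeout)
    except OSError:
        # With redirects disabled, the expected 301 surfaces as an HTTPError;
        # connection errors and timeouts also land here. Either way, the
        # elapsed time is what we came for.
        pass
    return time.monotonic() - start


if __name__ == "__main__":
    test_links = {
        "nytimes.com (placeholder t.co link)": "https://t.co/EXAMPLE1",
        "control site (placeholder t.co link)": "https://t.co/EXAMPLE2",
    }
    for label, link in test_links.items():
        print(f"{label}: {time_tco_response(link):.2f}s")

Run repeatedly against the same links and a deliberate, consistent multi-second gap between the throttled and control destinations becomes hard to explain as ordinary network noise, which is exactly the pattern the Hacker News user described.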
In the end, a short time delay is certainly not a huge deal, but it does, again, show how Elon is willing to weaponize the tools at his disposal to try to hurt those he dislikes, and to do so in a way that is transparently petty once you look for it, yet subtle enough that it’s less likely to be noticed right away.
It is, of course, also another example of how fickle Musk’s actual commitment to “free speech” is. This is not new of course, and he is free to do this if he wants to. But he shouldn’t pretend that his view of free speech is somehow more noble than old Twitter’s when his reasons for such throttling are transparently petty payback, rather than based on any coherent policy.
Whatever you thought of old Twitter’s moderation practices, they were at least actually based on policy, and not whatever personally irked Jack or the trust & safety team.
Filed Under: elon musk, links, petty, throttling, vindictive
Companies: facebook, instagram, meta, ny times, substack, twitter, x
On Social Media Nazi Bars, Tradeoffs, And The Impossibility Of Content Moderation At Scale
from the decentralizing-the-nazi-bar-problem dept
A few weeks ago I wrote about an interview that Substack CEO Chris Best did about his company’s new offering, Substack Notes, and his unwillingness to answer questions about specific content moderation hypotheticals. As I said at the time, the worst part was Best’s unwillingness to just own up to what the site’s content moderation plans actually were: that it would be quite open to hosting the speech of almost anyone, no matter how terrible. That’s a decision that you can make (in the US at least), but if you’re going to make it, you have to be willing to own it and be clear about it, which Best was unwilling to do.
I compared it to the “Nazi bar” problem that has been widely discussed on social media in the past, where if you own a bar and don’t kick the Nazis out up front, you get the reputation as a “Nazi bar” that is difficult to get rid of.
It was interesting to see the response to this piece. Some people got mad, claiming it was unfair to call Best a Nazi, even though I was not doing that. As in the story of the Nazi bar, no one is claiming that the bar owner is a Nazi, just that the public reputation of his bar would be that it’s a Nazi bar. That was the larger point. Your reputation is what you allow, and if you’re taking a stance that you don’t want to get involved at all, and you want to allow such things, that’s the reputation that’s going to stick.
I wasn’t calling Best a Nazi or a Nazi sympathizer. I was saying that if he can’t answer a straightforward question like the one that Nilay Patel asked him, Nazis are going to interpret that as he’s welcoming them in, and they will act accordingly. So too will people who don’t want to be seen hanging out at the Nazi bar. The vaunted “marketplace of ideas” includes the ability for a large group of people to say “we don’t want to be associated with that at all…” and to find somewhere else to go.
And this brings us to Bluesky. I’ve written a bunch about Bluesky going back to Jack Dorsey’s initial announcement which cited my paper among others as part of the inspiration for betting on protocols.
As Bluesky has gained a lot of attention over the past week or so, there have been a lot of questions raised about its content moderation plans. A lot of people, in particular, seem confused by its plans for composable moderation, which we spoke about a few weeks ago. I’ve even had a few people suggest to me that Bluesky’s plans represented a similar kind of “Nazi bar” problem as Best’s interview did, in particular because their initial reference implementation shows “hate speech” as a toggle.
I’ve also seen some people claim (falsely) that Bluesky would refuse to remove Nazis based on this. I think there is some confusion here, and it’s important to go deeper on how this might work. I have no direct insight into Bluesky’s plans. And they will likely make big mistakes, because everyone in this space makes mistakes. It’s impossible not to. And, who knows, perhaps they will run into their own Nazi bar problem, but I think there are some differences that are worth exploring here. And those differences suggest that Bluesky is better positioned not to be the Nazi bar.
The first is that, as I noted in the original piece about Best, there’s a big difference between a centralized service and its moderation choices, and a decentralized protocol. Bluesky is a bit confusing to some as it’s trying to do both things. Its larger goal is to build, promote, and support the open AT Protocol as an open social media protocol for a decentralized social media system with portable identification. Bluesky itself is a reference app for the protocol, showing how things can be done — and, as such it has to do content moderation tasks to avoid Bluesky itself running into the Nazi bar problem. And, at least so far, it seems to be doing that.
The team at Bluesky seems to recognize this. Unlike Best, they’re not refusing to answer the question; they’re talking openly about the challenges here, and so far have been willing to remove truly disruptive participants, as CEO Jay Graber has noted.
But they definitely also recognize that content moderation at scale is impossible to do well, and believe that they need a different approach. And, again, the team at Bluesky recognizes at least some of the challenges facing them.
But, this is where things get potentially more interesting. Under a traditional centralized social media setup, there is one single decision maker who has to make the calls. And then you’re in a sort of benevolent dictator setup (or at least you hope so, as the malicious dictator threat becomes real).
And this is where we go on a little tangent about content moderation: again, it’s not just difficult. It’s not just “hard” to do. It’s impossible to do well. The people who are moderated, with rare exceptions, will disagree with your moderation decisions. And, while many people think that there are a whole bunch of obvious cases and just a few that are a little fuzzy, the reality (and this is where the scale comes in) is that there are a ton of borderline cases that all come down to very subjective calls over what does or does not violate a policy.
To some extent, going straight to the “Nazi” example is unfair, because there’s a huge spectrum between the user who is a hateful bigot, deliberately trying to cause trouble, and the good helpful user who is trying to do well. There’s a very wide range in the middle and where people draw their own lines will differ massively. Some of them may include inadvertent or ignorant assholery. Some of it may just include trolling. Or sometimes there are jokes that some people find funny, and others find threatening. Sometimes people are just scared and lash out out of fear or confusion. Some people feel cornered, and get defensive when they should be looking inward.
Humans are fucking messy.
And this is where the protocol approach with composable moderation becomes a lot more interesting. On the most extreme calls, the ones where there are legal requirements, such as child sexual abuse material and copyright infringement, for example, those can be removed at the protocol level. But as you start moving up into the more murky areas, where many of the calls are subjective (not so much “is this person a Nazi” but more along the lines of “is this person deliberately trolling, or just uninformed…”), the composable moderation system begins to (1) let the end users make their own rules and (2) enable any number of third parties to build tools to work with those rules.
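To make that split concrete, here is a minimal, hypothetical sketch of the general idea. This is not Bluesky’s actual implementation; the label names, the action levels, and the preference model are all invented for illustration. The shape of it is simply: labeling services (which anyone can run) attach labels to posts, and each reader decides what happens to posts carrying a given label.

from dataclasses import dataclass, field
from enum import Enum


class Action(Enum):
    SHOW = "show"
    WARN = "warn"   # show behind a click-through warning
    HIDE = "hide"


@dataclass
class Post:
    author: str
    text: str
    labels: set[str] = field(default_factory=set)  # applied by third-party labelers


@dataclass
class UserPreferences:
    # Per-label choices made by the reader, not by the protocol operator.
    choices: dict[str, Action]
    default: Action = Action.SHOW

    def action_for(self, post: Post) -> Action:
        # The most restrictive choice across all of a post's labels wins.
        severity = [Action.SHOW, Action.WARN, Action.HIDE]
        picked = self.default
        for label in post.labels:
            choice = self.choices.get(label, self.default)
            if severity.index(choice) > severity.index(picked):
                picked = choice
        return picked


if __name__ == "__main__":
    posts = [
        Post("alice", "a perfectly ordinary post"),
        Post("troll", "borderline harassment", labels={"harassment"}),
        Post("spammer", "buy my coin", labels={"spam", "scam"}),
    ]
    # One reader's settings; a different reader could pick different actions,
    # or subscribe to a different labeling service entirely.
    prefs = UserPreferences(choices={"harassment": Action.WARN, "spam": Action.HIDE})
    for post in posts:
        print(f"{post.author}: {prefs.action_for(post).value}")

The point of the sketch is the division of labor: the protocol only handles the legally required removals, labelers compete on the judgment calls, and readers keep the final say over what they see.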
Some people may (for perfectly good reasons, bad reasons, or no reasons at all) just not have any tolerance for any kind of ignorance. Others may be more open to it, perhaps hoping to guide ignorance to knowledge. Just as an example, outside of the “hateful” space, we’ve talked before about things like “eating disorder” communities. One of the notable things there was that when those communities were on more mainstream services, people who had gotten over an eating disorder would often go back to those communities and provide help and support to those who needed it. When those communities were booted from the mainstream services, that actually became much more difficult, and the communities became angrier and more insulated, and there was less ability for people to help those in need.
That is, there will still need to be some decision making at the protocol level (this is something that people who insist on “totally censorship proof” systems seem to miss: if you do this, eventually the government is going to shut you down for hosting CSAM), but the more of the decision making that can be pushed to a different level and the more control put in the hands of the user, the better.
This allows for more competition for better moderation, first of all, but also allows for the variance in preferences, which is what you see in the simple version that Bluesky implemented. The biggest decisions can be made at the protocol level, but above that, let there be competitive approaches and more user control. It’s unclear exactly where Bluesky the service will come down in the end, but the early indications from what’s been said so far are that the service level “Bluesky” will be more aggressive in moderating, while the protocol level “AT Protocol” will be more open.
And… that’s probably how it should be. Even the worst people should be able to use a telephone or email. But enabling competition at the service level AND at the moderation level creates more of the vaunted “marketplace of ideas,” where (unlike what some people think the marketplace of ideas is about), if you’re regularly a disruptive, disingenuous, or malicious asshole, you are much more likely to get less (or possibly no) attention from the popular moderation services and algorithms. Those are the consequences of your own actions. But you don’t get banned from the protocol.
To some extent, we’ve already seen this play out (in a slightly different form) with Mastodon. Truly awful sites like Gab, and ridiculously pathetic sites like Truth Social, both use the underlying ActivityPub and open source Mastodon code, but they have been defederated from the rest of the fediverse. They still get to use the underlying technology, but they don’t get to use it to be obnoxiously disruptive to the main userbase who wants nothing to do with them.
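As a purely illustrative sketch of what defederation means mechanically (this is hypothetical, not Mastodon’s actual code, and the domain names are made up): each server keeps its own list of blocked instances and simply refuses activity originating from them, even though those instances keep running the same underlying software for whoever still wants to federate with them.

# Hypothetical, simplified example of server-level defederation. The blocked
# domains are placeholders; real servers maintain these lists themselves.
BLOCKED_INSTANCES = {
    "nazi-bar.example",
    "another-bad-actor.example",
}


def accept_incoming_activity(activity: dict) -> bool:
    """Return True if a federated post from another server should be accepted."""
    actor = activity.get("actor", "")  # e.g. "https://some.instance/users/alice"
    domain = actor.split("//")[-1].split("/")[0]
    return domain not in BLOCKED_INSTANCES


if __name__ == "__main__":
    print(accept_incoming_activity({"actor": "https://mastodon.example/users/alice"}))  # True
    print(accept_incoming_activity({"actor": "https://nazi-bar.example/users/troll"}))  # False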
With AT Protocol, and the concept of composable moderation, this can get taken even further. Rather than just having to choose your server, and be at the whims of that server admin’s moderation choices (or the pressure from other instances which keeps many instances in check and aligned), the AT Protocol setup allows for a more granular and fluid system, where there can be a lot more user empowerment, without having to resort to banning certain users from using the technology entirely.
This will never satisfy some people, who will continue to insist that the only way to stop a “bad” person is to ban them from basically any opportunity to use communications infrastructure. However, I disagree for multiple reasons. First, as noted above, outside of the worst of the worst, deciding who is “good” and who is “bad” is way more complicated and fraught and subjective than people like to note, and where and how you draw those lines will differ for almost everyone. And people who are quick to draw those lines should realize that… some other day, someone who dislikes you might be drawing those lines too. And, as the eating disorder case study demonstrated, there’s a lot more complexity and nuance than many people believe.
That’s why a decentralized solution is so much better than a centralized one. With a decentralized system you don’t have to be worrying about yourself getting cut out either. Everyone gets to set their own rules and their own conditions and their own preferences. And, if you’re correct that the truly awful people are truly awful, then it’s likely that most moderation tools and most servers will treat them as such, and you can rely on that, rather than having them cut off at the underlying protocol level.
It’s also interesting to see how the decentralized social media protocol nostr is handling this. While it appears that some of the initial thinking behind it was the idea that nothing should ever be taken down, it appears that many are recognizing how impossible that is, and they’re now having really thoughtful discussions on “bottom up content moderation” specifically to avoid the “Nazi bar” problem.
Eventually in the process, thoughtful people recognize that a community needs some level of norms and rules. The question is how are those created, how are they implemented, and how are they enforced and by whom. A decentralized system allows for much greater control by end users to have the systems and communities that more closely match their own preferences, rather than requiring the centralized authority handle everything, and be able to live up to everyone’s expectations.
As such, you may end up with results like Mastodon/ActivityPub, where “Nazi bar” areas still form, but they are wholly separated from other users. Or you may end up with a result where the worst users are still there, shouting into the wind with no one bothering to listen, because no one wants to hear them. Or, possibly, it will be something else entirely as people experiment with new approaches enabled by a composable moderation system.
I’ll add one other note on that, because there are times when I’ve discussed this that people highlight that there are other kinds of risks beyond direct harassment. And just blocking a user does not stop them from harassing, or from encouraging or directing harassment against another. This is absolutely true. But this kind of setup also allows for better tooling for monitoring such things without having to be exposed to them directly. This could take the form of Block Party’s “lockout folder,” where you can have a trusted third party review the harassing messages you’ve been receiving rather than having to go through them yourself. Or, conceivably, other monitoring and warning services could pop up that track people who are doing awful things, try to keep them from succeeding, and alert the proper people if things require escalation.
In short, decentralizing things, and allowing many different approaches, and open systems and tooling doesn’t solve all problems, but it presents some creative ways to handle the Nazi Bar problem that seem likely to be a lot more effective than living in denial and staring blankly into the Zoom screen as a reporter asks you a fairly basic question about how you’ll handle racist assholes on your platform.
Filed Under: competition, composable moderation, content moderation, decentralization, nazi bar, platforms, protocols, protocols not platforms
Companies: bluesky, substack
Substack CEO Chris Best Doesn’t Realize He’s Just Become The Nazi Bar
from the just-fucking-own-it dept
I get it. I totally get it. Every tech dude comes along and has this thought: “hey, we’ll be the free speech social media site. We won’t do any moderation beyond what’s required.” Even Twitter initially thought this. But then everyone discovers reality. Some discover it faster than others, but everyone discovers it. First, you realize that there’s spam. Or illegal content such as child sexual abuse material. And if that doesn’t do it for you, the copyright police will.
But, then you realize that beyond spam and content that breaks the rules, you end up with malicious users who cause trouble. And trouble drives away users, advertisers, or both. And if you don’t deal with the malicious users, the malicious users define you. It’s the “oh shit, this is a Nazi bar now” problem.
And, look, sure, in the US, you can run the Nazi bar, thanks to the 1st Amendment. But running a Nazi bar is not winning any free speech awards. It’s not standing up for free speech. It’s building your own brand as the Nazi bar and abdicating your own free speech rights of association to kick Nazis out of your private property, and to craft a different kind of community. Let the Nazis build their own bar, or everyone will just assume you’re a Nazi too.
It was understandable a decade ago, before the idea of “trust & safety” was a thing, that not everyone would understand all this. But it is unacceptable for the CEO of a social media site today to not realize this.
Enter Substack CEO Chris Best.
Substack has faced a few controversies regarding the content moderation (or lack thereof) for its main service, which allows writers to create blogs with subscription services built in. I had been a fan of the service since it launched (and had actually spoken with one of the founders pre-launch to discuss the company’s plans, and even whether or not we could do something with them as Techdirt), as I think it’s been incredibly powerful as a tool for independent media. But the exec team there often seems to have taken a “head in the sand” approach to understanding any of this.
That became ridiculously clear on Thursday when Chris Best went on Nilay Patel’s Decoder podcast at the Verge to talk about Substack’s new Notes product, which everyone is (fairly or not) comparing to Twitter. Best had to know that content moderation questions were coming, but seemed not just unprepared for them, but completely out of his depth.
One clip from the interview is particularly damning: Chris trying to simply stare down Nilay just doesn’t work.
The larger discussion is worth listening to, or reading below. As Nilay notes in his commentary on the transcript, he feels that there should be much less moderation the closer you get to being an infrastructure provider (this is something I not only agree with, but have spent a lot of time discussing). Substack has long argued that its more hands-off approach in providing its platform to writers is because it’s more like infrastructure.
But the Notes feature takes the company closer to consumer-facing social media, and so Nilay had some good questions about that, which Chris just refused to engage with. Here’s the fuller exchange, which provides more context than that one clip:
Nilay Patel: Notes is the most consumer-y feature. You’re saying it’s inheriting a bunch of expectations from the consumer social platforms, whether or not you really want it to, right? It’s inheriting the expectations of Twitter, even from Twitter itself. It’s inheriting the expectations that you should be able to flirt with people and not have to subscribe to their email lists.
In that spectrum of content moderation, it’s the tip of the spear. The expectations are that you will moderate that thing just like any big social platform will moderate. Up until now, you’ve had the out of being able to say, “Look, we are an enterprise software provider. If people don’t want to pay for this newsletter that’s full of anti-vax information, fine. If people don’t want to pay or subscribe to this newsletter where somebody has harsh views on trans people, fine.” That’s the choice. The market will do it. And because you’re the enterprise software provider, you’ve had some cover. When you run a social network that inherits all the expectations of a social network and people start posting that stuff and the feed is algorithmic and that’s what gets engagement, that’s a real problem for you. Have you thought about how you’re going to moderate Notes?
Chris Best: We think about this stuff a lot, you might be surprised to learn.
Nilay Patel: I know you do, but this is a very different product.
Chris Best: Here’s how I think about this: Substack is neither an enterprise software provider nor a social network in the mold that we’re used to experiencing them. Our self-conception, the thing that we are attempting to build, and I think if you look at the constituent pieces, in fact, the emerging reality is that we are a new thing called the subscription network, where people are subscribing directly to others, where the order in the system is sort of emergent from the empowered — not just the readers but also the writers: the people who are able to set the rules for their communities, for their piece of Substack. And we believe that we can make something different and better than what came before with social networking.
The way that I think about this is, if we draw a distinction between moderation and censorship, where moderation is, “Hey, I want to be a part of a community, of a place where there’s a vibe or there’s a set of rules or there’s a set of norms or there’s an expectation of what I’m going to see or not see that is good for me, and the thing that I’m coming to is going to try to enforce that set of rules,” versus censorship, where you come and say, “Although you may want to be a part of this thing and this other person may want to be a part of it, too, and you may want to talk to each other and send emails, a third party’s going to step in and say, ‘You shall not do that. We shall prevent that.’”
And I think, with the legacy social networks, the business model has pulled those feeds ever closer. There hasn’t been a great idea for how we do moderation without censorship, and I think, in a subscription network, that becomes possible.
Nilay Patel: Wow. I mean, I just want to be clear, if somebody shows up on Substack and says “all brown people are animals and they shouldn’t be allowed in America,” you’re going to censor that. That’s just flatly against your terms of service.
Chris Best: So, we do have a terms of service that have narrowly prescribed things that are not allowed.
Nilay Patel: That one I’m pretty sure is just flatly against your terms of service. You would not allow that one. That’s why I picked it.
Chris Best: So there are extreme cases, and I’m not going to get into the–
Nilay Patel: Wait. Hold on. In America in 2023, that is not so extreme, right? “We should not allow as many brown people in the country.” Not so extreme. Do you allow that on Substack? Would you allow that on Substack Notes?
Chris Best: I think the way that we think about this is we want to put the writers and the readers in charge–
Nilay Patel: No, I really want you to answer that question. Is that allowed on Substack Notes? “We should not allow brown people in the country.”
Chris Best: I’m not going to get into gotcha content moderation.
Nilay Patel: This is not a gotcha… I’m a brown person. Do you think people on Substack should say I should get kicked out of the country?
Chris Best: I’m not going to engage in content moderation, “Would you or won’t you this or that?”
Nilay Patel: That one is black and white, and I just want to be clear: I’ve talked to a lot of social network CEOs, and they would have no hesitation telling me that that was against their moderation rules.
Chris Best: Yeah. We’re not going to get into specific “would you or won’t you” content moderation questions.
Nilay Patel: Why?
Chris Best: I don’t think it’s a useful way to talk about this stuff.
Nilay Patel: But it’s the thing that you have to do. I mean, you have to make these decisions, don’t you?
Chris Best: The way that we think about this is, yes, there is going to be a terms of service. We have content policies that are deliberately tuned to allow lots of things that we disagree with, that we strongly disagree with. We think we have a strong commitment to freedom of speech, freedom of the press. We think these are essential ingredients in a free society. We think that it would be a failure for us to build a new kind of network that can’t support those ideals. And we want to design the network in a way where people are in control of their experience, where they’re able to do that stuff. We’re at the very early innings of that. We don’t have all the answers for how those things will work. We are making a new thing. And literally, we launched this thing one day ago. We’re going to have to figure a lot of this stuff out. I don’t think…
Nilay Patel: You have to figure out, “Should we allow overt racism on Substack Notes?” You have to figure that out.
Chris Best: No, I’m not going to engage in speculation or specific “would you allow this or that” content.
Nilay Patel: You know this is a very bad response to this question, right? You’re aware that you’ve blundered into this. You should just say no. And I’m wondering what’s keeping you from just saying no.
Chris Best: I have a blanket [policy that] I don’t think it’s useful to get into “would you allow this or that thing on Substack.”
Nilay Patel: If I read you your own terms of service, will you agree that this prohibition is in that terms of service?
Chris Best: I don’t think that’s a useful exercise.
Nilay Patel: Okay. I’m granting you the out that when you’re the email service provider, you should have a looser moderation rule. There are a lot of my listeners and a lot of people out there who do not agree with me on that. I’ll give you the out that, as the email service provider, you can have looser moderation rules because that is sort of a market-driven thing, but when you make the consumer product, my belief is that you should have higher moderation rules. And so, I’m just wondering, applying the blanket, I understand why that was your answer in the past. It’s just there’s a piece here that I’m missing. Now that it’s the consumer product, do you not think that it should have a different set of moderation standards?
Chris Best: You are free to have that belief. And I do think it’s possible that there will be different moderation standards. I do think it’s an interesting thing. I think the place that we maybe differ is you’re coming at this from a point where you think that because something is bad… let’s grant that this thing is a terrible, bad thing…
Nilay Patel: Yeah, I think you should grant that this idea is bad.
Chris Best: That therefore censorship of it is the most effective tool to prevent that. And I think we’ve run, in my estimation over the past five years, however long it’s been, a grand experiment in the idea that pervasive censorship successfully combats ideas that the owners of the platforms don’t like. And my read is that that hasn’t actually worked. That hasn’t been a success. It hasn’t caused those ideas not to exist. It hasn’t built trust. It hasn’t ended polarization. It hasn’t done any of those things. And I don’t think that taking the approach that the legacy platforms have taken and expecting it to have different outcomes is obviously the right answer the way that you seem to be presenting it to be. I don’t think that that’s a question of whether some particular objection or belief is right or wrong.
Nilay Patel: I understand the philosophical argument. I want to be clear. I think government speech regulations are horrible, right? I think that’s bad. I don’t think there should be government censorship in this country, but I think companies should state their values and go out into the marketplace and live up to their values. I think the platform companies, for better or worse, have missed it on their values a lot for a variety of reasons. When I ask you this question, [I’m asking], “Do you make software to spread abhorrent views, that allows abhorrent views to spread?” That’s just a statement of values. That’s why you have terms of service. I know that there’s stuff that you won’t allow Substack to be used for because I can read it in your terms of service. Here, I’m asking you something that I know is against your terms of service, and your position is that you refuse to say it’s against your terms of service. That feels like not a big philosophical conversation about freedom of speech, which I will have at the drop of a hat, as listeners to this show know. Actually, you’re saying, “You know what? I don’t want to state my values.” And I’m just wondering why that is.
Chris Best: I think the conversation about freedom of speech is the essential conversation to have. I don’t think this “let me play a gotcha and ask this or that”–
Nilay Patel: Substack is not the government. Substack is a company that competes in the marketplace.
Chris Best: Substack is not the government, but we still believe that it’s essential to promote freedom of the press and freedom of speech. We don’t think that that is a thing that’s limited to…
Nilay Patel: So if Substack Notes becomes overrun by racism and transphobia, that’s fine with you?
Chris Best: We’re going to have to work very hard to make Substack Notes be a great place to have the readers and the writers be in charge, where you can have the kinds of conversations that you find valuable. That’s the exciting challenge that we have ahead of us.
I get the academic aspect of where Chris is coming from. He’s correct that content moderation hasn’t made crazy ideas go away. These are the reasons I coined the term “the Streisand Effect” years ago, to point out the futility of just trying to stifle speech. And these are the reasons I talk about “protocols, not platforms” as a way to explore enabling more speech without centralized systems that suppress speech.
But Substack is a centralized system. And a centralized system that doesn’t do trust & safety… is the Nazi bar. And if you have some other system that you think allows for “moderation but not censorship” then be fucking explicit about what it is. There are all sorts of interventions short of removing content that have been shown to work well (though, with other social media, they still get accused of “censorship” for literally expressing more speech). But the details matter. A lot.
I get that he thinks his focus is on providing tools, but even so two things stand out: (1) he’s wrong about how all this works and (2) even if he believes that Substack doesn’t need to moderate, he has to own that in the interview rather than claiming that Nilay is playing gotcha with him.
If you’re not going to moderate, and you don’t care that the biggest draws on your platform are pure nonsense peddlers preying on the most gullible people to get their subscriptions, fucking own it, Chris.
Say it. Say that you’re the Nazi bar and you’re proud of it.
Say “we believe that writers on our platform can publish anything they want, no matter how ridiculous, or hateful, or wrong.” Don’t hide from the question. You claim you’re enabling free speech, so own it. Don’t hide behind some lofty goals about “freedom of the press” when you’re really enabling “freedom of the grifters.”
You have every right to allow that on your platform. But the whole point of everyone eventually coming to terms with the content moderation learning curve, and the fact that private businesses are private and not the government, is that what you allow on your platform is what sticks to you. It’s your reputation at play.
And your reputation when you refuse to moderate is not “the grand enabler of free speech.” Because it’s the internet itself that is the grand enabler of free speech. When you’re a private centralized company and you don’t deal with hateful content on your site, you’re the Nazi bar.
Most companies that want to get big recognize that playing to the grifters and the nonsense peddlers only works for a limited amount of time before you get the Nazi bar reputation and your growth stalls. And, in the US, you’re legally allowed to become the Nazi bar, but you should at least embrace that, and not pretend you have some grand principled strategy.
This is what Nilay was getting at. When you’re not the government, you can set whatever rules you want, and the rules you set are the rules that will define what you are as a service. Chris Best wants to pretend that Substack isn’t the Nazi bar, while he’s eagerly making it clear that it is.
It’s stupidly short-sighted, and no, it won’t support free speech. Because people who don’t want to hang out at the Nazi bar will just go elsewhere.
Filed Under: chris best, content moderation, free speech, nazi bar, nilay patel, private property
Companies: substack
After Matt Taibbi Leaves Twitter, Elon Musk ‘Shadow Bans’ All Of Taibbi’s Tweets, Including The Twitter Files
from the a-show-in-three-acts dept
The refrain to remember with Twitter under Elon Musk: it can always get dumber.
Quick(ish) recap:
On Thursday, Musk’s original hand-picked Twitter Files scribe, Matt Taibbi, went on Mehdi Hasan’s show (an appearance Taibbi himself had demanded of Hasan, after Hasan asked about Taibbi’s opinion of Musk blocking accounts for Modi in India). The interview did not go well for Taibbi, in the same manner that finding an iceberg did not go well for the Titanic.
One segment of the absolutely brutal interview involves Hasan asking Taibbi the very question that Taibbi had said he wanted to come on the show to answer: what was his opinion of Musk blocking Twitter accounts in India, including those of journalists and activists, that were critical of the Modi government? Hasan notes that Taibbi has talked up how he believes Musk is supporting free speech, and asked Taibbi if he’d like to criticize the blocking of journalists.
Taibbi refused to do so, and claimed he didn’t really know about the story, even though it was the very story that Hasan initially tweeted about that resulted in Taibbi saying he’d tell Hasan his opinion on the story if he was invited on the show. It was, well, embarrassing to watch Taibbi squirm as he knew he couldn’t say anything critical about Musk. He had already seen how the second Twitter Files scribe, Bari Weiss, was excommunicated from the Church of Musk for criticizing Musk’s banning of journalists.
The conversation was embarrassing in real time:
Hasan: What’s interesting about Elon Musk is that, we’ve checked, you’ve tweeted over thirty times about Musk since he announced he was going to buy Twitter last April, and not a word of criticism about him in any of those thirty plus tweets. Musk is a billionaire who’s been found to have violated labor laws multiple times, including in the past few days. He’s attacked labor unions, reportedly fired employees on a whim, slammed the idea of a wealth tax. Told his millions of followers to vote Republican last year, and in response to a right-wing coup against Bolivian leftist President Evo Morales tweeted “we’ll coup whoever we want.”
And yet, you’ve been silent on all that.
How did you go, Matt, from being the scourge of Wall St., the man who called Goldman Sachs the Vampire Squid, to being unwilling to say anything critical at all about this right-wing, reactionary, anti-union billionaire?
Taibbi: Look….[long pause… then a sigh]. So… so… I like Elon Musk. I met him. This is part of the calculation when you do one of these stories. Are they going to give you information that’s gonna make you look stupid. Do you think their motives are sincere about doing x or y…. I did. I thought his motives were sincere about the Twitter Files. And I admired them. I thought he did a tremendous public service in opening the files up. But that doesn’t mean I have to agree with him about everything.
Hasan: I agree with you. But you never disagree with him. You’ve gone silent. Some would say that’s access journalism.
Taibbi: No! No. I haven’t done… I haven’t reported anything that limits my ability to talk about Elon Musk…
Hasan: So will you criticize him today? For banning journalists, for working with the Modi government to shut down speech, for being anti-union. You can go for it. I’ll give you as much time as you’d like. Would you like to criticize Musk now?
Taibbi: No, I don’t particularly want to… uh… look, I didn’t criticize him really before… uh… and… I think that what the Twitter Files are is a step in the right direction…
Hasan: But it’s the same Twitter he’s running right now…
Taibbi: I don’t have to disagree with him… if you wanna ask… a question in bad faith…
[crosstalk]
Hasan: It’s not in bad faith, Matt!
Taibbi: It absolutely is!
Hasan: Hold on, hold on, let me finish my question. You’re saying that he’s good for Twitter and good for speech. I’m saying that he’s using Twitter to help one of the most right-wing governments in the world censor speech. I will criticize that. Will you?
Taibbi: I have to look at the story first. I’m not looking at it now!
By Friday, that exchange became even more embarrassing. Because, due to a separate dispute that Elon was having with Substack (more on that in a bit), he decided to arbitrarily bar anyone from retweeting, replying, or even liking any tweet that had a Substack link in it. But Taibbi’s vast income stems from having one of the largest paying Substack subscriber bases. So, in rapid succession he announced that he was leaving Twitter, and would rely on Substack, and that this would likely limit his ability to continue working on the Twitter Files. Minutes later, Elon Musk unfollowed Taibbi on Twitter.
Quite a shift in the Musk/Taibbi relationship in 24 hours.
Then came Saturday. First Musk made up some complete bullshit about both Substack and Taibbi, claiming that Taibbi was an employee of Substack, and also that Substack was violating Twitter’s API rules (which keep being rewritten to retcon whatever petty, angry outburst Musk is having).
Somewhat hilariously, the Community Notes feature — which old Twitter had created, though once Musk changed its name from “Birdwatch” to “Community Notes,” he acted as if it was his greatest invention — is correcting Musk:
That’s because, either late Friday or early Saturday, Musk had also added substack.com to Twitter’s list of “unsafe” URLs, suggesting that it may contain malicious links that could steal information. Of course, the only malicious one here was Musk.
Also correcting Musk? Substack founder Chris Best:
Then, a little later on Saturday, people realized that searching for Matt Taibbi’s account… turned up nothing. Taibbi wrote on Substack that he believed all his Twitter Files had been “removed,” as first pointed out by William LeGate:
But, if you dug into Taibbi’s Twitter account, you could still find them. Mashable’s Matt Binder solved the mystery and revealed, somewhat hilariously, that Taibbi’s account appears to have been “max deboosted” or, in Twitter’s terms, had the highest level of visibility filters applied, meaning you can’t find Taibbi in search. Or, in the parlance of today, Musk shadowbanned Matt Taibbi.
Again, this shouldn’t be a surprise, even though the irony is super thick. Early Twitter Files revealed that Twitter had long used visibility filtering to limit the spread of certain accounts. Musk screamed about how this was horrible shadowbanning… but then proceeded to use those same tools to suppress the speech of people he disliked. And now he’s using the tool, at max power, to hide Taibbi and the very files that we were (falsely) told “exposed” how old Twitter shadowbanned people.
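For anyone wondering what “max deboosting” actually means mechanically, here’s a minimal, hypothetical sketch of level-based visibility filtering. The level names, the `Account` type, and the `search` function are all invented for illustration; this is not Twitter’s actual code, just the general shape of the technique being described.

```python
from dataclasses import dataclass
from enum import IntEnum


class VisibilityLevel(IntEnum):
    """Hypothetical filter levels, from no restriction to maximum 'deboost'."""
    NONE = 0           # fully visible everywhere
    DOWNRANKED = 1     # still findable, just ranked lower in timelines
    SEARCH_HIDDEN = 2  # excluded from search results
    MAX_DEBOOST = 3    # excluded from search, trends, and recommendations


@dataclass
class Account:
    handle: str
    visibility: VisibilityLevel = VisibilityLevel.NONE


def search(accounts: list[Account], query: str) -> list[Account]:
    """Return matching accounts, silently dropping anyone at or above
    SEARCH_HIDDEN -- the account still exists and can still post, it
    just never shows up, which is what makes this 'shadow' banning."""
    return [
        a for a in accounts
        if query.lower() in a.handle.lower()
        and a.visibility < VisibilityLevel.SEARCH_HIDDEN
    ]


accounts = [Account("example_filtered", VisibilityLevel.MAX_DEBOOST),
            Account("example_normal")]
print(search(accounts, "example"))  # only the unfiltered account comes back
```

The account isn’t suspended and nothing is deleted; it’s simply filtered out of the places people would normally discover it, which is exactly the behavior the Twitter Files crowd had been calling “shadowbanning.”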
This is way more ironic than the Alanis song.
So, yes, we went from Taibbi praising Elon Musk for supporting free speech and supposedly helping to expose the evil shadowbanning of the old regime, and refusing to criticize Musk on anything, to Taibbi leaving Twitter, and Musk not just unfollowing him but shadowbanning him and all his Twitter Files.
In about 48 hours.
Absolutely incredible.
Just a stunning show of leopards eating faces.
Not much happened on Sunday, though Twitter first redirected any searches for “substack” to “newsletters” (what?) and then quietly stopped throttling links to Substack, with no explanation given. And as far as I can tell, Taibbi’s account is still “max deboosted.”
Anyway, again, to be clear: Elon Musk is perfectly within his rights to be as arbitrary and capricious as he wants to be with his own site. But can people please stop pretending his actions have literally anything to do with “free speech”?
Filed Under: content moderation, elon musk, matt taibbi, mehdi hasan, shadowbanning, twitter files, visibility filtering
Companies: substack, twitter
Elon Musk Blocks Likes And Replies To Tweets That Link To Substack In A Pique Of Pettiness
from the free-speech,-but-not-that-speech dept
Poor Matt Taibbi. He destroyed his credibility to take on the Twitter Files, and did so in part to raise the profile of his Substack site, Racket News. Indeed, Substack has become a home for nonsense peddlers of all kinds to create their own little bubbles of nonsense. In congressional testimony, Taibbi admitted that having Elon Musk hand-pick him to deliver the “Twitter Files” had increased the number of paying subscribers to his Substack (though he defended it by claiming that the money has all gone towards journalism).
But… apparently Elon has decided that no one on Twitter is allowed to even like or reply to any tweet that links to a Substack site. Including to Taibbi’s. Oops.
Let’s back up, though. You may recall that back in December, as the number of people deserting Twitter became scary, Twitter instituted a new policy saying that you were not allowed to mention a somewhat arbitrary and random grab bag of other social media sites.
A day or so later, after many people yelled about it (and his Mom was the only one defending it), Elon rolled back that policy, admitting that it “was a mistake.”
Of course, since then, he’s systematically moved to make it more and more difficult to move to services like Mastodon, but at least people are still able to link to Mastodon and other social media.
But now, suddenly Substack is a problem? Twitter will still allow users to send a tweet with a link to a Substack page, but that tweet can no longer be liked, replied to, or retweeted. Basically, tweets with Substack links are dead in the water.
It seems that Substack’s “crime” is releasing a tool for more short form content that looks a bit like Twitter, called “Notes.”
And thus, the world’s pettiest man has decided to retaliate.
You could almost (but not really) understand banning links to Substack. But banning likes and replies? That’s just crazy. If you try to do any of those things with a tweet that links to Substack, you get an error message:
Amusingly, this is acting as a bit of a Streisand Effect for Notes. I had seen a headline fly by about it, but hadn’t looked at the details until now.
This move by Twitter impacts many people, amusingly including many in the Substack crowd who have been falsely insisting that Musk was a savior of their free speech. And now he’s blocking basically anyone from promoting or interacting with their content.
And, among those impacted… Matt Taibbi, who threw all of his credibility eggs into the Musk basket. Just yesterday Taibbi literally refused to criticize Musk for anything during the Mehdi Hasan interview, saying he thought Musk was clearly good for free speech on Twitter. And today he’s saying that Twitter is now unusable:
Also, in yesterday’s interview, I noted that it was funny that Taibbi claimed the Biden campaign got special treatment from Twitter because it could reach out to people there, while he couldn’t. So when someone asked him if he had reached out to Musk about the Substack blocks, Taibbi admitted that of course he had, though he hadn’t heard back yet:
Of course, maybe that explains why Taibbi refused to criticize Musk yesterday. Didn’t want to cut off that sweet, sweet access.
Either way, considering just how frequently these capricious moves are being made by the “new” Twitter, it again raises the question of why people are still relying on it as a key source of information and as a way to distribute their own content.
Update: This legitimately made me laugh out loud:
![Screenshot of a tweet: “Of all things: I learned earlier today that Substack links were being blocked on this platform. When I asked why, I was told it’s a dispute over the new Substack Notes platform… Since sharing links to my articles is a primary reason I come to this platform, I was alarmed and asked what was going on. I was given the option of posting articles on Twitter instead. I’m obviously staying at Substack, and will be moving to Substack Notes next week.”](https://i0.wp.com/www.techdirt.com/wp-content/uploads/2023/04/image-6.png?resize=602%2C433&ssl=1)
Update 2: So did this:
Filed Under: blocks, competition, elon musk, likes, pettiness, retweets
Companies: substack, twitter
Content Moderation Case Study: Newsletter Platform Substack Lets Users Make Most Of The Moderation Calls (2020)
from the newsletter-moderation dept
Summary: Substack launched in 2018, offering writers a place to engage in independent journalism and commentary. Looking to fill a perceived void in newsletter services, Substack gave writers an easy-to-use platform they could monetize through subscriptions and pageviews.
As Substack began to attract popular writers, concerns over published content began to increase. The perception was that Substack attracted an inordinate number of creators who had either been de-platformed elsewhere or embraced views not welcome on other platforms. High-profile writers who found themselves jobless after crafting controversial content appeared to gravitate to Substack (including big names like Glenn Greenwald of The Intercept and The Atlantic’s Andrew Sullivan), giving the platform the appearance of embracing those views by providing a home for writers unwelcome pretty much everywhere else.
A few months before the current controversy over Substack’s content reached critical mass, the platform attempted to address questions about content moderation with a blog post that said most content decisions could be made by readers, rather than Substack itself. Its blog post made it clear users were in charge at all times: readers had no obligation to subscribe to content they didn’t like and writers were free to leave at any time if they disagreed with Substack’s decisions.
But even then, the platform’s moderation policies weren’t completely hands off. As its post pointed out, the platform would take its own steps to remove spam, porn, doxxing, and harassment. Of course, the counterargument raised was that Substack’s embrace of controversial contributors provided a home for people who’d engaged in harassment on other platforms (and who were often no longer welcome there).
Decisions to be made by Substack:
- Does offloading moderation to users increase the amount of potentially-objectionable content hosted by Substack?
- Does this form of moderation give Substack the appearance it approves of controversial content contributed by others?
- Is the company prepared to take a more hands-on approach if the amount of objectionable content hosted by Substack increases?
Questions and policy implications to consider:
- Does a policy that relies heavily on users and writers for enforcement allow those users and contributors to shape Substack’s “identity”?
- Does limiting moderation by Substack attract the sort of contributors Substack desires to host and/or believes will make it more profitable?
- Does the sharing of content off-platform undermine Substack’s belief that readers have complete control over the kind of content they’re seeing?
Resolution: The controversy surrounding Substack’s roster of writers continued to increase, along with calls for the platform to do more to moderate hosted content. Substack’s response was to reiterate its embrace of “free press and free expression,” but it also offered a few additional moderation tweaks not present in its policies when the platform first received increased attention late last year.
Most significantly, it announced it would not allow “hate speech” on its platform, although its definition was narrower than the policies of other social media services. Attacks on people based on race, ethnicity, religion, gender, etc. would not be permitted. However, Substack would continue to host attacks on “ideas, ideologies, organizations, or individuals for other reasons, even if those attacks are cruel and unfair.”
Originally posted to the Trust & Safety Foundation website.
Filed Under: content moderation, controversy, email, newsletters
Companies: substack
Techdirt Podcast Episode 259: A New Model For Independent Journalism, With Casey Newton
from the how-journos-get-paid dept
The origins of Techdirt lie in a newsletter that Mike started over 20 years ago, and in all that time, the business models for online journalism have never stopped evolving and changing, especially when it comes to independent reporting. Now, newsletters are making a comeback with a new model, driven especially by writers flocking to the Substack platform. One such person is technology journalist Casey Newton with his new Platformer newsletter, and this week Casey joins the podcast to discuss his experience and what it can teach us about the future of independent journalism online.
Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes or Google Play, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
Filed Under: business models, casey newton, journalism, podcast
Companies: substack