john matze – Techdirt
Parler's Found A New Host (And A New CEO)… For Now
from the ah-look-at-that dept
On Monday Parler announced to the world that it was back with a new host (and a new interim CEO after the board fired founder and CEO John Matze a few weeks ago). The “board” is controlled by the (until recently, secret) other founder: Rebekah Mercer, who famously also funded Cambridge Analytica, a company built upon sucking up social media data and using it to influence elections. When Matze was fired, he told Reuters that the company was being run by two people since he’d been removed: Matthew Richardson and Mark Meckler.
Richardson has many ties to the Mercers, and was associated with Cambridge Analytica and the Brexit effort. Meckler was, for a few years, considered one of the founding players and leading spokespeople for the “Tea Party” movement in the US, before splitting with that group and pushing for a new Constitutional Convention (at times in a “strange bedfellows” way with Larry Lessig). With the news on Monday that Parler was back up (sort of), it was also announced that Meckler had taken over as interim CEO.
Given the roles of Meckler, Richardson, and Mercer, you can bet that the site is still pushing to be the Trumpiest of social media sites. As for who the new hosting firm actually is, there's been some confusion in the press. The Twitter account @donk_enby, who famously scraped and archived most of the old Parler before it was shut down by Amazon last month, originally said Parler's new hosting firm was CloudRoute, which appears to be some kind of Microsoft Azure reseller. In a later tweet, @donk_enby noted that another firm, SkySilk, seems to share IP space with CloudRoute, perhaps renting IP addresses from it.
A few hours later, SkySilk admitted to being the new hosting company and put out a weird statement that suggests a somewhat naive team who had no idea what they were getting into:
SkySilk, a Web infrastructure company based outside of Los Angeles, is now hosting Parler, SkySilk’s chief executive, Kevin Matossian, confirmed to NPR.
“SkySilk is well aware that Parler has received an aggressive response from those who believe their platform has been used as a safe haven for some bad actors,” Matossian said in a statement. “Let me be clear, Skysilk does not advocate nor condone hate, rather, it advocates the right to private judgment and rejects the role of being the judge, jury, and executioner.”
He said that while the company may disagree with some of Parler's content, he believes Parler is taking "necessary steps" to monitor its platform.
“Once again, this is not a matter of SkySilk endorsing the message, but rather, the right of the messenger to deliver it. SkySilk will support Parler in their efforts to be a nonpartisan Public Square as we are convinced this is the only appropriate course of action,” he said in a statement.
Nonpartisan, eh? Remember, Parler has a long history of taking down “leftist” accounts and bragging about it. Bizarrely, SkySilk’s CEO is… also a Hollywood film producer.
Separately, the new Parler is apparently no longer using the Russian service "DDoS-Guard" for DDoS protection; it had relied on the service for some time while its placeholder page was up.
Reports, including the NPR report linked here, note that the new Parler says that it will moderate content, using “an algorithm and human moderators” to take down content “that threatens or incites violence.” It also promises that there will be an appeals process for moderated content. The moderation will also include a “trolling filter” that will apparently hide, but not remove, “content that attacks someone based on race, sex, sexual orientation or religion.” People who wish to view such content can do so by clicking through.
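The hide-but-click-through behavior described above is a common moderation pattern (an interstitial over "sensitive" content rather than outright removal). Here is a minimal sketch of how such a pipeline might be wired up; the keyword lists and function names are hypothetical illustrations, not Parler's actual implementation, which would involve trained models and human review:

```python
from dataclasses import dataclass
from enum import Enum

class Visibility(Enum):
    VISIBLE = "visible"
    INTERSTITIAL = "hidden until clicked through"
    REMOVED = "removed"

# Hypothetical keyword lists -- a real system would use trained
# classifiers plus human moderators, not simple string matching.
ATTACK_TERMS = {"slur_a", "slur_b"}    # attacks based on race, sex, etc.
VIOLENCE_TERMS = {"kill", "shoot"}     # threats / incitement to violence

@dataclass
class Post:
    author: str
    text: str

def moderate(post: Post) -> Visibility:
    words = set(post.text.lower().split())
    if words & VIOLENCE_TERMS:
        # Content "that threatens or incites violence" is taken down.
        return Visibility.REMOVED
    if words & ATTACK_TERMS:
        # "Trolling filter": hidden, not removed; viewers can click through.
        return Visibility.INTERSTITIAL
    return Visibility.VISIBLE

print(moderate(Post("a", "nice day today")))  # Visibility.VISIBLE
```

The key design point the article describes is the middle tier: removal and full visibility are not the only options, and an interstitial shifts the final decision to the viewer.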
All of the old content appears to have been wiped out, though the old accounts remain. There are also lots of reports claiming that the site is struggling to stay up (indeed, as I type this it appears to be down again).
This certainly seems somewhat shaky, at best, and it will be fun to watch whether a random "cloud hosting" firm that almost no one has heard of, run by a small-time Hollywood producer, can actually handle this kind of traffic and attention.
Filed Under: content moderation, hosting, john matze, kevin matossian, mark meckler, matthew richardson, social media
Companies: parler, skysilk
Parler's CEO Promises That When It Comes Back… It'll Moderate Content… With An Algorithm
from the are-you-guys-serious? dept
Parler, Parler, Parler, Parler. Back in June of last year when Parler was getting lots of attention for being the new kid on the social media scene with a weird (and legally nonsensical) claim that it would only moderate “based on the 1st Amendment and the FCC” we noted just how absolutely naive this was, and how the company would have to moderate and would also have to face the same kinds of impossible content moderation choices that every other website eventually faces. In fact, we noted that the company (in part due to its influx of users) was seemingly speedrunning the content moderation learning curve.
Lots of idealistic, but incredibly naive, website founders jump into the scene and insist that, in the name of free speech they won’t moderate anything. But every one of them quickly learns that’s impossible. Sometimes that’s because the law requires you to moderate certain content. More often, it’s because you recognize that without any moderation, your website becomes unusable. It fills up with garbage, spam, harassment, abuse and more. And when that happens, it becomes unusable by normal people, drives away many, many users, and certainly drives away any potential advertisers. And, finally, in such an unusable state it may drive away vendors — like your hosting company that doesn’t want to deal with you any more.
And, as we noted, Parler’s claims not to moderate were always a part of the big lie. The company absolutely moderated, and the CEO even bragged to a reporter about banning “leftist trolls.” The whole “we’re the free speech platform” was little more than a marketing ploy to attract trolls and assholes, with a side helping of “we don’t want to invest in content moderation” like every other site has to.
Of course, as the details have come out in the Amazon suit, the company did do some moderation. Just slowly and badly. Last week, the company admitted that it had taken down posts from wacky lawyer L. Lin Wood in which he called for VP Mike Pence to face “firing squads.”
Amazon showed, quite clearly, that it gave Parler time to set up a real content moderation program, but the company blew it off. But now, recognizing it has to do something, Parler continues to completely reinvent all the mistakes of every social media platform that has come before it. Parler’s CEO, John Matze, is now saying it will come back with “algorithmic” content moderation. This was in an interview done on Fox News, of course.
“We're going to be doing things a bit differently. The platform will be free speech first, and we will abide by and we will be promoting free speech, but we will be taking more algorithmic approaches to content but doing it to respect people's privacy, too. We want people to have privacy and free speech, so we don't want to track people. We don't want to use their history and things of that nature to predict possible violations, but we will be having algorithms look at all the content... to try and predict whether it's a terms-of-service violation so we can adjust quicker and the most egregious things can get taken down,” Matze said. “So calls for violence, incitements, things of that nature, can be taken down immediately.”
This is… mostly word salad. The moderation issue and the privacy question are separate. So is the free speech issue. Just because people have free speech rights, it doesn’t mean that Parler (or anyone) has to assist them.
Also, Matze is about to learn (as every other company has) that algorithms can help a bit, but really won’t be of much help in the long run. Companies with much more resources, including Google and Facebook, have thrown algorithmic approaches to content moderation at their various platforms, and they are far from perfect. Parler will be starting from a much weaker position, and will almost certainly find that the algorithm doesn’t actually replace a true trust and safety program like most companies have.
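To see why, consider a toy keyword filter, the simplest form of "algorithmic" moderation. The same word appears in a genuine threat, a news report, and harmless slang, and a context-free rule flags all three (a hypothetical example, not any platform's real system):

```python
# Why naive algorithmic moderation falls short: the same word appears in
# a threat, a news report, and harmless slang. A keyword rule cannot
# tell them apart; context (and usually human review) can.
flagged_terms = {"shoot"}

def naive_flag(text: str) -> bool:
    """Flag a post if it contains any term on the blocklist."""
    return bool(set(text.lower().split()) & flagged_terms)

examples = [
    "i will shoot you tomorrow",                   # true threat (correctly flagged)
    "police say the suspect threatened to shoot",  # news report (false positive)
    "let's shoot some hoops later",                # harmless slang (false positive)
]
print([naive_flag(t) for t in examples])  # [True, True, True]
```

Production systems use statistical classifiers rather than keyword lists, but the underlying problem is the same: the meaning of speech depends on context the model does not reliably capture, which is why a trust and safety team remains necessary.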
In that interview, Matze is also stupidly snarky about Amazon’s tool, claiming:
“We even offered to Amazon to have our engineers immediately use Amazon services (Amazon Rekognition and other tools) to find that content and get rid of it quickly, and Amazon said, 'That's not enough,' so apparently they don't believe their own tools can be good enough to meet their own standards,” he said.
That’s incredibly misleading, and makes Matze look silly. Amazon Rekognition is a facial recognition system. What does that have to do with moderating harassment, death threats, and abuse off your site? Absolutely nothing.
Instead of filing terrible lawsuits and making snarky comments, it's stunning that Parler doesn't just shut up, hire an actual trust and safety expert, and learn from what every other company has done in the past. That's not to say it needs to handle moderation in the same way. More variation and different approaches are always worth testing out. The problem is that you should do that from a position of knowledge and experience, not ignorance. Parler has apparently chosen the other path.
Filed Under: algorithm, content moderation, john matze
Companies: amazon, parler
Parler, Desperate For Attention, Pretends It Doesn't Need Section 230
from the playing-with-fire dept
One of the more bizarre parts of the Parler debate is the weird insistence among many in the Trumpist set that somehow taking away Section 230 will magically lead to less moderation, rather than more. This is almost certainly untrue because, assuming a shift back to the more traditional distributor liability rules understood to apply before Section 230, websites would potentially face liability for law-violating content once they were shown to have knowledge of it.
We don’t have to look far to see such a system in practice: it’s how the DMCA’s Section 512 notice-and-takedown regime effectively works today. Under that regime, anyone who wants anything taken offline just files a notice, and if a website wishes to avoid liability, they then need to remove the content. That removal protects them from liability. Prior to the notice, it’s unlikely that they would be seen as liable, since they wouldn’t have notice of the content in question possibly violating a law. Of course, as we’ve seen, the DMCA’s notice-and-takedown provision is widely abused. Recent studies have shown that the notice-and-takedown provisions are regularly used to target non-infringing works and many sites pull down that content to avoid liability.
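The incentive structure of that regime can be sketched as a tiny state machine (a deliberate simplification of the actual Section 512 process, with hypothetical names): complying with a notice preserves the safe harbor, while ignoring one leaves the host with "knowledge" and thus legal exposure, so over-removal becomes the rational default.

```python
# Toy model of DMCA 512-style notice-and-takedown incentives.
# A host that removes content on notice keeps its safe harbor; one that
# ignores a notice now has "knowledge" and risks liability -- which is
# why removing first and asking questions later is the cheap option.

hosted = {"video1": "online", "video2": "online"}
liability_risk = set()

def receive_notice(content_id: str, comply: bool) -> None:
    """Process a takedown notice against a piece of hosted content."""
    if content_id not in hosted:
        return
    if comply:
        hosted[content_id] = "taken down"  # safe harbor preserved
    else:
        liability_risk.add(content_id)     # host now has notice/knowledge

receive_notice("video1", comply=True)   # removed, no exposure
receive_notice("video2", comply=False)  # stays up, host bears the risk
```

Note what the model leaves out: nothing in the flow checks whether the notice is valid, which is exactly the abuse vector the studies mentioned above document.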
It’s quite likely that we’d see the same sort of result without 230, leading to significantly more removals of perfectly legal speech — which seems to be the exact opposite of what Trumpist fans of revoking 230 expect. Last month, we were happy to see that the Trumpist social media site, Parler, seemed to recognize this, and its CEO John Matze correctly pointed out that removing Section 230 would help the big companies and harm smaller competitors (though, hilariously, he tried to lump himself in as a big guy):
Section 230 is actually a really nice thing, because what it does is protect small businesses from liability, who are trying to compete with Big Tech. So I respectfully disagree, to some extent…. I don’t think the outright removal of 230 is a good idea, because it promotes competition and it actually helps the small guys more. 230, if it was removed, wouldn’t have a big impact on the companies with a large financial balance sheet, like Facebook, Twitter, and even Parler. We’d be okay. But any other competitors would get hurt the most.
He's right (except the bit about being a big guy: having an ultra-wealthy Trumpist wacko as a co-founder doesn't make you a big player). Parler would face tremendous liability without 230, and at some point the Mercers wouldn't enjoy continuing to pay the company's neverending legal bills.
Yet, on Monday, it seems that political pressure won out among Parler senior exec staff, and they put out a truly ridiculous press release, headlined that Parler now “welcomes” a full repeal of Section 230. If you read the actual press release, and not just the headline, you realize that the company’s complaint is not so much about 230, but what might happen if any 230 reform is driven by Facebook. And that is the proper concern. If Facebook gets to dictate the terms of 230 reform, it will be massively damaging to companies like Parler. And thus, one can read Parler’s press release as saying that if there were only two choices and those two choices were (1) Facebook-led reform or (2) a full repeal of 230, it would prefer a full repeal:
Today Parler executives declared that Parler and other free speech platforms would be better off if Section 230 of the 1996 Communications Decency Act were repealed than they would be under a politically feasible re-write of Section 230. Such a re-write would, they believe, further encourage speech-restrictive, content moderation policies of established tech giants Facebook, Twitter and YouTube. As it stands, the current interpretation of Section 230 already encourages these practices, by providing immunity from liability for removal of any content a platform’s leadership finds “objectionable.” If Mark Zuckerberg and his cronies in Congress have their way, these practices would be not only protected from liability, they would be mandated. Online platforms would, under a revised Section 230, become de facto censors, restricting speech that would otherwise be protected by the First Amendment. Not only is this morally wrong, it would simultaneously increase barriers to entry while limiting the ability of social media companies to compete by offering different policies.
Much of that paragraph is actually accurate! Many of the reform proposals would push companies to be more proactive in taking down protected content, though if they were actually mandated, it would make any such law infringe upon the 1st Amendment itself, and it would eventually get tossed as such (though that process would take a while).
That said, the framing here by Parler is immensely stupid. Most people took this to mean it wants a full repeal, rather than just preferring one when compared to a Facebook-led reform. And that’s how people are going to remember it. I’m also confused about how Parler thinks that Zuckerberg has “cronies” in Congress. Have they not watched the various hearings where both Republicans and Democrats seem hellbent on beating up on Zuck every chance they get? I honestly cannot name a single member of Congress who I think is supportive of Facebook at all.
Even more stupid, Parler CEO John Matze doubled down on this line of thinking on a post on his own feed (I’d link, but Parler makes it impossible to link directly to posts).
That says:
Reflecting on 2020 and revisiting the inauthentic testimonies from Zuckerberg, Sundar and Dorsey, and observing the role social media plays, today I am changing my stance on section 230. This was not an easy change, but thanks to the people on Parler, trusted advisors and our legal team, I am more confident.
Over the past year, Americans have watched and experienced viewpoint discrimination from Twitter, FB and Google. We have seen medical advice blocked and erased from the internet. We have seen climate politics pushed down our throats. We have seen election integrity concerns forcefully invalidated and we have seen tech tyrants attempting to control our democratic republic in the name of “preserving” it.
As it stands, Parler benefits from Section 230 today. We are afforded the same protections. But, overarchingly, 230 is bad for free speech and bad for our country. Parler does not need it, and rest assured we will come out just fine either way. The tyrants not so much.
This… makes no sense at all. If this is truly from Matze's “trusted advisors” and the company's “legal team,” he should fire them and get new advisors and lawyers — and maybe stop getting his information from idiots. All of the stuff he complains about would be significantly worse without 230. And, no, Parler would not come out fine either way. He may think so, with Rebekah Mercer's big wallet backing him up, but legal fees add up quickly, and Parler would likely discover that faster than most. No one has an endless supply of cash, and the Mercer family, while rich, is not that rich.
Honestly, this stinks of a stupid publicity stunt by Parler, thinking that it’s hitting Facebook when it’s down. It’s pretty clear that Matze and Parler’s senior leadership team have little to no experience with how this stuff plays in DC. He’s just handed Facebook exactly the ammunition it wants to pressure Congress to reform 230 in the way it wants, because everyone is going to take these statements, ignore the weak and slightly misleading nuance, and remember “Parler thinks 230 reform is fine.”
It’s a stupid move, but no one said that the company was smart. Remember, these are the same guys who bizarrely claimed they’d only moderate based on “the 1st Amendment and the FCC” (whatever the fuck that means) and then quickly started banning people who made fun of Parler. And Matze is the guy who, while pretending to support free speech, also recently bragged to a reporter that he was sitting around banning leftist trolls who had shown up on his site.
It’s honestly not surprising, just disappointing, for Parler to take this self-defeating stance. It had a chance to be a useful participant in these discussions, and has instead decided to simply pull a prank to get itself some attention instead.
Filed Under: intermediary liability, john matze, rebekah mercer, section 230
Companies: facebook, parler
What If Cambridge Analytica Owned Its Own Social Network? CA Backer Rebekah Mercer Admits She's A Co-Founder Of Parler
from the oh-look-at-that dept
Since the election, Parler has found renewed life among Trump supporters who feel that… [checks notes] being fact-checked, or limited for sharing debunked and dangerous conspiracy theories, is somehow an attack on… something (reality: it’s an attack on their delusions). And as Parler has gotten a new round of attention, some questions were raised about the funding behind it. After all, there was no big VC or known investor behind the company, so it wasn’t entirely clear how it was surviving. There was a crazy and incorrect conspiracy theory making the rounds that it was owned by Trumpists’ favorite bogeyman, George Soros. That’s not true, though it would have been completely hilarious if it had been.
Late last week, a Twitter thread made the rounds trying to tie Parler’s founding to Russia, though it involves a lot of conjecture and guilt by association, rather than facts.
But then, on Friday, the Wall Street Journal revealed that Parler was being funded by the Mercer family, the same family who funded a bunch of pro-Trump projects, including Cambridge Analytica, Breitbart, and, well, Trump’s own presidential campaign. Over the weekend, Rebekah Mercer took it up a further notch by claiming that she was the co-founder of the company along with CEO John Matze.
That’s a post from Rebekah Mercer saying:
John and I started Parler to provide a neutral platform for free speech, as our founders intended, and also to create a social media environment that would protect data privacy. Benjamin Franklin warned us: “Whoever would overthrow the liberty of a nation must begin by subduing the freeness of speech.” The ever increasing tyranny and hubris of our tech overlords demands that someone lead the fight against data mining, and for the protection of free speech online. That someone is Parler, a beacon to all who value their liberty, free speech, and personal privacy.
That’s… information that is brand new. Prior to this weekend, it was always claimed that the company was founded by Matze and a friend, Jared Thomson. In the past they had claimed that they founded it together, and had some support from a “small group of close friends and employees.” But they never mentioned the Mercers. That’s some “friends.”
Also, Rebekah Mercer’s claims are pretty ridiculous when you pick them apart. Despite claiming that they’re setting up “a neutral platform for free speech” we know perfectly well that’s utter garbage. As we’ve seen, they’ve got no problem banning people they dislike for ideological reasons. In fact, despite claiming on the website that they would only take down content if it violated the 1st Amendment, the company quickly realized that it would have to ban a lot more than that.
My favorite example of this was in a Forbes interview, where Matze admits that he’s sitting around banning trolls as quickly as he can:
“I hope you don't mind that I'm eating: I haven't eaten all day,” says Parler founder John Matze, devouring a late lunch. His social media app, a new favorite of President Trump's and other GOP leaders, has been under siege for the past few hours. “I'm sitting here like, banning trolls.”
By trolls he means teenage leftists who've flooded onto Parler….
As we said, they've sort of speed-run the content moderation learning curve that every website goes through when it claims to support free speech. They insist they'll allow anything. Then they start banning spammers. Then trolls. And that's the same damn thing Twitter does — and even here they're admitting that they're banning “leftist trolls.” In fact, over the past week or so we keep having people show up on our article from the summer about Parler banning users it doesn't like, screaming at us about how it's okay because they're just banning trolls. But that's the point. That's what Twitter is doing too. Except that Twitter isn't complaining about ideological trolls.
It’s only Parler that seems to be staking out an ideological claim, trying to ban “leftist” trolls after being cofounded by one of the most extreme partisans around, who laughably claims that Parler will be neutral.
The other incredibly ridiculous claim is that Parler is a response to “the ever increasing tyranny and hubris of our tech overlords” and that it is trying to “lead the fight against data mining.” Or that Parler is “a beacon to all those who value their liberty, free speech, and personal privacy.” We already covered how its views on free speech are not very different from other social media platforms, but the privacy claims are ridiculous as well.
Remember, Cambridge Analytica's entire claim to fame was collecting a shit ton of data on people by abusing the rules via an academic's personality quiz on Facebook, and then using that data to target political messages. Part of the reason Facebook got hit with a huge FTC fine was that it had let Cambridge Analytica extract a bunch of data it had promised not to.
Former Cambridge Analytica data expert Christopher Wylie noted this weekend that when he was there, the Mercers had always wanted their own social network in order to cut out the middle man and collect the data directly (and distribute propaganda directly). And now they’ve got that:
Parler is funded by the former owners of Cambridge Analytica. They always wanted to create a new social network to collect data and disseminate propaganda. And now they have. https://t.co/Lp9J1wR4Kp
— Christopher Wylie ?????????? (@chrisinsilico) November 14, 2020
Oh, and they've got access to a lot more private data than Facebook, Google or Twitter do. Hell, Matze practically brags to a Forbes reporter about having all sorts of private data on the “leftist trolls” he's trying to ban:
Matze knows the leftists' ages (“trolls,” as he calls them) because some verified their accounts, coughing up selfies and driver's licenses or passports (a set of highly unusual requirements for proving identity and registering for an online account).
Twitter doesn’t require you to upload your driver’s license. Also, if you want to have a verified account on Parler, you’re required to hand over your Social Security Number as well.
Basically, the Mercers are building a huge database of gullible idiots that they can now market propaganda to directly, cutting out the Facebook middleman. I'm curious how it was a huge problem when Facebook enabled it, but somehow fine when it all happens directly on the Mercers' own social network?
Filed Under: content moderation, free speech, john matze, mercer family, privacy, rebekah mercer, social media
Companies: cambridge analytica, parler
Parler Speedruns The Content Moderation Learning Curve; Goes From 'We Allow Everything' To 'We're The Good Censors' In Days
from the nice-one-guys dept
Over the last few weeks Parler has become the talk of Trumpist land, with promises of a social media site that “supports free speech.” The front page of the site insists that its content moderation is based on the standards of the FCC and the Supreme Court of the United States:
Of course, that's nonsensical. The FCC's regulations on speech do not apply to the internet, but only to broadcast television and radio over public spectrum. And, of course, the Supreme Court's well-established parameters for 1st Amendment protected speech have been laid out pretty directly over the last century or so, but the way this is written makes it sound like any content to be moderated on Parler will first be reviewed by the Supreme Court, and that's not how any of this works. Indeed, under Supreme Court precedent, very little speech is outside of the 1st Amendment these days, and we pointed out that Parler's terms of service did not reflect much understanding of the nuances of Supreme Court jurisprudence on the 1st Amendment. Rather, it appeared to demonstrate the level of knowledge of a 20-something tech bro skimming a Wikipedia article about exceptions to the 1st Amendment and just grabbing the section headings without bothering to read the details (or talk to a 1st Amendment lawyer).
Besides, as we pointed out, Parler’s terms of service allow them to ban users or content for any reason whatsoever — suggesting they didn’t have much conviction behind their “we only moderate based on the FCC and the Supreme Court.” Elsewhere, Parler’s CEO says that “if you can say it on the street of New York, you can say it on Parler.” Or this nugget of nonsense:
“They can make any claim they'd like, but they're going to be met with a lot of commenters, a lot of people who are going to disagree with them,” Matze said. “That's how society works, right? If you make a claim, people are going to come and fact check you organically.”
“You don't need an editorial board of experts to determine what's true and what's not,” he added. “The First Amendment was given to us so that we could all talk about issues, not have a single point of authority to determine what is correct and what's not.”
Ah.
So, anyway, on Monday, we noted that Parler was actually banning a ton of users for a wide variety of reasons — most of which could be labeled simply as “trolling Parler.” People were going on to Parler to see what it would take to get themselves banned. This is trolling. And Parler banned a bunch of them. That resulted in Parler’s CEO, John Matze, putting out a statement about other things that are banned on Parler:
If you can’t read that, here’s what he says, with some annotations:
To the people complaining on Twitter about being banned on Parler. Please pay heed:
Literally no one is “complaining” about being banned on Parler. They're mocking Parler for not living up to its pretend goal of only banning you for speech outside of 1st Amendment protections.
Here are the very few basic rules we need you to follow on Parler. If these are not to your liking, we apologize, but we will enforce:
Good for you. It’s important to recognize — just as we said — that any website that hosts 3rd party content will eventually have to come up with some plan to enforce some level of content moderation. You claimed you wouldn’t do that. Indeed, just days earlier you had said that people could “make any claim they’d like” and also that you were going to follow the Supreme Court’s limits on the 1st Amendment, not your own content moderation rules.
When you disagree with someone, posting pictures of your fecal matter in the comment section WILL NOT BE TOLERATED
So, a couple thoughts on this. First of all, I get that Matze is trying to be funny here, but this is not that. All it really does is suggest that he’s been owned by a bunch of trolls posting shit pics. Also, um, contractually, this seems to mean it’s okay to post pictures of other people’s fecal matter. Might want to have a lawyer review this shit, John.
Also, more importantly, I’ve spent a few hours digging through Supreme Court precedents regarding the 1st Amendment and I’ve failed to find the ruling that says that posting a picture of your shit violates the 1st Amendment. I mean, I get that it’s not nice. But, I was assured by Parler that it was ruled by Supreme Court precedent.
Your Username cannot be obscene like “CumDumpster”
Again, my litany of legal scholars failed to turn up the Supreme Court precedent on this.
No pornography. Doesn’t matter who, what, where, when, or in what realm.
Thing is, most pornography is very much protected under the 1st Amendment as interpreted by the Supreme Court of the United States. So again, we see that Parler’s rules are not as initially stated.
We will not allow you to spam other people trying to speak, with unrelated comments like “Fuck you” in every comment. It’s stupid. It’s pointless, Grow up.
I agree that it's stupid and that people should grow up, but this is the kind of thing that every other internet platform either recognizes from the beginning or learns really quickly: you're going to have some immature trolls show up, and you need to figure out how you want to deal with them. But those spammers' and trolls' speech is, again (I feel like I'm repeating myself), very much protected by the 1st Amendment.
You cannot threaten to kill anyone in the comment section. Sorry, never ever going to be okay.
Again, this is very context dependent, and, despite Matze saying that he won’t employ any of those annoying “experts” to determine what is and what is not allowed, figuring out what is a “true threat” under the Supreme Court’s precedent usually requires at least some experts who understand how true threats actually work.
But, honestly, this whole thing is reminiscent of any other website that hosts 3rd party content learning about content moderation. It’s just that in Parler’s case, because it called attention to the claims that it would not moderate, it’s having to go through the learning curve in record time. Remember, in the early days, Twitter called itself “the free speech wing of the free speech party.” And then it started filling with spam, abuse, and harassment. And terrorists. And things got tricky. Or, Facebook. As its first content policy person, Dave Willner, said at a conference a few years ago, Facebook’s original content moderation policy was “does it make us feel icky?” And if it did, it got taken down. But that doesn’t work.
And, of course, as these platforms became bigger and more powerful, the challenges became thornier and more and more complicated. A few years ago, Breitbart went on an extended rampage because Google had created an internal document struggling over the biggest issues in content moderation, in which it included a line about “the good censor”. For months afterwards, all of the Trumpist/Breitbart crew was screaming about “the good censor” and how tech believed its job was to censor conservatives (which is not what the document actually said). It was just an analysis of all the varied challenges in content moderation, and how to set up policies that are fair and reasonable.
Parler seems to be going through this growth process in the course of about a week. First it was “hey, free speech for everyone.” Then they suddenly start realizing that that doesn’t actually work — especially when people start trolling you. So, they start ad libbing. Right now, Parler’s policy seems more akin to Facebook’s “does it make us feel icky” standard, though tuned more towards its current base: “does it upset the Trumpists who are now celebrating the platform?” That’s a policy. It’s not “we only moderate based on the 1st Amendment.” And it’s not “free speech supportive.” It’s also not scalable.
So people get banned, and perhaps for no good reason. Here’s the single message that got Ed Bott banned:
I don’t see how that violates any of the so-far stated rules of Parler, but it’s violating one of the many unwritten rules: Parler doesn’t like it if you make fun of Parler. Which is that company’s choice, of course. I will note, just in passing, that that is significantly more restrictive than Twitter, which has tons of people mocking Twitter every damn day, and I’ve yet to hear of a single case of anyone being banned from Twitter for mocking Twitter. Honestly, if you were to compare the two sites, one could make a very strong case that Twitter is way more willing to host speech than Parler is, considering its current policies.
Should Parler ever actually grow bigger, it might follow the path of every other social media platform out there and institute more thorough rules, policies, and procedures regarding content moderation. But, of course, that makes it just like every other social media platform out there, though it might draw the lines differently. And, as I’ve said through all these posts (contrary to the attacks that have been launched at me the last few days), I’m very happy that Parler exists. I want there to be more competition to today’s social media sites. I want there to be more experimentation. And I’m truly hopeful that some of them succeed. That’s how innovation works.
I just don’t like it when they’re totally hypocritical. Indeed, it seems that Parler’s CEO Matze has now decided that rather than being supportive of the 1st Amendment, and rather than being supportive of what you can say on a NY street (say, in a protest of police brutality), anyone who supports Antifa is not allowed on Parler:
I’m not quite clear on what Parler policy (or 1st Amendment exception) “Antifa supporter” falls under, but hey, I don’t make the rules.
In the meantime, it’s been fun to watch Parler’s small group of rabid supporters try to continue to justify the site’s misleading claims. A bunch keep screaming at me the falsehood that Parler supports any 1st Amendment protected free speech. Others insist that “of course” that doesn’t apply to assholes (the famed “asshole corollary” to Supreme Court 1A doctrine, I guess). But, honestly, my favorite was this former Fox News reporter who now writes for Mediaite — who spent a couple days insisting that everyone making fun of Parler’s hypocrisy was somehow “mad” at being kicked off Parler — who decided to just straight up say that Parler is good because it does the right kind of banning. You see, Parler is the good censor:
And, thus, we’re right back to “the good censor.” Except that when the Google document used that phrase, it used it to discuss the impossible tradeoffs of moderation, not to embrace the role. Yet here, a Parler fan is embracing this role that is entirely opposite of the site’s public statements. Somehow, I get the feeling that the Breitbart/Trumpist crew isn’t going to react the same way to Parler becoming “the good censor” as it did to a Google document that just highlighted the impossible challenges of content moderation.
And, look, if Parler had come out and said that from the beginning, cool. That’s a choice. No one would be pointing out any hypocrisy if they just said that they wanted to create a safe space for aggrieved Trump fans. Instead, the site is trying to have it both ways: still claiming it’s supportive of 1st Amendment rules, while simultaneously ramping up its somewhat arbitrary banning process. Of course, what’s hilarious is that many of its supporters keep insisting that their real complaint with Twitter is that its content moderation is “arbitrary” or “unevenly applied.” The fact that the same thing is now true of Parler seems blocked from entering their brains by the great cosmic cognitive dissonance shield.
The only issue that people are pointing out is that Parler shouldn’t have been so cavalier in hanging its entire identity on “we don’t moderate, except as required by law.” And hopefully it’s a lesson to other platforms as well. Content moderation happens. You can’t avoid it. Pretending that you can brush it off with vague platitudes about free speech doesn’t work. And it’s better to understand that from the beginning rather than look as foolish as Parler just as everyone’s attention turns your way.
Filed Under: arbitrary, content moderation, content moderation at scale, john matze, rules, terms, the good censor
Companies: parler