
Twitter Briefly Pretended To Take A Stand Against Hate, But Then Elon Admitted It Was All A Mistake (Or A Marketing Campaign?)

from the every-wrong-move dept

Back when I wrote the blog post detailing the basic content moderation learning curve speedrun, I actually thought that, like most sites that go through it, Elon might learn from it. Yet, it appears he still has trouble processing lessons from basically any of the mistakes he makes. Or he seems to be trying to leverage his own nonsense into helping his friends.

Yesterday was a weird one on Twitter, and that’s saying something given how weird and pointless the site has been of late.

On Thursday morning, the CEO of a nonsense-peddling website who doesn’t deserve to be named took to Twitter to whine that Twitter was suppressing “conservative” speech. Apparently, that website had worked out a deal with Twitter to host a very silly excuse for a documentary that serves no purpose other than to push forward a hateful, harassing culture war. The documentary came out last year and got exactly the kind of bad attention its creators wanted, which is why we see no reason to name it here either. If you don’t know what it is, trust me, it’s exactly the kind of nonsense you think it is, focused on driving mockery and hatred towards people based on their identity.

As part of Elon’s big new push to host video (which has resulted in lots of infringing movies uploaded to the site, and a surprising lack of lawsuits from Hollywood so far), Twitter and the nonsense-peddling website had agreed to post the full documentary to Twitter, with some unclear promises of promotion. However, after the team at Twitter viewed a screener of the movie, they told the nonsense peddler that while the film could still be hosted on Twitter, they would limit its reach while labeling it (accurately) as “hateful conduct.”

To some extent, this was bound to happen. Remember, so much of this mess today is because a bunch of Trumpist crybabies insisted that basic moderation was ideological “censorship” of conservatives, even though actual studies showed that Twitter went out of their way to promote conservatives over others, and to let them avoid punishment for breaking the rules. But the Trumpist crew must, at all times, play the snowflake victim. They have no actual policy principles, so all they have is “these other people are trying to oppress us” despite that not being even remotely true. Hell, the whole movie at issue here is more of that very same thing. The underlying premise is that because some people ask you to treat them with respect, “the libs” are trying to oppress you. It’s nonsense.

Either way, there was, just briefly, this moment where it looked like maybe Twitter staff recognized that posting such whiny, hate-inspiring content wasn’t good for business. After all, just last month, the company had updated its “Hateful Conduct policy,” which still includes rules against promoting “hostility and malice against others” based on a number of categories, including “gender identity.” And the policy makes it clear that this includes video content as well.

As such, it’s not hard to see how the film in question would violate that policy.

Of course, that was until the boss found out what was going on… and then made it clear he disagreed with the decision.

Elon’s quote is as follows:

This was a mistake by many people at Twitter. It is definitely allowed.

Whether or not you agree with using someone’s preferred pronouns, not doing so is at most rude and certainly breaks no laws.

I should note that I do personally use someone’s preferred pronouns, just as I use someone’s preferred name, simply from the standpoint of good manners.

However, for the same reason, I object to rude behavior, ostracism or threats of violence if the wrong pronoun or name is used.

While he’s correct that it does not violate any laws in the US (in some countries it might), Twitter’s written policy says nothing at all about content needing to break the law to get visibility filtering.

And, again, remember that Musk himself keeps talking about “freedom of speech, not freedom of reach,” and the company has said repeatedly that it will limit the visibility of content it believes violates its policy. And it appears that’s exactly what was happening here. The trust & safety team (what little is left of it) determined that this film violated the policies on promoting hostility and malice towards people for their gender identity and, in response, allowed the film to still be posted on Twitter, but with limited reach.
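To make the distinction concrete, here’s a minimal sketch in Python of what a “freedom of speech, not freedom of reach” decision looks like. All of the names here are hypothetical and purely illustrative; this is obviously not Twitter’s actual code, just the model the policy describes: flagged content stays up and stays visible to followers, but gets labeled, loses recommendations to non-followers, and loses adjacent advertising.

```python
# A minimal sketch (hypothetical names, not Twitter's actual code) of the
# "freedom of speech, not freedom of reach" model: content flagged under the
# Hateful Conduct policy stays hosted and visible to followers, but is
# labeled, excluded from non-follower recommendations, and stripped of ads.

from dataclasses import dataclass
from typing import Optional


@dataclass
class ReachDecision:
    removed: bool                     # never True in this model: the post stays up
    label: Optional[str]              # e.g. a "Hateful Conduct" warning label
    recommend_to_non_followers: bool  # eligible for recommendation beyond followers?
    ad_eligible: bool                 # can advertising appear next to it?


def apply_visibility_filtering(violates_hateful_conduct_policy: bool) -> ReachDecision:
    if violates_hateful_conduct_policy:
        # "Freedom of speech": hosted, and followers still see it in their feeds.
        # "Not freedom of reach": no recommendations to non-followers, no ads.
        return ReachDecision(
            removed=False,
            label="Hateful Conduct",
            recommend_to_non_followers=False,
            ad_eligible=False,
        )
    return ReachDecision(
        removed=False,
        label=None,
        recommend_to_non_followers=True,
        ad_eligible=True,
    )
```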

All of that is clearly within Twitter’s stated policies under Elon Musk, every one of which has been updated in the last two months on his watch.

So I’m not at all clear how Musk can claim that this was a “mistake.” Part of the problem is that he seems to think (incorrectly) that Twitter said the film wasn’t allowed at all, rather than just visibility filtered. But then… he basically says it shouldn’t be filtered either. When someone pointed out that a clip from the film posted to Twitter carried a visibility filtering label, saying the content may violate Twitter’s rules against Hateful Conduct, Elon said it was “being fixed.”

But then things got even odder. After first claiming it was a mistake and was “being fixed,” a little while later he seemed to double back again and admit that the original designation was correct, and that it would be “advertising-restricted” which would “impact reach to some degree.”

A little later, after another nonsense peddler whined that the film was still being visibility filtered, Elon said that “we’re updating the system tomorrow so that those who follow” the nonsense-peddling website “will see this in their feed, but it won’t be recommended to non-followers (nor will any advertising be associated with it).”

Which, uh, sounds exactly like what the nonsense-peddling website was told originally, and which Elon had originally said “was a mistake by many people at Twitter,” despite it (1) clearly following the policies that Elon himself had previously agreed on and (2) matching his claimed “freedom of speech, not freedom of reach” concept. So, which is it? Is it just Elon talking out of both sides of his mouth yet again?

Or… the alternative, which some people are suggesting: Elon thinks that pretending to “suppress” this film would drive more views of it. Which seems to be supported by him claiming that “The Streisand Effect on this will set an all-time record!”

As the person who coined the Streisand Effect in the first place, I can assure you, this is not how any of this works. But either way the whole thing is stupid (and also why we’re not naming the film or the website, because if this is all a stupid attempt to create a fake Streisand Effect, there’s no reason we should help).

And, either way, this morning Elon insisted that all the visibility filtering had been lifted, and that the only remaining limitation would be whether or not advertising would appear next to it.

He later tweeted a direct link to the film itself, promoting a tweet from the nonsense-peddling website insisting (little fucking snowflakes that they are) that it’s the film “they don’t want you to see.”

Basically, it’s a manufactured martyrdom controversy: Twitter briefly pretended to take a stand against encouraging hatred, only for Musk to double down and make clear that hate has a comfy, welcoming home on Twitter.

Of course, in the midst of all this, the news came out that Ella Irwin, who had been leading trust & safety since relatively early in the Elon Musk reign, and who had been on Twitter through Wednesday directly responding to trust & safety requests, had resigned and was no longer at the company. It’s unclear if her resignation had anything to do with this mess, but the timing does seem notable.

Still, given all of this, is it really any wonder that advertisers like Ben & Jerry’s have announced that they’re ending all paid advertising on the site in response to the proliferation of hate speech?

Filed Under: content moderation, elon musk, hateful conduct, visibility filtering, what is a woman
Companies: daily wire, twitter

NY’s ‘Hateful Conduct’ Social Media Law Blocked As Unconstitutional

from the some-good-news dept

Last summer, we wrote about New York’s law to require websites to have “hateful conduct” policies, noting that it was “ridiculous” and “likely unconstitutional.” The law was passed in the wake of the horrific Buffalo supermarket shooting, after which the state’s Governor and Attorney General sought to blame the internet, rather than the government’s own failings that contributed to the death toll.

While we noted the law wasn’t quite as bad as some other state laws, it was very problematic, in that it was pretty clearly trying to force websites to pull down content even if it was constitutionally protected speech. Some people argued back that since the law didn’t really require anything other than having a policy and some transparency, it would pass muster.

Thankfully, though, the first court to take a look has agreed with me, and granted an injunction barring the law from taking effect over constitutional concerns. The ruling is… really good, and really clear.

With the well-intentioned goal of providing the public with clear policies and mechanisms to facilitate reporting hate speech on social media, the New York State legislature enacted N.Y. Gen. Bus. Law § 394-ccc (“the Hateful Conduct Law” or “the law”). Yet, the First Amendment protects from state regulation speech that may be deemed “hateful” and generally disfavors regulation of speech based on its content unless it is narrowly tailored to serve a compelling governmental interest. The Hateful Conduct Law both compels social media networks to speak about the contours of hate speech and chills the constitutionally protected speech of social media users, without articulating a compelling governmental interest or ensuring that the law is narrowly tailored to that goal. In the face of our national commitment to the free expression of speech, even where that speech is offensive or repugnant, Plaintiffs’ motion for preliminary injunction, prohibiting enforcement of the law, is GRANTED.

The ruling then digs into the details, and notes that the requirement for a hateful conduct policy is compelling speech, which is a problem under the 1st Amendment:

Plaintiffs argue that the law regulates the content of their speech by compelling them to speak on an issue on which they would otherwise remain silent. (Pl.’s Mem., ECF No. 9 at 12; Tr., ECF No. 27 at 47:5–13.) Defendant argues that the law regulates conduct, as opposed to speech, because there is no requirement for how a social media network must respond to any complaints and because the law does not even require the network to specifically respond to a complaint of hateful content. (Def.’s Opp’n, ECF No. 21 at 9.) Instead, the law merely requires that the complaint mechanism allows the network to respond, if that is the social media network’s policy. (Tr., ECF No. 27 at 11:25–12:4.)

Defendant likens the Hateful Conduct Law to the regulation upheld in Restaurant Law Ctr. v. City of New York, which required fast-food employers to set up a mechanism for their employees to donate a portion of their paychecks to a non-profit of that employee’s choosing. 360 F. Supp. 3d 192 (S.D.N.Y. 2019). The court found that this did not constitute “speech”—nor did it constitute “compelled speech”—noting that the “ministerial act” of administering payroll deductions on behalf of their employees did not constitute speech for the employers. Id. at 214. As such, the court applied rational basis review and found that the regulation passed muster. Id. at 221.

However, those facts are not applicable here. The Hateful Conduct Law does not merely require that a social media network provide its users with a mechanism to complain about instances of “hateful conduct”. The law also requires that a social media network must make a “policy” available on its website which details how the network will respond to a complaint of hateful content. In other words, the law requires that social media networks devise and implement a written policy—i.e., speech.

Furthermore, the court notes that the law more or less demands a specific kind of “hateful conduct” policy.

Similarly, the Hateful Conduct Law requires a social media network to endorse the state’s message about “hateful conduct”. To be in compliance with the law’s requirements, a social media network must make a “concise policy readily available and accessible on their website and application” detailing how the network will “respond and address the reports of incidents of hateful conduct on their platform.” N.Y. Gen. Bus. Law § 394-ccc(3). Implicit in this language is that each social media network’s definition of “hateful conduct” must be at least as inclusive as the definition set forth in the law itself. In other words, the social media network’s policy must define “hateful conduct” as conduct which tends to “vilify, humiliate, or incite violence” “on the basis of race, color, religion, ethnicity, national origin, disability, sex, sexual orientation, gender identity or gender expression.” N.Y. Gen. Bus. Law § 394-ccc(1)(a). A social media network that devises its own definition of “hateful conduct” would risk being in violation of the law and thus subject to its enforcement provision.

It’s good to see a court recognize that compelled speech is a 1st Amendment problem.

There are other problems as well that will create real chilling effects on speech:

The potential chilling effect to social media users is exacerbated by the indefiniteness of some of the Hateful Conduct Law’s key terms. It is not clear what the terms like “vilify” and “humiliate” mean for the purposes of the law. While it is true that there are readily accessible dictionary definitions of those words, the law does not define what type of “conduct” or “speech” could be encapsulated by them. For example, could a post using the hashtag “BlackLivesMatter” or “BlueLivesMatter” be considered “hateful conduct” under the law? Likewise, could social media posts expressing anti-American views be considered conduct that humiliates or vilifies a group based on national origin? It is not clear from the face of the text, and thus the law does not put social media users on notice of what kinds of speech or content is now the target of government regulation.

Last year, we had Prof. Eric Goldman on our podcast to discuss how many lawmakers (and some courts…) were insisting that the “Zauderer test” meant that it was okay to mandate transparency on social media policies. Both the 11th Circuit’s and the 5th Circuit’s rulings on the Florida and Texas social media bills actually found the transparency requirements to be okay based on Zauderer. However, Goldman has argued (compellingly!) that both courts are simply misreading the Zauderer standard, which was limited to transparency around advertising, and only required transparency of “purely factual information” that was “uncontroversial” and for the purpose of preventing consumer deception.

All of that suggests that the Zauderer test should not and could not apply to laws mandating social media content moderation policy transparency.

Thankfully, it appears that this court in NY agrees, rejecting the attempts by the state to argue that because this is “commercial speech,” the law is fine. Not so, says the court:

The policy disclosure at issue here does not constitute commercial speech and conveys more than a “purely factual and uncontroversial” message. The law’s requirement that Plaintiffs publish their policies explaining how they intend to respond to hateful content on their websites does not simply “propose a commercial transaction”. Nor is the policy requirement “related solely to the economic interests of the speaker and its audience.” Rather, the policy requirement compels a social media network to speak about the range of protected speech it will allow its users to engage (or not engage) in. Plaintiffs operate websites that are directly engaged in the proliferation of speech—Volokh operates a legal blog, whereas Rumble and Locals operate platforms where users post video content and comment on other users’ videos.

Goldman wrote a detailed post on this ruling as well and notes the importance of how the court handles Zauderer:

The court’s categorical rejection of Zauderer highlights how Zauderer evangelists are using the precedent to normalize/justify censorship. This is why the Supreme Court needs to grant cert in the Florida and Texas cases. Ideally the Supreme Court will reiterate that Zauderer is a niche exception of limited applicability that does not include mandatory editorial transparency. Once Zauderer is off the table and legislatures are facing strict scrutiny for their mandated disclosures, I expect they will redirect their censorial impulses elsewhere.

Anyway, it’s good to see a clear rejection of this law. Hopefully we see more of that (and that this ruling stands on the inevitable appeal).

Filed Under: 1st amendment, compelled speech, eugene volokh, free speech, hateful conduct, new york, social media, transparency, zauderer
Companies: locals, rumble

New York Passes Ridiculous, Likely Unconstitutional, Bill Requiring Websites To Have ‘Hateful Conduct’ Policies

from the i-hate-this dept

Okay, so this bill is nowhere near as bad as the Texas and Florida bills, or a number of other bills out there about content moderation. But that doesn’t mean it isn’t still pretty damn bad. New York has passed its own variation of a content moderation bill, one that requires websites to have a “hateful conduct policy.”

The entire bill is pretty short and sweet, but the basics are what I said above. It has a very broadly defined hateful conduct definition:

“Hateful conduct” means the use of a social media network to vilify, humiliate, or incite violence against a group or a class of persons on the basis of race, color, religion, ethnicity, national origin, disability, sex, sexual orientation, gender identity or gender expression.

Okay, so first off, that’s pretty broad, but also most of that speech is (whether you like it or not) protected under the 1st Amendment. Requiring websites to put in place editorial policies regarding 1st Amendment protected speech raises… 1st Amendment concerns, even if it is left up to the websites what those policies are.

Also, the drafters of this law are trying to pull a fast one on people. By calling it “hateful conduct” rather than “hateful speech,” they’re trying to avoid the 1st Amendment issue that is obviously a problem with this bill. You can regulate conduct but you can’t regulate speech. Here, the bill tries to pretend it’s regulating conduct, but when you read the definition, you realize it’s only talking about speech.

So, yes, in theory you can abide by this bill by putting in place a “hateful conduct” policy that says “we love hateful conduct, we allow it.” But, obviously, the intent of this bill is to use the requirements here to pressure companies into removing speech that is likely protected under the 1st Amendment. That’s… an issue.

Also, given that the definition is somewhat arbitrary, what’s to stop future legislators from expanding it? We’ve already seen efforts in many places to make speaking negatively about the cops into “hate speech.”

Next, the law applies to “social media networks” but here, again, the definition is incredibly broad:

“Social media network” means service providers, which, for profit-making purposes, operate internet platforms that are designed to enable users to share any content with other users or to make such content available to the public.

There appear to be no size qualifications whatsoever. So, one could certainly read this law to mean that Techdirt is a “social media network” under the law, and we may be required to create a “hateful conduct” policy for the site or face a fine. But, the moderation that takes place in the comments is not policy driven. It’s community driven. So, requiring a policy makes no sense at all.

And that’s also a big issue. If we’re required to create a policy, and we do so, but it’s our community that decides what’s appropriate, then the community might not agree with the policy, and might not follow what’s in it. What happens then? Are we subject to consumer protection fines for having a “misleading” policy?

At the very least, New York State pretty much just guaranteed that small sites like ours need to find and pay a lawyer in New York to tell us what we can do to avoid liability.

Do I want hateful conduct on the site? No. But we’ve created ways of dealing with it that don’t require a legally binding “hateful conduct” policy. And it’s absolutely ridiculous (and just totally disconnected from how the world works) to think that forcing websites to have a “hateful conduct” policy will suddenly make sites more aware of hateful conduct.

The whole thing is political theater, disconnected from the actual realities of running a website.

And that’s made especially clear by the next section:

A social media network that conducts business in the state, shall provide and maintain a clear and easily accessible mechanism for individual users to report incidents of hateful conduct. Such mechanism shall be clearly accessible to users of such network and easily accessed from both a social media networks’ application and website, and shall allow the social media network to provide a direct response to any individual reporting hateful conduct informing them of how the matter is being handled.

So, now every website has to build in special reporting mechanisms that might not match how their site actually works? We let people fill out a form to alert us to things, but we also allow them to submit those reports anonymously. As far as I can tell, we might not be able to do that under this law, because we have to be able to “provide a direct response” to anyone who reports information to us. But how do we do that if they don’t give us their contact info? Do we need to build in a whole separate messaging tool?
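To make the compliance problem concrete, here’s a minimal sketch in Python of the report-and-respond mechanism the law seems to demand. Every name here is hypothetical, nothing is taken from the statute or any real site; it just shows where the “direct response” requirement breaks down when a report comes in anonymously:

```python
# A hypothetical sketch of the reporting mechanism the NY law seems to demand.
# All names are illustrative. The sticking point: the law's "direct response"
# requirement assumes every reporter leaves a contact channel, which anonymous
# reporting (deliberately) does not provide.

import itertools
from dataclasses import dataclass
from typing import Optional

_ticket_ids = itertools.count(1)


@dataclass
class HatefulConductReport:
    content_url: str
    description: str
    reporter_email: Optional[str] = None  # anonymous reports leave this empty


def save_for_review(report: HatefulConductReport) -> int:
    # Stub: a real site would persist the report for moderators to review.
    return next(_ticket_ids)


def send_direct_response(address: str, message: str) -> None:
    # Stub: stand-in for whatever mail or messaging system a site might use.
    print(f"to {address}: {message}")


def handle_report(report: HatefulConductReport) -> str:
    ticket = save_for_review(report)
    if report.reporter_email is None:
        # No contact info means no channel for the legally required "direct
        # response" -- the site would have to build a separate messaging tool
        # or stop accepting anonymous reports entirely.
        return f"Report #{ticket} received (anonymous; no response channel)."
    send_direct_response(
        report.reporter_email,
        f"Report #{ticket}: here is how the matter is being handled...",
    )
    return f"Report #{ticket} received; direct response sent."
```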

Each social media network shall have a clear and concise policy readily available and accessible on their website and application which includes how such social media network will respond and address the reports of incidents of hateful conduct on their platform.

Again, this makes an implicit, and false, assumption that every website that hosts user content works off of policies. That’s not how it always works.

The drafters of this bill then try to save it from constitutional problems by pinky swearing that nothing in it limits rights.

Nothing in this section shall be construed (a) as an obligation imposed on a social media network that adversely affects the rights or freedoms of any persons, such as exercising the right of free speech pursuant to the first amendment to the United States Constitution, or (b) to add to or increase liability of a social media network for anything other than the failure to provide a mechanism for a user to report to the social media network any incidents of hateful conduct on their platform and to receive a response on such report.

I mean, sure, great, but the only reason to have a law like this is as a weak attempt to force companies to take down 1st Amendment protected speech. But then you add in something like this to pretend that’s not what you’re really doing. Yeah, yeah, sure.

The enforcement of the law is at least somewhat limited. Only the Attorney General can enforce it… but remember, this is in a state where we already have an Attorney General conducting unconstitutional investigations into social media companies, as a blatant deflection from anyone looking too closely at the state’s own failings in stopping a mass shooting. The fines for violating the law are capped at $1,000 per day, which would be nothing for larger companies, but could really hurt smaller ones.

Even if you agree with the general sentiment that websites should do more to remove hateful speech, this bill should still concern you. Because if states like NY can require this of websites, other states can require other kinds of policies and content moderation mandates of their own.

Filed Under: content moderation, editorial discretion, free speech, hate speech, hate speech policy, hateful conduct, hateful conduct policy, new york