New York Passes Ridiculous, Likely Unconstitutional, Bill Requiring Websites To Have ‘Hateful Conduct’ Policies
from the i-hate-this dept
Okay, so this bill is nowhere near as bad as the Texas and Florida bills, or a number of other bills out there about content moderation. But that doesn’t mean it isn’t still pretty damn bad. New York has passed its own variation of a content moderation bill, one that requires websites to have a “hateful conduct policy.”
The entire bill is pretty short and sweet, but the basics are what I said above. It starts with a very broad definition of “hateful conduct”:
“Hateful conduct” means the use of a social media network to vilify, humiliate, or incite violence against a group or a class of persons on the basis of race, color, religion, ethnicity, national origin, disability, sex, sexual orientation, gender identity or gender expression.
Okay, so first off, that’s pretty broad, but also most of that speech is (whether you like it or not) protected under the 1st Amendment. Requiring websites to put in place editorial policies regarding 1st Amendment protected speech raises… 1st Amendment concerns, even if what those policies actually say is left up to the websites.
Also, the drafters of this law are trying to pull a fast one on people. By calling it “hateful conduct” rather than “hateful speech,” they’re trying to avoid the 1st Amendment issue that is obviously a problem with this bill. You can regulate conduct but you can’t regulate speech. Here, the bill tries to pretend it’s regulating conduct, but when you read the definition, you realize it’s only talking about speech.
So, yes, in theory you can abide by this bill by putting in place a “hateful conduct” policy that says “we love hateful conduct, we allow it.” But, obviously, the intent of this bill is to use the requirements here to pressure companies into removing speech that is likely protected under the 1st Amendment. That’s… an issue.
Also, given that the definition is somewhat arbitrary, what’s to stop future legislators from expanding it? We’ve already seen efforts in many places to turn speaking negatively about the cops into “hate speech.”
Next, the law applies to “social media networks,” but here, again, the definition is incredibly broad:
“Social media network” means service providers, which, for profit-making purposes, operate internet platforms that are designed to enable users to share any content with other users or to make such content available to the public.
There appear to be no size qualifications whatsoever. So, one could certainly read this law to mean that Techdirt is a “social media network,” and that we may be required to create a “hateful conduct” policy for the site or face a fine. But the moderation that takes place in our comments is not policy-driven. It’s community-driven. So requiring a policy makes no sense at all.
And that’s also a big issue. Because if we’re required to create a policy, and we do so, but it’s our community that decides what’s appropriate, the community might not agree with the policy and might not follow it. What happens then? Are we subject to consumer protection fines for having a “misleading” policy?
At the very least, New York State pretty much just guaranteed that small sites like ours need to find and pay a lawyer in New York to tell us what we can do to avoid liability.
Do I want hateful conduct on the site? No. But we’ve created ways of dealing with it that don’t require a legally binding “hateful conduct” policy. And it’s absolutely ridiculous (and just totally disconnected from how the world works) to think that forcing websites to have a “hateful conduct” policy will suddenly make sites more aware of hateful conduct.
The whole thing is political theater, disconnected from the actual realities of running a website.
And that’s made especially clear by the next section:
A social media network that conducts business in the state, shall provide and maintain a clear and easily accessible mechanism for individual users to report incidents of hateful conduct. Such mechanism shall be clearly accessible to users of such network and easily accessed from both a social media networks’ application and website, and shall allow the social media network to provide a direct response to any individual reporting hateful conduct informing them of how the matter is being handled.
So, now every website has to build in a special reporting mechanism that might not match how the site actually works? We have a form people can fill out to alert us to things, but we also allow people to submit those reports anonymously. As far as I can tell, we might not be able to do that under this law, because we have to be able to “provide a direct response” to anyone who reports information to us. But how do we do that if they don’t give us their contact info? Do we need to build in a whole separate messaging tool?
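To make that tension concrete, here’s a minimal, purely hypothetical sketch in Python. The names (`ConductReport`, `acknowledge`, `contact_email`) are invented for illustration and have nothing to do with how Techdirt’s actual reporting form works; the point is just how an anonymous-friendly reporting flow runs into the law’s “direct response” requirement.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ConductReport:
    """A hypothetical report submitted through a site's 'hateful conduct' form."""
    content_url: str                      # what is being reported
    description: str                      # the reporter's complaint
    contact_email: Optional[str] = None   # None when the report is anonymous

def acknowledge(report: ConductReport) -> Optional[str]:
    """Send the 'direct response' the law appears to require, if we even can."""
    if report.contact_email is None:
        # An anonymous reporter left us no channel to respond on.
        return None
    return f"Acknowledgment sent to {report.contact_email}: your report is being reviewed."

# An anonymous report: something a site may happily accept today, but seemingly
# impossible to square with the statute's "direct response" language.
print(acknowledge(ConductReport("https://example.com/comments/123", "abusive comment")))
```

Short of refusing anonymous reports entirely, the only apparent fix is building exactly the kind of separate messaging tool the question above asks about.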
Each social media network shall have a clear and concise policy readily available and accessible on their website and application which includes how such social media network will respond and address the reports of incidents of hateful conduct on their platform.
Again, this makes an implicit, and false, assumption that every website that hosts user content works off of policies. That’s not how it always works.
The drafters of this bill then try to save it from constitutional problems by pinky swearing that nothing in it limits rights.
Nothing in this section shall be construed (a) as an obligation imposed on a social media network that adversely affects the rights or freedoms of any persons, such as exercising the right of free speech pursuant to the first amendment to the United States Constitution, or (b) to add to or increase liability of a social media network for anything other than the failure to provide a mechanism for a user to report to the social media network any incidents of hateful conduct on their platform and to receive a response on such report.
I mean, sure, great, but the only reason to have a law like this is as a weak attempt to force companies to take down 1st Amendment protected speech. But then you add in something like this to pretend that’s not what you’re really doing. Yeah, yeah, sure.
The enforcement of the law is at least somewhat limited. Only the Attorney General can enforce it… but remember, this is in a state where we already have an Attorney General conducting unconstitutional investigations into social media companies, as a blatant deflection from anyone looking too closely at the state’s own failings in stopping a mass shooting. The fines for violating the law are capped at $1,000 per day, which would be nothing for larger companies, but could really hurt smaller ones.
Even if you agree with the general sentiment that websites should do more to remove hateful speech, this bill should still concern you. Because if states like NY can require this of websites, other states can require other kinds of policies and other content moderation mandates as well.
Filed Under: content moderation, editorial discretion, free speech, hate speech, hate speech policy, hateful conduct, hateful conduct policy, new york