common carrier – Techdirt
Sir, This Is A Supreme Court (Not A Wendy’s)
from the thoughts-on-the-weight-of-youtube dept
On Monday, the Supreme Court heard the oral arguments over both Florida and Texas’ social media content moderation laws.
Even though the issues were similar, and the parties challenging both laws (NetChoice and CCIA) were the same (and had the same lawyer, Paul Clement, argue both cases), the laws are somewhat different, and so each was heard separately. The Florida case went first, and the Texas case went after. Roberts even jokingly pretended to be surprised to see Clement again, and Clement kicked off the Texas part by laughing and noting he wouldn’t pretend that they hadn’t all just been in the room debating the Florida law.
If you’d like to listen to the oral arguments, you can listen to Florida’s here and Texas’s here (or if you’d like to hear it while a video shows you who’s talking — since, ridiculously, the Supreme Court still refuses to allow video recordings) C-SPAN has you covered with the Florida arguments and the Texas arguments. You can also read the Florida transcript and the Texas transcript, both embedded below.
You can also read plenty of articles summarizing what happened. I think Professor Eric Goldman’s summary is the most useful (and succinct) of those I’ve seen so far:
Today, the Supreme Court heard oral arguments in First Amendment challenges against the Florida and Texas laws. The laws mostly baffled the justices due to the indeterminacy of who the law reaches and which functions are regulated (justices called the laws “sprawling,” “broad,” and “unspecific”). Because the laws are so complex and baroque, the justices aren’t sure if they can decide now that every aspect of the laws are unconstitutionally infirm. It seemed clear from the justices’ questions that at least some parts are, but the justices also struggled with functionalities at the margins (such as ridesharing or email) that may or may not be within the law’s scope. The court’s opinions will surely contain caveats and hypotheticals that will inspire regulators to make further attempts to censor the Internet, even if the court rules decisively for NetChoice on every issue.
Everyone always wants to ask for predictions after oral arguments, but as always, I think reading the tea leaves from the questions asked during oral arguments is an impossible task. I’ll say that I came out of it ever so slightly optimistic. As Goldman noted, enough of the Justices seemed to recognize that something here was deeply unconstitutional under the First Amendment, though they had some questions regarding how far that took them. And that could lead to a weird (and potentially problematic!) ruling that creates a mess.
To me, what the oral arguments turned up was that there was a clear road to getting this right and some Justices (Kavanaugh, mainly, but others too) seemed to get it. But there were a ton of potholes on that road, and I’m not sure if the lawyer for NetChoice/CCIA did enough to pave over all those potholes to stop at least five justices from tripping over one of them.
I won’t predict beyond that, though. We have a few months to go before we learn how the internet will fare.
However, I did want to call out a few of the arguments that came up that struck me as worth highlighting. First up, as stated above, Kavanaugh seemed to get this pretty clearly, which isn’t a surprise given that his ruling in Halleck five years ago caused me to write this headline: Supreme Court Signals Loud And Clear That Social Media Sites Are Not Public Forums That Have To Allow All Speech.
He came back to these issues multiple times, but here was his opening set of questions to Florida’s Solicitor General:
JUSTICE KAVANAUGH: Can I — can I ask you about a different precedent, about what we said in Buckley? And this picks up on the Chief Justice’s earlier comment about government intervention because of the power of the social media companies. And it seems like, in Buckley, in 1976, in a really important sentence in our First Amendment jurisprudence, we said that “the concept that the government may restrict the speech of some elements of our society in order to enhance the relative voice of others is wholly foreign to the First Amendment.” And that seems to be what you responded with to the Chief Justice.
And then, in Tornillo, the Court went on at great length as well about the power of then newspapers, and the Court said they recognized the argument about vast changes that place in a few hands the power to inform the American people and shape public opinion and that that had led to abuses of bias and manipulation. The Court accepted all that but still said that wasn’t good enough to allow some kind of government-mandated fairness right of reply or anything.
So how do you deal with those two principles?
MR. WHITAKER: Sure, Justice Kavanaugh. First of all, if — if you agree with me with our front-line position that what is being regulated here is conduct, not speech, I don’t think you get into interests and scrutiny and all that. I do think that the law advances the — the First Amendment interests that I mentioned, but I think the — the — the — that interest, the interest that our law is serving, if you did get to a point in the analysis that required consideration of those interests, our interests —
JUSTICE KAVANAUGH: Do you agree then, if speech is involved, that those cases mean that you lose?
MR. WHITAKER: No, I don’t agree with that, and — and the reason I don’t agree with that is because the interests that our law serve are — are legitimate, and it’s — it’s hard because different parts of the law serve different interests. But I think the one that — that sounds in the — in your concern that is most directly implicated would be the hosting requirement applicable to journalistic enterprises.
So one provision of the law says that the platforms cannot censor, shadow ban, or deplatform journalistic enterprises based on the content of their publication or broadcast. And that serves an interest very similar to the interest that this Court recognized as legitimate in Turner when Congress imposed on cable operators a must-carry obligation for broadcasters.
And — and just as a broadcaster — and what the Court said was there was not just a legitimate interest in promoting the free dissemination of ideas through broadcasting, but it was indeed a — a compelling interest, a highly compelling interest. And so I think the journalistic enterprise provision serves a — that very similar issue.
But there are also other interests that our law serves. For example, the consistency provision, Your — Your Honor, is really a consumer protection measure. It — it’s sort of orthogonal to all that. The consistency provision, which is really the heart of our law, just says to the — the platforms: Apply your content moderation policies consistently. Have whatever policies you want, but just apply them consistently.
JUSTICE KAVANAUGH: Could the government apply such a policy to publishing houses and printing presses and movie theaters about what they show? Bookstores, newsstands?
MR. WHITAKER: No, no —
JUSTICE KAVANAUGH: In other words, be consistent in what kinds of content you exclude? Could that be done?
MR. WHITAKER: I — I don’t think so, Your Honor.
JUSTICE KAVANAUGH: And why not?
MR. WHITAKER: Well — well, I think that there is — the consumer — here, the — the social media platforms, their terms of service, their content moderation policies are really part of the terms under which they are offering their service to users. I don’t think that that really — that that paradigm really fits in what Your Honor is — is talking about. So — but I — but, look, we agree, we certainly agree that a newspaper, a book — and a bookstore is engaging in inherently expressive conduct. And our whole point is that these social media platforms are not like those.
That seems like a pretty direct line of questioning and a very weak response from Florida’s SG Whitaker. The bit at the end where he tries to distinguish social media from a newspaper or a book store is just… kind of pathetic?
I also appreciated Justice Kagan highlighting the fact that when Elon Musk took over Twitter and changed the rules, some people liked it and some didn’t, which (as our own article by Corbin Barthold pointed out) completely undermines the states’ arguments:
JUSTICE KAGAN: Do you think so as to this — here, this is a real-world example. Twitter users one day woke up and found themselves to be X users and the content rules had changed and their feeds changed, and all of a sudden they were getting a different online newspaper, so to speak, in a metaphorical sense every morning, and a lot of Twitter users thought that was great, and a lot of Twitter users thought that was horrible because, in fact, there were different content judgments being made that was very much affecting the speech environment that they entered every time they opened their app.
Also great was Sotomayor, who, at the very end of the Florida argument (though some of her earlier questions struck me as slightly weird), went pretty strong on the key First Amendment issues:
JUSTICE SOTOMAYOR: I have a problem with laws like this that are so broad that they stifle speech just on their face, meaning I think that’s what the government’s been trying to say.
If you have a particular type of speech that you want to protect against or — or promote, it would be one thing to have that kind of law, but we have a company here, Discourse, who’s also a direct messaging app.
And there’s no question that your law covers them, but they tell us that their whole business model is to promote themselves to a particular message and groups of messages. So they’re not doing it indiscriminately. You’re basically saying to them, if they’re out there and they’re a common carrier, they can’t have this — this kind of business model.
Also fun was when Florida tried to rely on Rumsfeld v. FAIR and Roberts (who wrote that opinion) basically shot down the argument immediately, leading Florida’s SG to try to argue with the guy who wrote the decision that he was interpreting it incorrectly (though he admitted that was probably a mistake while he was doing it):
WHITAKER: But even more broadly than that, I mean, we know that mere — the — the fact that a hosting decision is ideologically charged and causes controversy can’t be the end of the game because I think Rumsfeld versus FAIR would have had to come out the other way then, because, in Rumsfeld, certainly, the law schools there felt very strongly that the military were being bigots and they didn’t want them on campus.
And yet this Court did not look to the ideological controversy surrounding those decisions. Instead, it looked at objectively whether the law schools were engaged in inherently expressive conduct.
CHIEF JUSTICE ROBERTS: Well, it looked at the fact that the schools were getting money from the federal government and the federal government thought: Well, if they’re going to take our money, they have to allow military recruiters on the campus. I don’t think it has much to do with the issues today at all.
MR. WHITAKER: Well, Mr. Chief Justice, it’s difficult for me to argue with you very much about what Rumsfeld versus FAIR means.
(Laughter.)
MR. WHITAKER: But let me just take a crack because, I mean, I — I think, as I — as I read your opinion for the Court, you didn’t rely, actually, on the funding aspect of the case to reach the conclusion that what was going on there was not First Amendment protected conduct. You were willing to spot them that the — the — the question would be exactly the same if it were a direct regulation of speech as opposed to a funding condition.
Now… for some of the weirder/crazier/more problematic bits.
There was, unfortunately but not surprisingly, some ridiculous commentary about Section 230. Justice Thomas continues to get the law exactly backwards.
JUSTICE THOMAS: I’ve been fortunate or unfortunate to have been here for most of the development of the Internet.
(Laughter.)
JUSTICE THOMAS: And the argument under Section 230 has been that you’re merely a conduit, which it — exact — that was the case back in the ’90s and perhaps the early 2000s. Now you’re saying that you are engaged in editorial discretion and expressive conduct. Doesn’t that seem to undermine your Section 230 arguments?
Of course, that’s literally exactly backwards. The whole point of 230 was that websites and web forums were not passive conduits. If they were, they wouldn’t need Section 230’s protections from liability when they did moderate. The whole reason that 230 was written in the first place was because internet forums realized they needed to moderate those who violated their rules, and that would be impossible under a Stratton Oakmont v. Prodigy result, where moderating at all made you liable for anything you left up.
Thankfully, the lawyer for the platforms responded correctly:
MR. CLEMENT: With respect, Justice Thomas, I mean, obviously, you were here for all of it. I wasn’t here for all of it. But my understanding is that my clients have consistently taken the position that they are not mere conduits. And Congress, in passing Section 230, looked at some common law cases that basically said, well, if you’re just a pure conduit, that means that you’re free from liability. But, if you start becoming a publisher, by keeping some bad conduct out — content out, then you no longer have that common law liability protection.
And as I understand 230, the whole point of it was to encourage websites and other regulated parties to essentially exercise editorial discretion to keep some of that bad stuff out of there, and as a result, what Congress said is — they didn’t say: And you’re still a conduit if you do that. No, it said: You shouldn’t be treated as a publisher, because Congress recognized that what my clients were doing would, in another context, look like publishing, which would come with the kind of traditional defamation liability, and they wanted to protect them against that precisely to encourage them to take down some of the bad material that, if these laws go into effect, we’d be forced to convey on our websites.
Ridiculously, a while later on, Thomas basically went right back to the same question:
JUSTICE THOMAS: Could you again explain to me why, if you win here, it does not present a Section 230 problem for you?
There was a lot more back and forth here and it’s not at all clear to me Thomas understands Section 230 even the tiniest amount. Which is… problematic. Especially as he was briefed on it quite a bit during last year’s Gonzalez case (and he even seemed to suggest he understood some of it in the Taamneh ruling, which he wrote). Did he just… forget all of that?
Gorsuch also seemed to get weird on 230 at times, including suggesting (incorrectly) that the argument the platforms were making was inconsistent with their argument on 230.
JUSTICE GORSUCH: — if they’re not — if the — if the expression of the user is theirs because they curate it, where does that leave Section 230? Because the protection there, as I understood it — and Justice Thomas was making this point — was that Section 230 says we’re not going to treat you as publishers so long as you are not — it’s not your communication in whole or in part is what the definition says. And if it’s now their communication in part, do they lose their 230 protections?
He asked that question to the U.S. Solicitor General, Elizabeth Prelogar (who was very good throughout), who was there to argue mostly against the states, but for a narrower ruling than the companies wanted. Her response was to (politely) explain to Gorsuch why he was mixing up different kinds of things. In the follow-up exchange, Gorsuch made a complete nonsense comment that 230 turns companies into common carriers. Again, it does no such thing.
GENERAL PRELOGAR: No, because I think it’s important to distinguish between two different types of speech. There are the individual user posts on these platforms, and that’s what 230 says that the platforms can’t be held liable for.
The kind of speech that we think is protected here under the First Amendment is not each individual post of the user but, instead, the way that the platform shapes that expression by compiling it, exercising this kind of filtering function, choosing to exclude none of those things above —
JUSTICE GORSUCH: Let me interrupt you there, I’m sorry, but — but I understand it’s not their communication in whole, but it’s — why isn’t it their communication in part if it — if it’s part of this larger mosaic of editorialized discretion and the whole feel of the website?
GENERAL PRELOGAR: Well, I don’t think that there is any basic incompatibility with immunizing them as a matter of Congress’s statutory choices and recognizing that they retain First Amendment protection —
JUSTICE GORSUCH: Isn’t the whole premise — I’m sorry —
GENERAL PRELOGAR: — for the First Amendment —
JUSTICE GORSUCH: — the whole premise of Section 230 that they are common carriers, that — that they’re not going to be held liable in part because it isn’t their expression, they are a conduit for somebody else?
GENERAL PRELOGAR: No, not at all, Justice Gorsuch. I think, you know, to the extent that the states are trying to argue that Section 230 reflects the judgment that the platforms aren’t publishing and speaking here, there would have been no need to enact Section 230 if that were the case.
Congress specifically recognized the platforms are creating a speech product. They are literally, factually publishers. And Congress wanted to grant them immunity. And it was for the purpose of encouraging this kind of editorial discretion. That’s the whole point of the good samaritan blocking provision, 230(c)(2)(A).
There were two more weird moments that are getting a fair bit of attention. The first was, I presume, the very first “Sir, this is a Wendy’s” moment in Supreme Court history. Except… it makes no sense. It wasn’t used (as some imagine) as a hilarious rebuttal to an off-topic rant. It was in a weird, slightly off-topic rant by Texas’ Solicitor General in response to Kavanaugh asking him how the restriction against “viewpoint discrimination” would apply to terrorist content.
Texas’s SG (for fairly obvious reasons) had no good answer and just started rambling, somewhat aimlessly, about terrorism, then about Orwell (who came up a few times — though here, he doesn’t really discuss Orwell beyond naming him), then saying he originally felt the opposite of how he feels now about this very case, then suddenly rambling about infrastructure, then back to Orwell, and then… just throwing in a reference to the “Sir, this is a Wendy’s” meme, seemingly expecting the Justices to know what it was. Reports from in the room tell me that the Justices stared blankly at the reference (apparently they’re not as online as the rest of us), until finally he was rescued by Justice Jackson asking a different question.
I’m posting the whole thing for the sheer cringe of it all:
JUSTICE KAVANAUGH: So when — that last clause, they can’t do it on a viewpoint basis, how does that work with terrorist speech?
MR. NIELSON: Sure. So it’s hard to say with terrorist speech because you’d have to pick the category, but assume that it is, you know, Al-Qaeda. You can’t — you could — you can’t very well say you can have the, you know, anti-Al-Qaeda but not the pro-Al-Qaeda. If you just want to say no one’s talking about Al Qaeda here, they can turn that off.
And then the last point, this is at the very end of the game, so you’ve gone through all of those things, all you have left are voluntary people wanting to talk to each other. And, I mean, people say horrible things on the telephone, and that’s — and I don’t think we’ve ever thought, well, you know what, we’re going to turn — we’re going to turn that off because we don’t want the telephone providers to be able to say — have that sort of right to — to censor.
If I may, I mean, with some hesitance, I want to talk about Orwell a little bit, and I say that with some hesitance. But my reaction coming to this case was very similar to yours. I looked at this and I’m like: Wait a minute. These are companies. They have their own rights. We don’t generally think of censorship as something from the — from private people. That’s the government.
Here’s how I came around on this. Maybe it’ll persuade you. Maybe it won’t. I came around on this to say this is something further up the food chain than that ordinary level of political discourse. This is just the type of infrastructure necessary to have any kind of discourse at all. That’s why I keep going back to the telegraph.
This isn’t, you know, the — the level of discourse where they’re making the content decisions that we make our decisions based on. This is the infrastructure that we need to have any sort of discourse at all.
So, if we say we want to have that type of infrastructure not have, you know, censorship on it, that would mean we would have to have a rapid — a massively increased federal government because it would have to control all the infrastructure. And then we would have, okay, now you can’t discriminate based on this kind of infrastructure of how things work.
That’s not — I mean, that is Orwell, right? So, for me, the answer is, for these kind of things like telephones or telegraphs or voluntary communications on the next big telephone/telegraph machine, those kind of private communications have to be able to exist somewhere. You know, the expression like, you know, sir, this is a Wendy’s. There has to be some sort of way where we can allow people to communicate —
JUSTICE JACKSON: And is that just because of the — the modern public square?
I’ve read this so many times now, and I have no idea how we got from “how does that work with terrorist speech” to “sir this is a Wendy’s.” The leading theory I’ve seen online is that the SG had a bet going with some friends that he could slip that line into an argument. But I’d like to believe that’s too stupid to be true.
It’s possible he was using it as an example to say that people want places to sound off and to express themselves, as epitomized by that meme. That’s the most generous version of it I can come up with.
But… it’s silly even in that context. Because having governments like Texas force all websites to host basically all content doesn’t help with the “sir, this is a Wendy’s” situation, as it now makes every site a place where everyone can filibuster nonsense all the time, and the sites can’t do anything about it.
But, still, it’s kinda hilarious that this meme has made it to SCOTUS.
The other moment that’s getting a lot of attention for being preposterously stupid is Alito asking how much YouTube would weigh if it were a newspaper.
JUSTICE ALITO: I mean, if your — if — let’s say YouTube were a newspaper, how much would it weigh?
And, look, it is a dumb question, though not for the reasons most people think. A key part of the debate (as we’ve discussed) is which precedent is closest to this case, with a focus being on whether social media is more like a shopping mall (or a telegraph provider) or a newspaper. Because different cases could apply to either. And if (the argument goes) social media is more like a newspaper, then Miami Herald v. Tornillo applies, and the platforms win the case (easily).
Alito has made it quite clear he wants the states to win and wants the platforms to lose. He made little attempt to hide this during the arguments. So when it was his turn to talk, he wanted to attack the idea that social media was more like a newspaper. So here’s the fuller context:
JUSTICE ALITO: So you say this is just like a newspaper, basically. It’s like the Miami Herald. And the states say no, this is like Western Union. It’s like a telegraph company.
And I — I think — I look at this and I say it’s really not like either of those. It’s worlds away from — from both of those. It’s nothing like a newspaper. A newspaper has space limitations, no matter how powerful it is. It doesn’t necessarily have the same power as — as some of your clients. But put that aside.
Newspapers overtly send messages. They typically have an editorial. They may have an editorial 365 days a year or more than one. But that’s not the situation with even the most prominent of your clients. So I don’t know how we could decide this case by saying — by jumping to one side or the other of this case law.
MR. CLEMENT: Well, Justice Alito, let me offer two thoughts. One, this isn’t the first time you’re wrestling with the Internet. You wrestled with it in Reno. You wrestled with it last term in 303 Creative. And I think the gist of those cases is this is more like the newspaper or the parade organizer than it is like a common carrier.
And then as to the cases, whether you think that this is different from a newspaper, I mean, the arguments that you’re pointing to say this is different are the arguments that those cases wrestled with and said didn’t matter.
So I know you know this, but in Tornillo, it — you know, there was all this language about it being a monopolist, and that was in the context of a local political election where if you couldn’t get into the Miami Herald, like, where else were you going to go? And yet, this Court said that didn’t matter. And the — the — also in Tornillo this Court said, yes, space constraints, there are some, but our decision doesn’t turn on that. And then in Hurley, there’s a lot of language in the — in the Court’s opinion that says, you know, this is not like much of a message and they let some people show up even if they get there, like, the day of, and the only thing they’re doing is, like, excluding this group.
But, of course, the exclusion was the message that they were sending, and it’s the message the state was trying to prohibit. And that’s kind of the same thing here, which is —
JUSTICE ALITO: I mean, if your — if — let’s say YouTube were a newspaper, how much would it weigh?
(Laughter.)
MR. CLEMENT: Well, I mean, it would — it would — it would weigh an enormous amount, which is why, in order to make it useful, there’s actually more editorial discretion going on in these cases than any of — other case that you’ve had before you.
Because, you know, people tend to focus on the — on the users that get knocked off entirely and end up on the cutting room floor, but both these statutes also regulate the way that these social websites — they — they sort of get you down to something that’s actually usable to an individual user.
And, in fact, if you tried to treat these entities like a true common carrier, so first in, first out, just order of, you’d open up one of these websites and it would be gobbledygook. Half of the stuff wouldn’t even be in a language you understood. And even if you controlled for that, you’d get all this garbage you didn’t want.
So, in context, it doesn’t seem quite as “holy shit, was Alito high?” as some people are making it out to be. He’s trying to highlight why social media is different from newspapers, and the dumb idea that sprang to mind was to point out how much larger social media is than any newspaper.
But it’s still dumb. Because it actively works against the point he thinks he’s making: that we can’t treat social media like a newspaper because it doesn’t have the space limitations of a newspaper. But that wasn’t the reasoning in Tornillo. And, as both Justices Sotomayor and Barrett pointed out during the Florida arguments, whether or not there are space limitations doesn’t much matter because there are “constraints of attention.” Barrett summed it up nicely:
I mean, Justice Sotomayor pointed out that even though there may not be physical space constraints, there are the — the constraints of attention, right? They have to present information to a consumer in some sort of organized way and that there’s a limited enough amount of information that the — the consumer can absorb it.
And don’t all methods of organization reflect some kind of judgment? I mean, could you tell — could Florida enact a law telling bookstores that they have to put everything out by alphabetical order and that they can’t organize or put some things closer to the front of the store that they think, you know, their customers will want to buy?
Even if he thought he was making a point that YouTube is vastly larger than a newspaper, it doesn’t help his underlying argument, because… so what? The size of the venue doesn’t much matter. There’s still editorial discretion happening.
So, rest assured, folks who saw that quote and thought Alito had completely lost his marbles. No such luck. It was just stupid in the more usual sense of Alito misunderstanding the law, not the nature of bits vs. atoms in the storage of information.
Filed Under: brett kavanaugh, clarence thomas, common carrier, content moderation, elena kagan, florida, john roberts, neil gorsuch, paul clement, public accommodation, sam alito, section 230, sonia sotomayor, supreme court, texas
Companies: ccia, netchoice
Ben Franklin Was All About Content Moderation
from the its-your-own-damn-printing-press dept
Well, here’s a weird one. I was going through the various amicus briefs filed in support of the ability of the governments of Texas and Florida to tell websites that they must host speech that violates their rules, and, damn, there are some ridiculous ones (more posts coming on that front soon…). However, one of them — which I’m not even going to bother linking to — had this bizarre passage trying to argue that founding father Ben Franklin supported “common carrier” laws for owners of printing presses.
This… struck me as very odd. I did a search on the quote in the brief, and found it was also quoted on the website of a sketchy, nonsense-peddling think tank making the same argument. But the whole thing sounded quite silly, so I decided to dig into the full quote (not the partial, extracted version) used by nonsense peddlers trying to pretend that social media can be a common carrier.
If you want to understand all the many, many reasons why it makes no sense to call websites common carriers, I covered that a while ago. The shortest version of the argument, though, is that throughout the history of common carriage, it’s always been about temporary service, most of which is “carrying” something (people, cargo, data) from point A to point B, and then being done with it. Even public access laws are about letting people in for a short period of time.
But, with websites and social media, there’s a hosting aspect — which goes on in perpetuity. And that makes no sense at all for a “common carrier.” You have to allow them to host something… forever? What?
Anyway, let’s get back to Ben Franklin. The quote that’s being passed around, misleadingly, is from his Autobiography, which is very much in the public domain these days. And, read in context, it sure sounds like someone who supports the rights of private property owners to refuse to promote and distribute works of people they feel are up to no good:
In the conduct of my newspaper, I carefully excluded all libelling and personal abuse, which is of late years become so disgraceful to our country. Whenever I was solicited to insert anything of that kind, and the writers pleaded, as they generally did, the liberty of the press, and that a newspaper was like a stage-coach, in which any one who would pay had a right to a place, my answer was, that I would print the piece separately if desired, and the author might have as many copies as he pleased to distribute himself, but that I would not take upon me to spread his detraction; and that, having contracted with my subscribers to furnish them with what might be either useful or entertaining, I could not fill their papers with private altercation, in which they had no concern, without doing them manifest injustice. Now, many of our printers make no scruple of gratifying the malice of individuals by false accusations of the fairest characters among ourselves, augmenting animosity even to the producing of duels; and are, moreover, so indiscreet as to print scurrilous reflections on the government of neighboring states, and even on the conduct of our best national allies, which may be attended with the most pernicious consequences. These things I mention as a caution to young printers, and that they may be encouraged not to pollute their presses and disgrace their profession by such infamous practices, but refuse steadily, as they may see by my example that such a course of conduct will not, on the whole, be injurious to their interests.
The ridiculous amicus brief argues that this is Ben Franklin endorsing the idea that printers are “common carriers” who should be expected to print whatever people want. But it’s hard to read the full quote that way at all.
Franklin is clearly stating that printers have no obligation to print whatever customers want, and certainly not to put it next to other content they do support. In fact, he’s suggesting that they should refuse to do so, and actually seems to suggest that “augmenting animosity” through the use of their printing presses is not a noble pursuit.
Indeed, this quote seems to make the very point that websites are making in this case: that when it’s your printing press you get to decide what you print, what you distribute, and how. The states’ argument is literally the reverse of this. They think they can force the printing presses (websites) to not just print whatever speech the government wants them to print, but also to host it in perpetuity.
And that’s true even if (or, in the case of Texas and Florida, especially if) the intent of that speech is “personal abuse” and “augmenting animosity.”
But any actual reading of Franklin in context suggests he wishes printers chose “not to pollute their presses and disgrace their profession by such infamous practices.” Instead, he suggests they moderate — that they “refuse steadily.”
It sure sounds like Ben Franklin would support the right of private websites to do what they want with their own printing presses, and never to be forced by law into such “infamous practices.”
Filed Under: 1st amendment, common carriage, common carrier, florida, printing press, texas
In Internet Speech Cases, SCOTUS Should Stick Up For Reno v. ACLU
from the scotus-should-remember-it-protected-free-speech-online dept
It was by no means certain that the internet would enjoy full First Amendment protection. The radio is not shielded from the government in that way. Nor is broadcast television. Both Congress and the President supported placing online speech under some degree of state control. In Reno v. ACLU (1997), however, the Supreme Court could find “no basis for qualifying the level of First Amendment scrutiny that should be applied to this [new] medium.” Liberty won out.
A quarter-century later, the free internet faces an array of new threats. Sometimes the danger is announced openly and without regret. Discussing his intention to sign a law restricting minors’ access to social media, the governor of Utah recently declared Reno “wrongly decided.” There are “new facts,” he tells us. He earns points for candor. Most opponents of internet freedom attempt to hide what they’re doing. Some of these aspiring regulators even try to snatch the banner of free speech for themselves. But they all want, by hook or by crook, to curtail or evade Reno.
Many states chafe at the restraints Reno places on the government. A few have already arrived at the Supreme Court. These states endorse legal theories that would drastically shrink _Reno_’s scope. But they do not want Reno narrowed in a neutral, even-handed fashion. For the states in question stand on opposite sides of our nation’s culture war. Each side’s message is this: Limit Reno for thee, but not for me. Each side wants the Justices to revoke _Reno_’s protection for the other side.
Yet both sides appeal to the same legal principles. Each side makes arguments in its own litigation that, if accepted in the other side’s litigation, would blow up in its face. Each side makes arguments that, if given full play, could lead to _Reno_’s being destroyed for everyone. The two sides risk pulling the temple down on our heads.
The cases in question are 303 Creative v. Elenis, Moody v. NetChoice, and NetChoice v. Paxton. In 303 Creative, Colorado seeks to compel a Christian website designer to express a message, in the form of a website for a gay wedding, to which she objects. The U.S. Court of Appeals for the Tenth Circuit ruled for the state. The Supreme Court granted review and heard oral argument last December. In Moody and Paxton, states seek to force large social media platforms to spread messages that those platforms believe are dangerous, harmful, or abhorrent. In Moody, the Eleventh Circuit ruled for the platforms, blocking a Florida law called SB7072. In Paxton, the Fifth Circuit ruled against them, upholding a Texas law, HB20, that requires “viewpoint neutral” content moderation (i.e., if you carry Holocaust documentaries, you must carry Holocaust deniers). Petitions for certiorari have been filed in both cases, and the Court is almost certain to grant at least one of them.
The driving forces here are Colorado (supported by other blue states and the federal government) and Florida and Texas (supported by other red states). Still, each side has found able champions on the bench. Judges figure prominently in these legal debates, as we will see. Yet the Supreme Court now has the full picture. With both 303 Creative and Moody/Paxton before them, a majority of the Justices might take a different view. They might see that the best course is to defend the rule and spirit of Reno against all comers.
How is Reno being challenged? How do the attacks on it match up in 303 Creative, Moody, and Paxton? Let’s dig in.
Common Carrier / Place of Public Accommodation
Two years back, Justice Thomas, writing for himself, suggested that “some digital platforms” are “akin to common carriers or places of public accommodation.” If that’s right, he surmised, then “laws that restrict” those platforms’ “right to exclude” might satisfy the First Amendment. The state might lawfully force such entities to disseminate speech against their will.
Upholding HB20 in Paxton, Judge Oldham took the next step. Texas claimed that large social media platforms can be treated like common carriers. Oldham agreed. He concluded—in dicta; no other judge joined this part of his opinion—that HB20’s viewpoint neutrality rule “falls comfortably within the historical ambit of permissible common carrier regulation.”
The idea of common carriage has, Oldham wrote, “been part of Anglo-American law for more than half a millennium.” He explored the concept’s history at length, following it on a “long technological march” from “ferries and bakeries,” to “steamboats and stagecoaches,” to “telegraph and telephone lines,” and finally—in his mind—to “social media platforms.” He argued for “the centrality of the Platforms to public discourse.” He grappled with “modern precedents.” He engaged with the “counterarguments” of “the Platforms and their amici.” No one can dispute his rigor.
The Eleventh Circuit, speaking through Judge Newsom, ruled in Moody that the platforms are not like common carriers. Newsom, too, was careful and thorough. But in any event, how much of this debate is genuinely relevant? Judge Southwick’s answer, in his dissent in Paxton, was short and to the point. “Few of the cases cited” by Judge Oldham, Southwick wrote, “concern the intersection of common carrier obligations and First Amendment rights,” and the ones that do “reinforce the idea [that] common carriers retain their First Amendment protections of their own speech.” To show that a legal principle can trump a constitutional right, in other words, it does not suffice to show that the principle has an impressive pedigree. One must establish that the principle has in fact been used to trump the constitutional right.
Here is where things get interesting. This is precisely the approach that Lorie Smith, the Christian website designer, urges the Supreme Court to deploy in 303 Creative. Colorado says that Smith must make websites for gay weddings because her business is a place of public accommodation. What must Colorado do to connect its premise and its conclusion? It must prove, Smith contends, that “public-accommodation laws historically compelled speech, not that they merely existed.” At oral argument, Justice Thomas picked up this line of thought. Is there a “long tradition,” he asked (appearing to depart from the stance he teased with two years ago), “of public accommodations laws applying to speech . . . or expressive conduct?”
Where are the cases showing that, by declaring an entity a common carrier, the state can strip that entity of its right to decide what speech it will (or will not) disseminate to the public at large? Judge Oldham cited none. Where are the cases showing that, by declaring an entity a place of public accommodation, the state can force that entity to create expressive products against its will? In response to Justice Thomas’s question, Colorado’s counsel conceded that “the historical record is sparse.”
Would conservatives be glad to see Smith forced to design websites that go against her religious convictions? Would liberals rejoice at seeing social media platforms forced to host and amplify hate speech? If the answer to these questions is no, perhaps neither side should start down this path. Perhaps neither should be trying to use common carrier or public accommodation rules to evade Reno and control the internet.
Market Power
As support for the common carrier argument, Judge Oldham asserted the major social media platforms’ market power. “Each Platform has an effective monopoly,” he insisted, “over its particular niche of online discourse.” In his view, “sports ‘influencers’ need access to Instagram,” “political pundits need access to Twitter,” and so on.
There are a number of problems with this claim. To begin with, an entity that wins itself market power does not lose its right to free speech. In Miami Herald v. Tornillo (1974), it was argued that “debate on public issues” was at that time “open only to a monopoly in control of the press.” The Court did not disagree. Nonetheless, it unanimously struck down a state law requiring newspapers to let political candidates reply to negative coverage. “Press responsibility is not mandated by the Constitution,” the Justices explained, “and like many other virtues it cannot be legislated.”
Even if market power mattered, it is far from obvious that platforms have “effective monopolies,” whether over “niches” or otherwise. A month after the Fifth Circuit issued Paxton, Elon Musk purchased Twitter, causing more than a few commentators to ditch the service for Mastodon. Influencers—and, for that matter, political pundits—can gain a large following on Snapchat, TikTok (for now), YouTube, or Rumble. More broadly, the overlap among social media products is greater than might appear at first blush. Suing to break up Facebook and Instagram, for instance, the Federal Trade Commission has asserted that the products’ common parent, Meta, dominates a market for “personal social networking services.” The only large competitor in this market, the agency alleges, is Snapchat. Yet the agency has struggled to explain what makes this market distinct. These days, in fact, Meta is scrambling to make its products more like TikTok.
So the worst thing about the “effective monopol[ies]” claim is that it bounces off the surface. The typical antitrust case is a complex dispute about costs and outputs, profit margins and elasticities, and much else besides. Judge Oldham offered a bare assertion. A just-so story. A useful belief, if one’s goal is to let states commandeer the biggest social media platforms.
No one would cry for those platforms if the judiciary were to overestimate the size and stability of their market “niches.” Indeed, many will smile at the prospect. But be careful what you wish for.
Recall that the Tenth Circuit ruled against Lorie Smith in 303 Creative. Smith’s “custom and unique services,” the court wrote, “are inherently not fungible.” They are, “by definition, unavailable elsewhere.” Smith is therefore a market of one, the court thought, and that is grounds for forcing her to speak. Outlandish? Probably so. Then again, Colorado warns that if Smith wins, belief-based restrictions on service might proliferate, leading to market foreclosure in the aggregate. And that argument is not ridiculous; it is merely speculative and weak—not unlike the “effective monopol[ies]” argument in Paxton.
Anyone tempted to use loose pronouncements of market power as a weapon of (culture) war should first picture how the tactic might be misused in a variety of other cases. One careless claim of market power begets another.
Speech vs. Conduct
On the way to upholding HB20, the Fifth Circuit relied heavily on Rumsfeld v. FAIR (2006). A federal statute required law schools to host military recruiters on pain of losing government funding. FAIR upheld this mandate. “A law school’s decision to allow recruiters on campus,” the Court reasoned, “is not inherently expressive.” The statute regulated “conduct, not speech.” It affected “what law schools must _do_—afford equal access to military recruiters—not what they may or may not say.”
The Fifth Circuit used FAIR as a guide. The “targeted denial of access to only military recruiters,” the court said, could not be distinguished from the “viewpoint-based” content moderation “regulated by HB 20.” In both cases, the court concluded, the regulated activity is “conduct” that lacks “inherent expressiveness.” Therefore social media platforms have no First Amendment right to control what speech they host.
This, it turns out, is a popular way to justify letting the state regulate speech. In 303 Creative, the Biden administration filed a brief in support of Colorado. Colorado’s public accommodations law “target[s] conduct,” the brief says, invoking FAIR, and it “impose[s]” only “‘incidental’ burdens on expression.” The brief cites FAIR more than two dozen times.
FAIR was authored by Chief Justice Roberts. At the oral argument in 303 Creative, he did not seem thrilled about how the decision was thrown back at him. That case involved “providing rooms,” he protested, and the Court held merely that “empty rooms don’t speak.”
The Chief Justice is on to something. Here again, the best move is not to play. Conservatives and liberals can come up with creative ways selectively to apply FAIR to this or that (but no other!) form of online speech. They can try to exploit the decision with callous craft, expecting, for some reason, that the gambit will work always in favor of their interests, and never against them. Or they can put FAIR down and affirm Reno for all.
Editorial Discretion
Which brings us to the most aggressive, and the most dangerous, of the attacks on Reno. Included within the First Amendment is a right to editorial discretion. This is why the government generally cannot tell a newspaper which articles or letters to publish, or a parade which marchers to allow, or a television channel which movies to carry. As the Eleventh Circuit said in Moody, it is why social media services are “constitutionally protected” when “they moderate and curate the content that they disseminate on their platforms.”
In Paxton, the Fifth Circuit swept this right aside. “Editorial discretion,” the court proclaimed, is not “a freestanding category of constitutionally protected speech.”
In their petition for certiorari, the platforms’ representatives cast serious doubt on this claim. They quote the Supreme Court’s discussion, across various decisions, of the “exercise [of] editorial discretion over . . . speech and speakers,” of the “editorial function” as being “itself” an “aspect of ‘speech,’” and of the right of “editorial discretion in the selection and presentation” of content. As they observe, the Fifth Circuit “essentially limited th[e] Court’s editorial discretion cases to their facts.”
That’s true—but hold on. Let us return, one last time, to 303 Creative. At argument, Justice Sotomayor sounded remarkably like Judge Oldham. “Show me where,” on the website, “it’s your message,” she asked Smith’s counsel. “How is this your story? It’s [the couple’s] story.” Counsel responded with—the right to editorial discretion. “Every page” on the website is Smith’s “message,” counsel said, “just as in a newspaper that posts an op-ed written by someone else.” Sotomayor did not seem impressed.
We must again ask whether the states would welcome consistent application of their legal principles. If Colorado successfully compels Smith to speak in 303 Creative, will it accept that it has strengthened Florida’s and Texas’s hand in Moody and Paxton? Would Florida and Texas be willing to remove the platforms’ right to editorial discretion at the price of nixing many Christian artists’ right to such discretion as well? A state could duck the question by dreaming up new and clever ways to distinguish the cases. Yes, of course. Other, very different states could do the same. That is the problem.
The Court has called for the views of the Solicitor General in Moody and Paxton. The Biden administration will be tempted to try to thread the needle. To get cute. To argue that the red-state social media laws before the Court are toxic and scary and unconstitutional, but that the blue-state social media laws in the works are beneficial and enlightened and in perfect harmony with the First Amendment.
The Solicitor General should resist the urge to make everything come out right (from a liberal perspective). Here is what she should do instead. Agree that review is warranted. Denounce SB7072 and HB20. Celebrate the right to editorial discretion. Heap praise on Reno v. ACLU. Stop.
Filed Under: 1st amendment, 303 creative v. elenis, colorado, common carrier, compelled speech, florida, free speech, internet, moody v. netchoice, netchoice v. paxton, public accommodation, reno v. aclu, supreme court, texas
Donald Trump Tells The Supreme Court That Social Media Is A Common Carrier; Never Mentions His Own Social Media Site
from the did-he-forget-what-he-owns? dept
Last month, Florida officially asked the Supreme Court to review the detailed 11th Circuit ruling which mostly upheld the district court ruling saying that Florida’s social media content moderation law was unconstitutional under the 1st Amendment. Earlier this week, NetChoice and CCIA argued that the 11th Circuit was (mostly) correct in trashing the law, while asking the Supreme Court to hear the case anyway to establish that these kinds of laws are clearly unconstitutional.
Although the Eleventh Circuit correctly condemned S.B. 7072’s core provisions, respondents nonetheless agree with Florida that this Court should grant review. The issues at stake are profoundly important, as this Court already recognized in vacating a stay of a preliminary injunction with respect to a similar Texas law. And the Fifth Circuit recently upheld that Texas law (over a vigorous dissent), thus creating a square and acknowledged circuit split. Other states, moreover, are waiting in the wings, ready to enact comparable laws that would fundamentally reshape social media websites by fiat if this Court does not step in now. The best way to put an end to this grave threat to First Amendment values is to grant both this petition and respondents’ cross-petition to consider the constitutionality of S.B. 7072 in its entirety and to bring a swift nationwide resolution to this debate.
Over the last few days, there has been a small flurry of filings from amici, with more to come. Not all of the amicus filings are all that interesting, though a few are eye-opening. A bunch of Republican-led states are (unsurprisingly) eagerly arguing that the Supreme Court needs to give them the power to force any website to host Nazi and terrorist speech in the name of “the free exchange of ideas,” which apparently no longer recognizes a private property owner’s right not to host messages they disagree with. That this position is exactly opposite of the one many of these same officials have taken when it comes to putting messages on cakes apparently does not much matter.
Speaking of totally hypocritical arguments, it caught my eye that one of the amicus briefs comes from Donald Trump himself. Now, given that he’s the owner of his very own social media website, Truth Social, which regularly engages in totally arbitrary viewpoint discrimination, I wondered if perhaps he might actually argue that websites need to have the freedom to moderate as they see fit.
Except… that’s not at all what he does. Perhaps incredibly, the amicus brief does not even mention that Trump owns his own social media platform. The statement of interest only talks about how he currently has lawsuits against Twitter, Meta, and YouTube for banning him. Those lawsuits aren’t going too well, and Trump seems to think that a law like Florida’s might fix that. That he owns his own competing platform apparently doesn’t even merit a single mention.
Amicus Donald J. Trump, 45th President of the United States, is the lead plaintiff in class action lawsuits filed against Twitter, Inc., Meta Platforms, Inc., and YouTube, LLC. Among the causes of action alleged in these cases are violations of the censorship-disclosure requirements of Fla. Stat. § 501.2041(2)(a) (“Section (2)(a)” or “(2)(a)”) and the consistent-application requirements of Fla. Stat. § 501.2041(2)(b) (“Section (2)(b)” or “(2)(b)”). Sections (2)(a) and (2)(b) were enacted by the Florida Legislature as part of Senate Bill 7072 (“S.B. 7072”). The decision of the Eleventh Circuit in NetChoice, LLC v. AG, Florida, 34 F.4th 1196 (11th Cir. 2022) (“NetChoice”) directly affects both Sections (2)(a) and (2)(b). NetChoice reviewed a district court’s order enjoining governmental enforcement of S.B. 7072. The Eleventh Circuit vacated the district court’s injunction as to Section (2)(a)’s disclosure requirements but affirmed the injunction as to Section (2)(b)’s consistency requirement. Amicus Trump has a direct interest in upholding these statutes and submits this brief to apprise the Court that Sections (2)(a) and (2)(b) are supported by long-standing common-law principles prohibiting unfair discrimination by common carriers.
The crux of Trump’s argument: these sites are all common carriers and cannot discriminate against him for inciting a violent insurrection attempt. There’s so much in this filing that seems likely to come back to bite Trump and Truth Social in cases that might eventually get filed against him. For example, he insists that social media sites are all “dumb pipes.” Someone might want to tell Devin Nunes that, according to his boss, he’s not supposed to keep banning people who make fun of Donald Trump.
The conflict between NetChoice and Paxton hinges on their different approaches to the primary function of social media platforms. In NetChoice, the Eleventh Circuit erroneously concluded that “social-media platforms aren’t ‘dumb pipes’: They’re not just servers and hard drives storing information or hosting blogs that anyone can access . . . when a user visits Facebook or Twitter, for instance, she sees a curated and edited compilation of content.” NetChoice, 34 F.4th at 1204. Conversely, Paxton correctly recognized that Platforms are in many ways just that, “dumb pipes,” because they “permit any user who agrees to their boilerplate terms of service to communicate on any topic, at any time, and for any reason.” Paxton, 49 F.4th at 461.
Then there’s a long and confused section regarding Section 230 arguing that it is a “special privilege” given by the federal government. The brief argues that the immunity given by 230 is unique in that “newspapers and television stations get no such protection.” Hilariously, though, to prove this, Trump uses two failed defamation lawsuits against news organizations:
By enacting Section 230, Congress wanted “to promote the continued development of the Internet.” 47 U.S.C. § 230(b)(1). This immunity is unique to the publishing industry; newspapers and television stations get no such protection and are plagued by costly and burdensome lawsuits. See, e.g., Palin v. N.Y. Times Co., 2022 WL 599271 (S.D.N.Y. 2022) (defamation lawsuit by Sarah Palin against the New York Times); Sandmann v. WP Company, LLC, 401 F. Supp. 3d 781 (E.D. Ky. 2019) (defamation lawsuit brought by Covington Catholic High School student Nicholas Sandmann against the Washington Post).
Those seem like… very odd choices as examples. After all, Palin’s lawsuit against the NY Times failed, as did Sandmann’s lawsuits against a variety of media orgs (though it is true that, before the judge tossed the nearly identical lawsuits against the NY Times, CBS, ABC, Gannett, and Rolling Stone, the Washington Post did settle). But these are actually good examples of why Section 230 is really just about making sure that a lawsuit targets the correct party; the only real benefit to websites is getting rid of frivolous lawsuits faster. That’s hardly a “special privilege.” Also, this argument by Trump and the examples he chooses fail to recognize that the difference between the Palin and Sandmann cases and Section 230 cases is about who is doing the speaking. In the cases he references, the speaking was done by people employed by the media companies. The whole point of 230 is that we don’t hold websites liable for what third parties post on… sites like Truth Social.
From there, Trump’s brief goes on a long rambling rant about common carriers — an issue that really isn’t directly relevant, given that neither the 5th nor the 11th Circuit ruling rested on common carrier grounds (in the 5th Circuit, Judge Andy Oldham did argue in favor of it, but that section was signed only by him; he couldn’t even get a second judge on the panel to agree with him about common carriers).
Still, Trump leans hard on the idea that social media is obviously a common carrier. The argument is somewhat muddled (perhaps unsurprisingly). It appears to argue (1) that Section 230 is a special privilege, and (2) that special privilege makes it a common carrier, and, therefore, (3) the state can require it to act in certain ways in terms of barring it from blocking certain speech. It leans heavily on weird precedents from centuries ago that definitely do not apply:
In the late 1800s, courts held that special privileges such as the grant of eminent domain powers and gifts of public land converted railroads from purely private concerns to common carriers. These special privileges were first bestowed in the early 1860s, and by easing access to land they played an essential role in the completion of the transcontinental railroad in 1869. By comparison to one-time gifts and eminent domain powers, the immunities of Section 230 are far more valuable. While it would have taken time, the private sector could have provided the funds needed for the construction of the railroads; contrariwise, only Congress could bestow immunity for defamation and other torts. Furthermore, rather than a one-time gift, Section 230 is, in effect, an annuity
Except… this is confused and wrong. Section 230 does not grant them “immunity for defamation and other torts.” It simply says that they cannot be held liable for the defamation (or other torts) of their users. That’s kind of an important distinction that seems to fly way over the heads of Trump’s lawyers.
Hilariously, his lawyers (perhaps recognizing how absurd it is to make social media websites “common carriers”) insist that even if they’re common law common carriers, that does not make them subject to all of the “regulatory burdens of the Telecommunications Act.” Basically “we think we can declare things common carriers, but we still don’t believe in net neutrality.”
From there, Trump’s lawyers just make up a bunch of nonsense falsely claiming that Section 230 only allows for moderation based on “good faith.” This is wrong, as we’ve explained. Yes, part of Section 230 mentions good faith efforts to restrict access to material, but that’s only in (c)(2) of Section 230, not (c)(1), which is what is most commonly used in court to defend moderation decisions. On top of that, multiple courts have made it clear that sites have a 1st Amendment right of editorial discretion to moderate how they see fit, unrelated to Section 230.
These are all things that Trump — who, I will remind you again, owns his own social media website, which very much benefits from the procedural benefits of Section 230 — should be supporting for his own good. But… instead, we get this nonsense:
Thus, Congress has limited a Platform’s immunity to “good faith” efforts, and courts have applied this “good faith” standard to support claims alleging anti-competitive behavior…. Moreover, the list of factors that a Platform is free to consider in censoring content is limited by the statutory language. Inclusion of the catch-all category “otherwise objectionable” does not mean it can censor content based on anything it claims to consider “objectionable.” Construing the term broadly enough “to include any or all information or content,” would render the statutory list meaningless and superfluous
That’s hilarious, given how randomly and arbitrarily Truth Social bans people.
Believe it or not, it gets funnier:
Nor does Section 230 protect discrimination based on any other basis unrelated to content—such as point of view, political influence, skin color, marital status, or friendship with the Platforms’ operators
Remember, Truth Social originally had terms of service that flat out said that annoying Truth Social employees would get you banned? While those terms have since been updated, the company does have a habit of banning people who mock Donald Trump.
Perhaps even more hilariously, Trump then argues that even if a website is a common carrier, it still gets to set whatever rules it wants, and can kick people off for violating them. I’m not joking:
NetChoice’s error is illustrated through a recent example involving Delta Air Lines’ carriage policy for big-game trophies. Conservation Force v. Delta Air Lines, Inc., 190 F. Supp. 3d 606 (N.D. Tex. 2016), aff’d, 682 Fed. Appx. 310 (5th Cir. 2017) (mem.). A passenger claimed that Delta unfairly discriminated against him by refusing to transport his big-game trophy, but the District Court rejected the argument, noting that common carriers were free to set their carriage policies provided they applied them equally to anyone using the service. Id. at 610 (quoting York Co., 70 U.S. at 112). Applying federal common-law principles, the Conservation Force court properly drew the distinction between common-carrier status (which Delta clearly had, by virtue of its “all comers” policy) and the terms of service by which Delta operated its airline (which had to be fair and uniform). The terms passed muster because Delta applied its trophy policy uniformly to all users.
Basically, with absolutely no self-awareness whatsoever, he’s arguing that even if a site is a common carrier, as long as it sets rules, it can still ban people. Which… um… would bring us right back around to where we were (more elegantly and more usefully) with Section 230 as currently interpreted.
Reading between the lines, Trump seems to be arguing (hilariously) that he was kicked off of social media because websites were biased against him in an “unfair” and “arbitrary” manner. At this point, I will once again remind you that a recent study found that Truth Social (which, again, is never mentioned once in this entire brief) has the most arbitrary content moderation of any platform.
Anyway, of course, none of this really matters. Justices Thomas and Alito have already made it clear that they buy this argument, and this amicus brief is playing to them. I do really wonder, though, if these statements will ever show up in cases against Truth Social for its moderation.
Filed Under: common carrier, content moderation, donald trump, florida, section 230, social media, supreme court
Companies: netchoice, tmtg, truth social
Republicans Sue Google To Try To Force Spam Into Your Inbox
from the spam-spam-spam-spam dept
Okay, let’s get this out of the way first: Republican politicians send a shit-ton of spam. And, no, it’s not just standard political messaging. It’s spam. And it’s often full of absolute scams. Erick Erickson, an extremely rightwing/GOP-supporting commentator, recently wrote a whole post calling out his team for spamming everyone and then blaming others for the problems their spam created.
The consultant class of the GOP is pushing the mythology that Google and Apple are flagging their emails because tech companies hate Republicans. I’ve spent a week on the phone with many Republican consultants, including those tied to campaigns whose emails make it to my inbox. They all tell me the same thing — the problem is not Google or Apple, but the GOP consultant class.
He notes that he never signed up for any of these political campaign lists, but he gets all the mail. As he notes:
These are not examples of Google abusing Republican emails. This is an example of Republican consultants abusing emails they have access to and Google and Apple protecting their users from spam.
Unfortunately, the Republican consultants have the ears of their leaders and their solution is to pressure Google and Apple to let all the spam go through. They are selling Republican elected officials on the idea that Google is nefariously blocking their emails.
The reality is the consultants will not fess up to their abuses. They will not own up to their poor stewardship of email lists. They’ll claim the Democrats are more effective because of tech company biases and not because the Democrats are actually better stewards of an email file.
He even thanks Google and Apple for “sparing” him from all this spam, from the very candidates he generally supports.
And that’s not even getting into the many other problems Republicans have had with email of late. Studies have shown that the GOP specializes in sending misleading campaign emails using “spam-like senders.”
One email sent by the re-election campaign of Senator Marsha Blackburn (R-TN) lists its sender as “Reservation Confirmation,” with the subject line “FLIGHT NUMBER: 8341.” The message itself states the former president has “invited you to join him for a private dinner at Mar-a-Lago!”
A click to the “Confirm your interest here” button redirects the recipient to a fundraiser offering a chance to win the dinner with Trump. Donations will benefit Blackburn’s campaign, in addition to that of Congresswoman Elise Stefanik (R-NY).
Some emails from the team of Representative Steve Scalise (R-LA) cryptically appear as coming from a sender simply named “Steve,” with the subject line “hey.” Scalise’s fundraising emails have also put down “URGENT RESPONSE REQUIRED” as senders. Scalise is the House Minority Whip.
In an email paid for by the Republican State Leadership Committee, which works to help Republicans gain control of state legislatures, the sender is “Me, Trump State Allies (2)”—appearing to imply a back-and-forth conversation—and the subject line is “re: @realDonaldTrump mentioned YOUR name!”
These emails are separating gullible rubes from their money, including the $250 million that Trump raised, in part with these scammy emails, by claiming it would be used to contest the 2020 election that he lost, when it basically went to Trump and his friends instead. Or how about the many Republican donors who demanded their money back after scammy, spammy emails from Trump’s campaign tricked them into making recurring donations?
Mr. Trump’s political operation began opting online donors into automatic recurring contributions by prechecking a box on its digital donation forms to take a withdrawal every week. Donors would have to notice the box and uncheck it to opt out of the donation. A second prechecked box took out another donation, known as a “money bomb.”
The Trump team then obscured that fact by burying the fine print beneath multiple lines of bold and capitalized text, a New York Times investigation earlier this year found.
Then there’s the story of the Republican candidate for Congress who tricked donors with emails that pretended to be from Trump or Ron DeSantis asking for donations, when the donations were actually for himself:
In his pursuit of Florida’s 4th Congressional District, Aguilar has used WinRed, a popular platform Republicans employ to process campaign contributions, to send a flurry of fundraising emails. But the solicitations did not mention Aguilar’s campaign or his leading competitor in the Aug. 23 primary, state Sen. Aaron Bean, who has the support of much of the state’s GOP establishment.
Instead, the messages were written in a way that suggested donations would actually go toward more prominent GOP politicians, including the former president, the governor or Ohio Rep. Jim Jordan.
“Governor DeSantis is always fighting back against Corrupt Left,” read one email that came under a logo using DeSantis’ name. “No matter how bad this country is the Fake News media and Biden Admin are OBSESSED with that [sic] Florida is doing.”
It added: “It is time to help America’s #1 Governor. Can we count on you to support DeSantis?”
They’re spammers.
But, if there’s one other thing we know about Republicans, beyond their desire to spam inboxes, it’s that **they can’t take responsibility when they fuck stuff up.** The party that pretended to be the party of “personal responsibility” has shown over the years that it’s the exact opposite. Everything coming out of Republicans these days is blaming everyone else for the stuff they themselves fucked up.
It’s pathetic.
And, as we’ve been covering, they’ve been doing it for the past six months or so with this email nonsense, and now they’ve taken it to a new, and even more ridiculous level: the Republican National Committee is suing Google because of its spam filter. I mean, what a bunch of whiny little children who can’t admit that they fucked up.
As we’ve discussed, Republican political consultants flipped out about this spam stuff, and never once considered that maybe they were the ones screwing up. They filed an FEC complaint against Google, claiming its spam filter was an unfair in-kind contribution. They introduced a bill to try to require that all email providers whitelist political spam.
And despite how disingenuous they’ve been throughout this whole thing, Google caved. It agreed to a pilot program under which politicians’ emails would be kept out of the spam folder (though recipients would be asked whether they wanted to keep receiving such emails). The pilot program needed approval from the FEC, and it was universally hated by everyone who commented (across the political aisle), because people are sick of political spam.
Yet… last week, we noted that no Republicans have even signed up for the program. As we noted at the time, it seems easier for them to just want to perpetually play the victim rather than make use of the solutions presented to them.
So, on to the lawsuit. The RNC hired the Dhillon Law Group, whose name has been showing up in pretty much every frivolous, pathetic, whiny “oh, I’m a conservative and I’m a victim” lawsuit we’ve seen over the past couple of years. We were just talking about how the firm flopped, having the SLAPP suit it filed on behalf of John Stossel against Facebook tossed out of court. But now it’s suing Google for… protecting its users with spam filters.
The lawsuit is a complete and utter joke.
It claims that most of their emails get through, except the ones at the very end of the month which go to spam. They insist that this is proof that Google is deliberately targeting them, which… makes no sense at all. It seems more likely that they ramp up their mailings at the end of each month, which trips the algorithm to designate more of their emails as spam. Basically, play shitty spam games, win shitty spam filter prizes.
The argument in the lawsuit is another one favored by idiots insisting that “big tech” discriminates against conservatives: that it’s a violation of California’s anti-discrimination laws, such as the Unruh Act. This has been tried before and failed miserably under Section 230, and that’s likely to happen with this lawsuit as well. It then tries to argue that Google’s email is a “common carrier” and that the spam filter somehow violates common carrier law. This is just utter nonsense.
Email, of course, is an open protocol. There are numerous different providers, and different ways you can set things up. If you don’t like how Google handles its spam filtering, you can pretty easily move to a different provider (or you can… go through your spam folder and mark the messages you actually want, which trains the filter). If we somehow declared every individual email service provider a common carrier, that would be the end of email, because it would make spam filtering effectively illegal. It’s complete nonsense.
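To make concrete what “training the filter” means, here’s a toy sketch of the kind of word-frequency scoring a spam filter can use. This is purely illustrative (Google does not publish Gmail’s actual filtering internals); the class name, training data, and scoring rule are all invented for the example:

```python
from collections import Counter
import math

# Toy Bayesian-style spam scorer. Illustrative only: real filters use far
# more signals (sender reputation, volume spikes, user reports, etc.).
class ToySpamFilter:
    def __init__(self):
        self.spam_words = Counter()
        self.ham_words = Counter()
        self.spam_total = 0
        self.ham_total = 0

    def train(self, text, is_spam):
        # This is what happens when a user marks a message as spam/not spam.
        words = text.lower().split()
        if is_spam:
            self.spam_words.update(words)
            self.spam_total += len(words)
        else:
            self.ham_words.update(words)
            self.ham_total += len(words)

    def spam_score(self, text):
        # Sum log-likelihood ratios with add-one smoothing; positive = spammy.
        score = 0.0
        for w in text.lower().split():
            p_spam = (self.spam_words[w] + 1) / (self.spam_total + 1)
            p_ham = (self.ham_words[w] + 1) / (self.ham_total + 1)
            score += math.log(p_spam / p_ham)
        return score

f = ToySpamFilter()
f.train("urgent donate now win dinner prize", is_spam=True)
f.train("meeting notes attached see agenda", is_spam=False)
print(f.spam_score("urgent prize donate") > 0)   # prints True
print(f.spam_score("see meeting agenda") < 0)    # prints True
```

The point of the sketch: the filter reacts to *behavior and content*, not to party affiliation. Flood inboxes with “urgent”/“donate”-style blasts and the scores go up, no political bias required.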
The entire point of this complaint is to say that spammers effectively have a fundamental right to flood your inbox. Even if you could make a credible argument that email was a common carrier (and again, you cannot), then the party who should complain is the holder of the inbox who feels that emails they want are being unfairly blocked, not the asshole spammers trying to scam you out of your money.
While the complaint heavily cites Judge Andy Oldham’s nonsense ruling in the 5th Circuit, someone should remind the RNC that they filed this case in California, which is… not in the 5th Circuit.
Unfortunately, this is not the first time a communications company has discriminated against people based on their political views and affiliation, but fortunately this means there are laws ready to combat this harm. In the 1800s, a pivotal form of communication was the telegraph and Western Union had a dominate market share across the country. By the late 1800s, “legislators grew ‘concern[ed] about the possibility that the private entities that controlled this amazing new technology would use that power to manipulate the flow of information to the public when doing so served their economic or political self-interest.’” NetChoice, LLC v. Paxton, 49 F.4th 439, 470 (5th Cir. 2022) (opinion of Oldham, J.) (quoting Genevieve Lakier, The Non-First Amendment Law of Freedom of Speech, 134 Harv. L. Rev. 2299, 2321 (2021)).
“These fears proved well-founded.” NetChoice, 49 F.4th at 470. Even though Western Union offered to serve any member of the public, it repeatedly discriminated against messages based on the message’s political views or on the person’s political affiliation. It, for example, “discriminated against certain political speech, like strike-related telegraphs.” Id.; see also Lakier, supra, at 2322. It was also “widely believed that Western Union … ‘influenc[ed] the reporting of political elections in an effort to promote the election of candidates their directors favored.’” NetChoice, 49 F.4th at 470 (quoting Lakier, supra, at 2322); see also The Blaine Men Bluffing, N.Y. Times, Nov. 6, 1884, at 5. And it was not the only time Western Union was accused of discriminating based on political views or affiliation: “Similar accusations were made about Western Union’s role in the presidential contest[] eight years earlier.” Lakier, supra, at 2322 n.114 (citing David Hochfelder, The Telegraph in America, 1832-1920, at 176 (2013)).
In response to these discriminatory practices, states across the country enacted nondiscrimination laws that prohibited businesses from “manipulating the flow of information to the public.” Lakier, supra, at 2322; see also NetChoice, 49 F.4th at 471. One such state was California. It passed laws requiring “common carriers” to timely transmit messages in a nondiscriminatory manner.
For what it’s worth, while this lawsuit heavily quotes professor Genevieve Lakier, Lakier herself has gone on record saying that Oldham misinterpreted her work on the common carrier issue and that his opinion “conveniently ignores” the many precedents that disagree with his conclusion and reject expanding common carrier law to other realms.
The lawsuit goes on, basically demanding that Google not be allowed to filter RNC emails into spam. I’m not joking:
The court should thus make clear that California’s nondiscrimination provisions apply to Google’s Gmail. Whether Google is categorized as a common carrier, public accommodation, or a business providing a service, California law prohibits Google’s spam filtration of RNC emails based on political affiliation and views. To conclude otherwise would mean that “email providers, mobile phone companies, and banks could cancel the accounts of anyone who sends an email, makes a phone call, or spends money in support of a disfavored political party, candidate, or business.”
But here’s the thing: they’re not filtering the spam based on “political affiliation and views.” They’re filtering it because it’s spam. Maybe try not to be such spammers? Or, at the very least, sign up for the stupid pilot whitelist program Google rolled out just for you?
Generally, if you can take action to avoid the supposed “harm” you’re suing over, and you don’t take those actions, then your lawsuit is not going to go very far.
Hilariously, the RNC complaint insists that “the most reasonable inference” is that Google is deliberately trying to stifle Republicans, which is ridiculous. They’re just trying to stop spam. But, here’s where the lawsuit goes even dumber: it says that even if this isn’t based on viewpoint discrimination, Google should still lose… for negligence. That’s… not how any of this works.
It is no answer to say, as Google surely will, that its spam filtering is not intentional. The most reasonable inference is that it is intentional. Regardless, Google’s conduct is at the very least negligent and unreasonable. And California law forbids that too. Common carrier law doesn’t require intentional discrimination. Neither do common law claims like negligent interference with prospective relations. Neither does California’s unfair practices law. In the end, Google has violated the law, cost the RNC numerous donations and substantial revenue, and irreparably injured the RNC’s relationship with its community.
One of the specific claims in the lawsuit is that this is a violation of California’s common carrier law, which is hilarious. Your email inbox is not a common carrier. Then there’s the Unruh claim of discrimination, an unfair competition claim (Google competes with the RNC?), and then (of course) an intentional interference with prospective economic relations. In other words, they’re effectively saying any spammer should be able to sue Google for blocking the spam because it’s stopping gullible suckers from paying up.
It’s nonsense.
There are a few other claims, including a “negligence” claim which is pretty funny. It’s negligent to place your spam in the spam folder? The details of the claim are laughable:
Google thus has a duty to receive emails sent by the RNC, and to transmit them to Gmail users’ inboxes upon reasonable terms.
Google also has a duty to transmit and deliver messages sent by the RNC to Gmail users with great care and diligence.
Google did not transmit the RNC’s emails to its users’ inboxes on reasonable terms, or exercise care and diligence in the transmission and delivery of the RNC’s emails to Gmail users because it has in bad faith, and for no accurate or reasonable reason it can explain, intercepted and diverted the RNC’s emails to Gmail users’ spam folders. Google’s political bias or hostility to the RNC is not a reasonable basis for refusing to transmit the emails to its users’ inbox and, in the alternative, its arbitrary or incompetent failure to deliver the RNC’s emails to Gmail users’ inboxes does not constitute great care and diligence.
Honestly, I still don’t understand why Democrats haven’t been parading all this nonsense in ads and speeches all over the place, calling out that Republicans are demanding that they get to infiltrate your email box and stop your spam filters from working.
Everyone hates spam.
If the Democrats were doing this, Fox News would be having a field day about this kind of ridiculousness.
Republicans used to claim they were the party of personal responsibility. Now they’re the party of “we fucked around, we found out, but now we’re going to blame you for it.” It’s just utterly pathetic.
Filed Under: california, common carrier, discrimination, emails, political emails, spam, unruh act
Companies: google, rnc
Senator Klobuchar’s Latest Bad Idea: Letting Smaller Journalism Outlets Demand Payments For Links
from the this-is-a-bad,-bad-idea dept
Look, I’m a small journalism outfit. A very small one. So, in theory, a law that effectively lets me demand free cash from Google and Facebook should be a good thing for me. But, it would actually be a disaster. That’s why I spoke out against the idea last year when Senator Amy Klobuchar and Rep. David Cicilline first floated the Journalism Competition and Preservation Act (JCPA). Earlier this year, we had a guest post from Library Futures explaining why the JCPA would be lose-lose legislation. In short, it’s a link tax bill, similar to the one written in Australia to appease (and enrich) Rupert Murdoch. It basically says that publishers can band together, with an antitrust exemption, to demand fees from bigger, more successful internet companies.
And, of course, Klobuchar (as is her usual method of operation when pushing bills that fundamentally break the internet) has decided to move forward with it anyway. She recently introduced a new version of the JCPA, with the one major change being that it only applies to smaller news orgs — those with under 1,500 employees. This would leave out the Fox Newses of the world, along with the NY Times, Washington Post, etc. At best, you can say that Klobuchar realized the original bill was just about wealth transfers from big internet companies to big media companies, and carved them out of the deal.
Of course, that also seems like a weird way to set up this bill, with potentially catastrophic consequences. We’re at a time when hedge funds — most notably Alden Capital — have been buying up newspapers and laying off tons of people while trying to squeeze cash out of the remaining husks. And this bill basically says “buy up large newspapers and cut them to under 1,500 employees.” Indeed, remember, the head of Alden not that long ago was writing op-eds saying that Google and Facebook should just pay him money. And here’s Amy Klobuchar saying “sure, you get free money **just as long as you fire enough people first.**”
That’s crazy. It’s so crazy that even the Newsguild, which has been supportive of this general concept, is like “hey wait a second, this is going to lead to journalists getting fired.”
On top of that, as soon as you get into declaring which organizations are “journalism” organizations, and which ones get this special benefit from the US government, you’ve entered dangerous 1st Amendment territory. We’ve had this issue in the past with other laws that try to carve out “covered” journalism entities. Part of the 1st Amendment is that the government cannot declare who is and who is not a journalist (otherwise it would be way too tempting to carve out journalists most critical of the government). Yet, this bill spends pages declaring who gets to be considered a journalism organization for the purposes of the law. That’s the first big problem.
But, the much bigger problem is that the bill is trying to break the internet and establish the ability to tax links.
The main function of the bill is to allow news orgs to team up, force internet companies that link to them into mandatory arbitration, and force them to pay the journalism organizations for linking to them. For linking to them. Literally for sending them traffic. The bill says that each side submits their proposal for how much the internet companies should pay the news companies, and then the arbitrator picks one side’s proposal.
But, again, let’s go back to what this is — what the internet companies are being forced to pay for. They are being forced to pay to send other websites traffic. This is ludicrous.
News orgs beg these sites for traffic. They hire SEO people to try to get more traffic. Now they’re also getting to FORCE the internet companies to PAY them for that traffic too?
Some of this may feel hidden within the bill, so let me walk you through the key parts. First, it defines a covered “online platform” as any website, mobile app, or internet services that aggregates or directs users to news articles. That is, any online tool that sends traffic to news sites.
ONLINE PLATFORM.—The term “online platform” means a website, online or mobile application, operating system, digital assistant, or online service that aggregates, displays, provides, distributes, or directs users to news articles, works of journalism, or other content, or portions thereof, generated, created, produced, or owned by eligible digital journalism providers.
To be subject to the mandatory arbitration, such an online platform has to have at least 50 million monthly active users and a market cap over $550 billion (hilariously, this would currently exclude Meta/Facebook, since its market cap has dropped a ton in the last few months and now sits well below $500 billion). So, at this point, it basically applies to Apple, Microsoft, Google, and Amazon properties.
Those are the companies that will be forced to pay up under this scheme. Then, it allows media orgs (which meet certain definitions included in the bill, including having fewer than 1,500 employees) to team up with one another to form a “joint negotiation entity.”
IN GENERAL.—An eligible digital journalism provider shall provide public notice to announce the opportunity for other eligible digital journalism providers to join a joint negotiation entity for the purpose of engaging in joint negotiations with a covered platform under this section, regarding the terms and conditions by which the covered platform may access the content of the eligible digital journalism providers that are members of the joint negotiation entity.
Okay, so now you’ve got a joint negotiation entity that can negotiate with the four companies listed above. But what the fuck are you negotiating for? The “terms and conditions by which the covered platform may access the content of the eligible digital journalism providers.” Access? What the hell does “access” mean under this law?
¯\_(ツ)_/¯
I mean, it’s not defined anywhere, because why define the most critical part of this bill? The answer is that the drafters know that it’s ridiculous to come out and say that what they really mean is you need to negotiate over how much these companies will pay to link to digital journalism outfits.
But, linking is a fundamental feature and right on the open internet. Setting up any sort of scheme where websites are being forced to pay to link is fundamentally against the nature of the open web. It sets us off down a very dangerous and very slippery slope.
Anyway, once you have this joint negotiation entity, you literally get to demand payment for links (euphemistically called “access”). And if Google or whoever is like “fuck you, it’s a link, we’re sending you traffic already, why should we pay you for already helping you out?” the joint negotiating entity can force the companies into arbitration where each side submits how much they should pay, and the arbitrator has to pick one side (and not anywhere in the middle).
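The “each side submits a number and the arbitrator must pick one” setup is what’s known as final-offer (or “baseball”) arbitration. Here’s a minimal sketch of the mechanism; note that the selection rule (picking whichever proposal is closer to the arbitrator’s own valuation) is a common convention I’m assuming for illustration, since the bill doesn’t spell out the criterion:

```python
# Sketch of final-offer ("baseball") arbitration: the arbitrator must
# adopt one side's proposal outright and can never split the difference.
def final_offer_arbitration(platform_offer: float,
                            publisher_demand: float,
                            arbitrator_valuation: float) -> float:
    """Return whichever party's proposal is closer to the arbitrator's
    own estimate of 'fair market value' (assumed rule, for illustration)."""
    if abs(platform_offer - arbitrator_valuation) <= \
       abs(publisher_demand - arbitrator_valuation):
        return platform_offer
    return publisher_demand

# The platform offers $0 (it's already sending traffic for free); the
# publisher demands $10M; the arbitrator pegs "fair value" at $2M.
print(final_offer_arbitration(0.0, 10_000_000.0, 2_000_000.0))  # prints 0.0
```

The design is supposed to push both sides toward reasonable numbers, since an extreme proposal risks the arbitrator picking the other side’s figure wholesale. But that only makes sense when there’s something real to value; here the thing being priced is a link.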
Also, not agreeing to negotiate — again, to pay for something that no one should ever pay for — under this law is deemed as “not conducting negotiations in good faith.”
And how much are the companies supposed to pay for sending you free traffic? The “fair market value” based on “the investment of the digital publisher.” Really.
This whole thing is based on a fundamental lie that you need a license to link. But that’s just not true. Copyright does not cover links. There is no license to link. And yet the bill pretends there is one:
At any point after a notice is sent to the covered platform to initiate joint negotiations under subsection (a)(2), the eligible digital journalism providers that are members of the joint negotiation entity may jointly deny the covered platform access to content licensed or produced by such eligible digital journalism providers.
Deny access? What? That means… deny them the ability to send you traffic? I mean, look, if digital publications don’t want traffic from Google, they can just set that up technically on their site, by blocking indexing with robots.txt and sending any referral traffic from Google into a black hole. But, fundamentally, this bill is just confused about linking. You don’t need a license to link. You don’t need a license for snippets and the headline. That’s fair use.
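In fact, “denying access” to Google is already a couple of lines of plain-text configuration at the site root, no legislation required. A sketch (the blanket `Disallow: /` rules here are just for illustration; a real publisher would scope them):

```
# robots.txt at the site root: asks Google's crawler not to index anything
User-agent: Googlebot
Disallow: /

# Googlebot-News governs inclusion in Google News specifically
User-agent: Googlebot-News
Disallow: /
```

Notably, essentially no news publisher actually does this, because they want the traffic. Which tells you everything about what this bill is really for.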
The really funny thing about this bill is it refuses to admit it’s a copyright bill in disguise. Platforms have fair use rights to post a snippet of news content along with a link, and the link is just a fundamental way in which the internet works. One that this bill is attempting to break.
Also, remember that section above where, somewhat hilariously, digital publications can magically tell the big online platforms that they’re denying them “access”? Well, the bill also says that the platforms CANNOT JUST REFUSE TO LINK. I only wish I were joking.
No covered platform may retaliate against an eligible digital journalism provider for participating in a negotiation conducted under section 3, or an arbitration conducted under section 4, including by refusing to index content or changing the ranking, identification, modification, branding, or placement of the content of the eligible digital journalism provider on the covered platform.
Congrats, Senator Klobuchar, you’ve just created a must-carry provision for news aggregators. And here’s the best part: the misinfo providers out there can now effectively force their way into Google News by forming one of these joint negotiating entities, and then pointing to this section and saying “Google refuses to index my content.”
Who knew that Amy Klobuchar wanted to force disinfo peddlers into Google News?
Everything — and I do mean everything — about this bill is ridiculous. It’s a bizarre attempt to do an end-run around antitrust law, copyright law, and common carrier law… to force Google, Apple, Amazon, and Microsoft to pay for linking and sending traffic to digital publishers who are too incompetent to figure out how to properly monetize incoming traffic.
I can’t see how anyone thinks this is a good idea. And, again, I run one of the companies that in theory would “benefit” from this nonsense by getting free money.
I used to just think that Senator Klobuchar was ignorant about how the internet worked. But considering how frequently she releases absolutely ridiculous and dangerous bills about the internet, I’m beginning to realize that she is deliberately seeking to destroy it.
Filed Under: aggregators, amy klobuchar, antitrust, common carrier, copyright, jcpa, journalism, link tax, must carry, news
This Is Really, Really Dumb: Ohio Court Says Google May Be A Common Carrier
from the it's-not,-stop-it dept
We’ve gone into detail as to why it makes no sense at all, legally or conceptually, to call a website a common carrier. We’ve also explained how conservatives — bizarrely the ones pushing for this, despite decades of claiming that common carrier designations were an affront to all that is good and holy — aren’t going to like it if websites are declared common carriers. And, we just had this fantastic ruling in the 11th Circuit explaining, in clear and direct terms, why websites are not common carriers.
And, yet, now a state court in Ohio has said that Google may just be a common carrier. There’s a lot going on here, but it’s a really dumb ruling by a very confused judge. This is the case that we wrote about a year ago, in which Ohio filed a weird lawsuit that reads like it wants to be an antitrust lawsuit against Google, but focuses on declaring the company to be a “common carrier.” As we noted when that lawsuit came out, most of it was completely nonsensical. Even if Ohio got what it wanted, it still wasn’t clear what it would mean for Google to “not discriminate” against websites, because the entire point of a search engine is to discriminate. It ranks its results and those rankings are a form of discrimination: discriminating against less relevant and useful results in favor of more relevant and useful results.
So, it seemed fairly obvious that this was a garbage lawsuit filed for garbage reasons. And, yet, Judge James Schuck has now allowed it to go forward rather than dismissing it. There’s a lot of nonsense in the ruling, but let’s start with what a common carrier means under Ohio law:
Under Ohio law, a common carrier is defined as one who undertakes for hire to transport persons or property, and holds itself out to the public as ready and willing to serve the public indifferently and impartially to the limits of its capacity.
So, um, Google search meets literally none of those conditions. It is transporting neither persons nor property. It does not serve the public indifferently and impartially, because you go to Google in the first place to get Google’s recommendations on how to answer your query. There is no such thing as an impartial search result. That’s not a thing.
But, apparently, the judge here has a different view of the world.
Thus, there must be a “public profession or holding out to serve the public.” … In that regard, the State has alleged that Google’s stated mission is to “organize the world’s information and make it universally accessible and usable.” … A reasonable factfinder could conclude this unsolicited admission by Google, if true, satisfies such a standard. Google’s response–its citation to “Our Approach to Search” from its “How Search Works” webpage–goes beyond the four corners of the State’s Complaint and cannot be considered at this stage of the proceeding.
So, already, this is… bizarre. Saying that you want to organize the world’s information and make it universally accessible is a message aimed at the users of the search engine, not the websites it links to (though even that shouldn’t much matter), and the state’s complaint is not that Google is blocking users from doing searches… just that it doesn’t like how Google organizes results.
So, already, the judge is confusing two different parties here. Also, a marketing message about organizing the world’s information is, in no way, a legal admission that it will include ALL information.
It gets worse. Google points out that it’s not a common carrier because it doesn’t carry anything, as I’ve pointed out. The judge shrugs that off:
Herein lies the difficulty in applying 18th century common law to 21st century technology and commerce. In the internet age, information is often as valuable as goods. From telegraph, land-line telephones, cable television, and cellular telephones, the law of what is transported and how it is transported has developed over time. The State has alleged that Google carries information. For purposes of the present posture, the State’s allegations are sufficient.
I mean, what? While it’s true that technology has changed, we also have many, many, many decades of law regarding common carriers for communications. And Google search does not “carry” information under any of those. To simply say that alleging otherwise is enough is ridiculous. And it opens up the ability for the state to effectively harass almost any business by claiming it’s a common carrier.
Then there’s the issue of a common carrier service requiring payment. As Google rightly notes, you don’t pay to use Google search. The state argued this doesn’t matter because you “pay” with “data.” Of course, if that’s accurate, then you can argue ANYTHING is a pay service, so long as there’s some sort of benefit involved. That’s nonsense. But the judge buys it, claiming “an inference may fairly be drawn” that by using Google’s search and providing the company data, you are paying for it.
The judge also notes that direct fees are not necessary for common carriage, pointing to elevators.
To this extent, it appears more recent law has shifted from requiring a direct fee paid to the carrier. A mall does not charge a fee to members of the public who use its escalators. An office complex does not charge a fee to members of the public who use its elevators. An airport does not charge a fee to members of the public who use its terminals. Nonetheless, the availability of these conduits to the general public provides an important ancillary benefit to the owners of the mall, office building, and airport. In return for providing this important service, tenants rent space and perhaps pay more for that space because the landlord is able to provide the tenants’ customers with better and quicker access to the tenants’ spaces. No direct fee is paid to the landlord by the customers, but cases suggest the landlord is still functioning as a common carrier in those situations.
The judge then creates two hypotheticals: a version of Uber that does not exist, one that is entirely advertising supported, and a subway system that is entirely taxpayer supported, noting that there would be no direct fee for either, but both could be deemed common carriers. The Uber example seems particularly sketchy, since Uber certainly reserves the right to reject passengers, so I don’t see how the argument applies there. And as for a government-provided service, well, that’s an entirely different category of service, so again… not applicable.
But the judge deems that the common carrier argument can move forward. It’s not final; this is just the motion-to-dismiss stage, where the judge is supposed to take everything the plaintiff pleads as true. So it could turn around later. But, as a start, this seems ridiculous: under this reasoning, the government can allege that almost any business is a common carrier and force it to do all sorts of things.
From there, the case shifts to the question of whether, even if the company is a common carrier, it is a “public utility.” Here, the court rightly concludes that even if Google is a common carrier, it is clearly not a public utility. It goes through a bunch of reasons why, some of which I think are a bit sketchy, but the main summary:
While it is no doubt a popular service, the public has no legal right to demand a device to search the internet. The lack of regulation means that Google is free to stop providing its search platform whenever it chooses. It could choose to focus on other parts of its company, or–as unlikely as it may seem–go out of business entirely. Google needn’t give notice or reason before doing so….
The public has a legal right to demand or receive electric, gas, water, and solid waste removal…. If the provider of these services were to cease operating, the public would be severely harmed by not having these essential public services. The public would rightly ask what the government would do to fill that void. This is the definition of an essential public service….
While Google Search is inarguably convenient and often used, it does not provide a fundamental life-essential service that the public has a right to demand and receive. Google Search barely existed two decades ago.
And even though Google Search has a 90 percent market share according to the State’s Complaint, were Google Search to cease operating, Google’s competitors, like Bing, Ask, and Duck Go, would undoubtedly fill the void left by Google’s departure. The minimal inconvenience of leaving users to type the web address of a different search engine into their search bars is not equivalent to the significant harm faced by the public if the local water company shuts down its pipes or the local electric company powers down the grid.
So, that’s all correct. But, weighing it against the common carrier analysis is unfortunate. Because it feels like the judge is trying to have it both ways here.
Then the court responds to Google’s (correct!) claim that declaring it a common carrier infringes on its 1st Amendment rights, and again, the 11th Circuit’s recent ruling makes it clear why it does. Unfortunately, Ohio is not in the 11th.
Instead, the court says that simply declaring Google a common carrier does not infringe on the 1st Amendment, as the real issue is what rules the state then requires the common carrier to follow. The court argues those could violate the 1st Amendment… but also, might not. I believe the judge then misstates several key rulings, including the same ones the 11th Circuit just used to invalidate much of Florida’s law, but again, Ohio ain’t in the 11th.
Basically, the court here says that the cases around compelled speech focused on situations where “the host’s message was impacted by the speech it was forced to accommodate,” but argues that isn’t the case with Google search.
There is minimal concern that Google Search’s users will believe that Google Search’s results constitute Google’s own speech. When a user searches for a speech by former President Donald Trump on Google Search and that speech is retrieved by Google with a link to the speech on YouTube, no rational person would conclude that Google is associating with President Trump or endorsing what is seen in the video.
What a weird hypothetical. And wrong. The issue is not the underlying content. The part that is Google’s expression is the search ranking. And people do associate that with Google. If Google thinks a particular search result belongs at the top of a search results page, that’s Google’s expression. The underlying content is something different altogether.
And, um, why are we even using an example involving Trump? No one is arguing that Google isn’t linking people to Trump speeches if they search for Trump speeches.
If the State obtains the relief it seeks in this case–an order that Google not self-preference–then any such concern of forced association would be all the more attenuated because the public would know that Google was being forced to host that video.
Huh? This is the most tautological bullshit I’ve seen in a while. It’s okay to compel speech, because once we compel speech people will know it’s compelled speech, and therefore, they’ll know that the host didn’t want that speech? How does that make any sense at all? Under that standard, the government can always compel speech.
And why does the court assume that the entirety of the public will know about this new regulation declaring the company a common carrier? What a weird bit of reasoning.
Either way, the case is not yet over, but it’s a very confused ruling. I don’t know enough about civil procedure rules in Ohio to know if Google can immediately appeal this ridiculousness, or if they have to move forward to discovery and summary judgment, but what a mess.
The standards here are nonsense, and basically open up any business to being declared a common carrier, based on the whims (or more likely, the political grandstanding nonsense) of any government official who wants to create a culture war by blaming a company.
Ohio: do better.
Filed Under: 1st amendment, common carrier, james schuck, ohio, public utility, search, search ranking
Companies: google
11th Circuit Disagrees With The 5th Circuit (But Actually Explains Its Work): Florida’s Social Media Bill Still (Mostly) Unconstitutional
from the boom dept
Well, well. As we still wait to see what the Supreme Court will do about the 5th Circuit’s somewhat bizarre and reasonless reinstatement of Texas’ ridiculously bad social media content moderation bill, the 11th Circuit has come out with what might be a somewhat rushed decision going mostly in the other direction, saying that most of Florida’s content moderation bill is, as the lower court said, unconstitutional. It’s worth reading the entire decision, which may take a bit longer than the 5th Circuit’s one-sentence reinstatement of the law, as it makes a lot of good points. I still think that the court is missing some important points about the parts of the law that it has reinstated (around transparency), but we’ll have another post on that shortly (and I hope those mistakes may be fixed with more briefing).
As for what the court got right: it tossed the key parts of the law around moderation, saying that those were an easy call as unconstitutional, just like the lower court said. The government cannot mandate how a website handles content moderation. The ruling opens strong:
Not in their wildest dreams could anyone in the Founding generation have imagined Facebook, Twitter, YouTube, or TikTok. But “whatever the challenges of applying the Constitution to ever-advancing technology, the basic principles of freedom of speech and the press, like the First Amendment’s command, do not vary when a new and different medium for communication appears.” Brown v. Ent. Merchs. Ass’n, 564 U.S. 786, 790 (2011) (quotation marks omitted). One of those “basic principles”—indeed, the most basic of the basic—is that “[t]he Free Speech Clause of the First Amendment constrains governmental actors and protects private actors.” Manhattan Cmty. Access Corp. v. Halleck, 139 S. Ct. 1921, 1926 (2019). Put simply, with minor exceptions, the government can’t tell a private person or entity what to say or how to say it.
The court effectively laughs off Florida’s argument that social media is no longer considered a “private actor” and effectively mocks the claims, made by Florida, that “the ‘big tech’ oligarchs in Silicon Valley” are trying to “silence conservative speech in favor of a ‘radical leftist’ agenda.” The 1st Amendment protects companies’ right to moderate how they see fit:
We hold that it is substantially likely that social-media companies—even the biggest ones—are “private actors” whose rights the First Amendment protects, Manhattan Cmty., 139 S. Ct. at 1926, that their so-called “content-moderation” decisions constitute protected exercises of editorial judgment, and that the provisions of the new Florida law that restrict large platforms’ ability to engage in content moderation unconstitutionally burden that prerogative. We further conclude that it is substantially likely that one of the law’s particularly onerous disclosure provisions—which would require covered platforms to provide a “thorough rationale” for each and every content-moderation decision they make—violates the First Amendment. Accordingly, we hold that the companies are entitled to a preliminary injunction prohibiting enforcement of those provisions.
As noted above, the court also says that a few disclosure/transparency provisions it finds “far less burdensome” are “unlikely” to violate the 1st Amendment, and it vacates that part of the lower court ruling. I still think this is incorrect, but, as noted, we’ll explain that part in another post.
For the most part, this is a fantastic ruling, explaining clearly why content moderation is protected by the 1st Amendment. And, because I know that some supporters of Florida in our comments kept insisting that the lower court decision was only because it was a “liberal activist” judge, I’ll note that this ruling was written by Judge Kevin Newsom, who was appointed to the court by Donald Trump (and the other two judges on the panel were also nominated by Republican Presidents).
The ruling kicks off by noting, correctly, that social media is mostly made up of speech by third parties, and also (thankfully!) recognizing that it’s not just the giant sites, but smaller sites as well:
At their core, social-media platforms collect speech created by third parties—typically in the form of written text, photos, and videos, which we’ll collectively call “posts”—and then make that speech available to others, who might be either individuals who have chosen to “follow” the “post”-er or members of the general public. Social-media platforms include both massive websites with billions of users—like Facebook, Twitter, YouTube, and TikTok— and niche sites that cater to smaller audiences based on specific interests or affiliations—like Roblox (a child-oriented gaming network), ProAmericaOnly (a network for conservatives), and Vegan Forum (self-explanatory)
It’s good that they recognize that these kinds of laws impact smaller companies as well.
From there the court makes “three important points”: private websites are not the government, social media is different than a newspaper, and social media are not “dumb pipes” like traditional telecom services:
Three important points about social-media platforms: First—and this would be too obvious to mention if it weren’t so often lost or obscured in political rhetoric—platforms are private enterprises, not governmental (or even quasi-governmental) entities. No one has an obligation to contribute to or consume the content that the platforms make available. And correlatively, while the Constitution protects citizens from governmental efforts to restrict their access to social media, see Packingham v. North Carolina, 137 S. Ct. 1730, 1737 (2017), no one has a vested right to force a platform to allow her to contribute to or consume social-media content.
Second, a social-media platform is different from traditional media outlets in that it doesn’t create most of the original content on its site; the vast majority of “tweets” on Twitter and videos on YouTube, for instance, are created by individual users, not the companies that own and operate Twitter and YouTube. Even so, platforms do engage in some speech of their own: A platform, for example, might publish terms of service or community standards specifying the type of content that it will (and won’t) allow on its site, add addenda or disclaimers to certain posts (say, warning of misinformation or mature content), or publish its own posts.
Third, and relatedly, social-media platforms aren’t “dumb pipes”: They’re not just servers and hard drives storing information or hosting blogs that anyone can access, and they’re not internet service providers reflexively transmitting data from point A to point B. Rather, when a user visits Facebook or Twitter, for instance, she sees a curated and edited compilation of content from the people and organizations that she follows. If she follows 1,000 people and 100 organizations on a particular platform, for instance, her “feed”—for better or worse—won’t just consist of every single post created by every single one of those people and organizations arranged in reverse-chronological order. Rather, the platform will have exercised editorial judgment in two key ways: First, the platform will have removed posts that violate its terms of service or community standards—for instance, those containing hate speech, pornography, or violent content. See, e.g., Doc. 26-1 at 3–6; Facebook Community Standards, Meta, https://transparency.fb.com/policies/community-standards (last accessed May 15, 2022). Second, it will have arranged available content by choosing how to prioritize and display posts—effectively selecting which users’ speech the viewer will see, and in what order, during any given visit to the site.
Each of these points is important and effectively dispenses with much of the nonsense we’ve seen people claim in the past. First, it tosses aside the incorrect and misleading argument that some have read into Packingham’s observation that the internet is a “public square.” Here, the judges correctly note that Packingham only stands for the rule that the government cannot restrict people’s access to social media, not that it can force private companies to host them.
Also, I love the fact that the court makes the “not a dumb pipe” argument, and even uses the line “reflexively transmitting data from point A to point B.” That’s nearly identical to the language that I’ve used in explaining why it makes no sense to call social media a common carrier.
Next, the court points out, again accurately, that the purpose of a social media website is to act as an “intermediary” between users, but also (and this is important) in crafting different types of online communities, including focusing on niches:
Accordingly, a social-media platform serves as an intermediary between users who have chosen to partake of the service the platform provides and thereby participate in the community it has created. In that way, the platform creates a virtual space in which every user—private individuals, politicians, news organizations, corporations, and advocacy groups—can be both speaker and listener. In playing this role, the platforms invest significant time and resources into editing and organizing—the best word, we think, is curating—users’ posts into collections of content that they then disseminate to others. By engaging in this content moderation, the platforms develop particular market niches, foster different sorts of online communities, and promote various values and viewpoints.
This is also an important point that is regularly ignored or overlooked. It’s the point that the authors of Section 230 have tried to drive home in explaining why they wrote the law in the first place. When they talk about “diversity of political discourse” in the law, they never meant “all on the same site,” but rather giving websites the freedom to cater to different audiences. It’s fantastic that this panel recognizes that fact.
When we get to the meat of the opinion, explaining the decision, the court again makes a bunch of very strong, and very correct points, about the impact of a law like Florida’s.
Social-media platforms like Facebook, Twitter, YouTube, and TikTok are private companies with First Amendment rights, see First Nat’l Bank of Bos. v. Bellotti, 435 U.S. 765, 781–84 (1978), and when they (like other entities) “disclos[e],” “publish[],” or “disseminat[e]” information, they engage in “speech within the meaning of the First Amendment.” Sorrell v. IMS Health Inc., 564 U.S. 552, 570 (2011) (quotation marks omitted). More particularly, when a platform removes or deprioritizes a user or post, it makes a judgment about whether and to what extent it will publish information to its users—a judgment rooted in the platform’s own views about the sorts of content and viewpoints that are valuable and appropriate for dissemination on its site. As the officials who sponsored and signed S.B. 7072 recognized when alleging that “Big Tech” companies harbor a “leftist” bias against “conservative” perspectives, the companies that operate social-media platforms express themselves (for better or worse) through their content-moderation decisions. When a platform selectively removes what it perceives to be incendiary political rhetoric, pornographic content, or public-health misinformation, it conveys a message and thereby engages in “speech” within the meaning of the First Amendment.
Laws that restrict platforms’ ability to speak through content moderation therefore trigger First Amendment scrutiny. Two lines of precedent independently confirm this commonsense conclusion: first, and most obviously, decisions protecting exercises of “editorial judgment”; and second, and separately, those protecting inherently expressive conduct.
The key point here: the court recognizes that content moderation is about “editorial judgment” and, as such, easily gets 1st Amendment protection. It cites case after case holding this, focusing heavily on the ruling in Turner v. FCC. This is actually important, as some people (notably FCC commissioner Brendan Carr) trying to tear down Section 230’s protections have ridiculously tried to argue that the ruling in Turner supports their views. But those people are wrong, as the court clearly notes:
So too, in Turner Broadcasting Systems, Inc. v. FCC, the Court held that cable operators—companies that own cable lines and choose which stations to offer their customers—“engage in and transmit speech.” 512 U.S. at 636. “[B]y exercising editorial discretion over which stations or programs to include in [their] repertoire,” the Court said, they “seek to communicate messages on a wide variety of topics and in a wide variety of formats.” Id. (quotation marks omitted); see also Ark. Educ. TV Comm’n v. Forbes, 523 U.S. 666, 674 (1998) (“Although programming decisions often involve the compilation of the speech of third parties, the decisions nonetheless constitute communicative acts.”). Because cable operators’ decisions about which channels to transmit were protected speech, the challenged regulation requiring operators to carry broadcast-TV channels triggered First Amendment scrutiny
(Just as an aside, this also applies to all the nonsense we’ve heard people claim in trying to argue that OAN can force DirecTV to continue to carry it).
Either way, the court drives home: content moderation is editorial judgment.
Social-media platforms’ content-moderation decisions are, we think, closely analogous to the editorial judgments that the Supreme Court recognized in Miami Herald, Pacific Gas, Turner, and Hurley. Like parade organizers and cable operators, social-media companies are in the business of delivering curated compilations of speech created, in the first instance, by others. Just as the parade organizer exercises editorial judgment when it refuses to include in its lineup groups with whose messages it disagrees, and just as a cable operator might refuse to carry a channel that produces content it prefers not to disseminate, social-media platforms regularly make choices “not to propound a particular point of view.” Hurley, 515 U.S. at 575. Platforms employ editorial judgment to convey some messages but not others and thereby cultivate different types of communities that appeal to different groups. A few examples:
* YouTube seeks to create a “welcoming community for viewers” and, to that end, prohibits a wide range of content, including spam, pornography, terrorist incitement, election and public-health misinformation, and hate speech.
* Facebook engages in content moderation to foster “authenticity,” “safety,” “privacy,” and “dignity,” and accordingly, removes or adds warnings to a wide range of content—for example, posts that include what it considers to be hate speech, fraud or deception, nudity or sexual activity, and public-health misinformation.
* Twitter aims “to ensure all people can participate in the public conversation freely and safely” by removing content, among other categories, that it views as embodying hate, glorifying violence, promoting suicide, or containing election misinformation.
* Roblox, a gaming social network primarily for children, prohibits “[s]ingling out a user or group for ridicule or abuse,” any sort of sexual content, depictions of and support for war or violence, and any discussion of political parties or candidates.
* Vegan Forum allows non-vegans but “will not tolerate members who promote contrary agendas.”
It also notes that this 1st Amendment right enables forums focused on specific political agendas as well:
And to be clear, some platforms exercise editorial judgment to promote explicitly political agendas. On the right, ProAmericaOnly promises “No Censorship | No Shadow Bans | No BS | NO LIBERALS.” And on the left, The Democratic Hub says that its “online community is for liberals, progressives, moderates, independent[s] and anyone who has a favorable opinion of Democrats and/or liberal political views or is critical of Republican ideology.”
All such decisions about what speech to permit, disseminate, prohibit, and deprioritize—decisions based on platforms’ own particular values and views—fit comfortably within the Supreme Court’s editorial-judgment precedents.
As for Florida’s argument that since most content on social media is not vetted first, there is no editorial judgment in content moderation, the court says that’s obviously incorrect.
With respect, the State’s argument misses the point. The “conduct” that the challenged provisions regulate—what this entire appeal is about—is the platforms’ “censorship” of users’ posts—i.e., the posts that platforms do review and remove or deprioritize. The question, then, is whether that conduct is expressive. For reasons we’ve explained, we think it unquestionably is.
There’s also a good footnote debunking the claim that content moderation isn’t expressive because the rules aren’t intending to “convey a particularized message.” As the court notes, that’s just silly:
To the extent that the states argue that social-media platforms lack the requisite “intent” to convey a message, we find it implausible that platforms would engage in the laborious process of defining detailed community standards, identifying offending content, and removing or deprioritizing that content if they didn’t intend to convey “some sort of message.” Unsurprisingly, the record in this case confirms platforms’ intent to communicate messages through their content-moderation decisions—including that certain material is harmful or unwelcome on their sites. See, e.g., Doc. 25-1 at 2 (declaration of YouTube executive explaining that its approach to content moderation “is to remove content that violates [its] policies (developed with outside experts to prevent real-world harms), reduce the spread of harmful misinformation . . . and raise authoritative and trusted content”); Facebook Community Standards, supra (noting that Facebook moderates content “in service of” its “values” of “authenticity,” “safety,” “privacy,” and “dignity”).
From there, the court digs into whether the two favorite cases cited regularly by both Florida and Texas in defense of these laws have any weight here. The two cases are Rumsfeld v. FAIR (regarding military recruitment on law school campuses) and Pruneyard v. Robins (regarding a shopping mall where people wanted to hand out petitions). We’ve explained in detail in the past why neither case works here, but we’ll let the 11th Circuit panel handle the details:
We begin with the “hosting” cases. The first decision to which the State points, PruneYard, is readily distinguishable. There, the Supreme Court affirmed a state court’s decision requiring a privately owned shopping mall to allow members of the public to circulate petitions on its property. 447 U.S. at 76–77, 88. In that case, though, the only First Amendment interest that the mall owner asserted was the right “not to be forced by the State to use [its] property as a forum for the speech of others.” Id. at 85. The Supreme Court’s subsequent decisions in Pacific Gas and Hurley distinguished and cabined PruneYard. The Pacific Gas plurality explained that “[n]otably absent from PruneYard was any concern that access to this area might affect the shopping center owner’s exercise of his own right to speak: the owner did not even allege that he objected to the content of the pamphlets.” 475 U.S. at 12 (plurality op.); see also id. at 24 (Marshall, J., concurring in the judgment) (“While the shopping center owner in PruneYard wished to be free of unwanted expression, he nowhere alleged that his own expression was hindered in the slightest.”); Hurley, 515 U.S. at 580 (noting that the “principle of speaker’s autonomy was simply not threatened in” PruneYard). Because NetChoice asserts that S.B. 7072 interferes with the platforms’ own speech rights by forcing them to carry messages that contradict their community standards and terms of service, PruneYard is inapposite.
Nice, simple, and straightforward. As for Rumsfeld v. FAIR, that case is also easily distinguishable:
FAIR may be a bit closer, but it, too, is distinguishable. In that case, the Supreme Court upheld a federal statute—the Solomon Amendment—that required law schools, as a condition to receiving federal funding, to allow military recruiters the same access to campuses and students as any other employer. 547 U.S. at 56. The schools, which had restricted recruiters’ access because they opposed the military’s “Don’t Ask, Don’t Tell” policy regarding gay servicemembers, protested that requiring them to host recruiters and post notices on their behalf violated the First Amendment. Id. at 51. But the Court held that the law didn’t implicate the First Amendment because it “neither limit[ed] what law schools may say nor require[d] them to say anything.” Id. at 60. In so holding, the Court rejected two arguments for why the First Amendment should apply—(1) that the Solomon Amendment unconstitutionally required law schools to host the military’s speech, and (2) that it restricted the law schools’ expressive conduct. Id. at 60–61.
[….]
FAIR isn’t controlling here because social-media platforms warrant First Amendment protection on both of the grounds that the Court held that law-school recruiting services didn’t.
First, S.B. 7072 interferes with social-media platforms’ own “speech” within the meaning of the First Amendment. Social-media platforms, unlike law-school recruiting services, are in the business of disseminating curated collections of speech. A social-media platform that “exercises editorial discretion in the selection and presentation of” the content that it disseminates to its users “engages in speech activity.” Ark. Educ. TV Comm’n, 523 U.S. at 674; see Sorrell, 564 U.S. at 570 (explaining that the “dissemination of information” is “speech within the meaning of the First Amendment”); Bartnicki v. Vopper, 532 U.S. 514, 527 (2001) (“If the acts of ‘disclosing’ and ‘publishing’ information do not constitute speech, it is hard to imagine what does fall within that category.” (cleaned up)). Just as the must-carry provisions in Turner “reduce[d] the number of channels over which cable operators exercise[d] unfettered control” and therefore triggered First Amendment scrutiny, 512 U.S. at 637, S.B. 7072’s content-moderation restrictions reduce the number of posts over which platforms can exercise their editorial judgment. Because a social-media platform itself “spe[aks]” by curating and delivering compilations of others’ speech—speech that may include messages ranging from Facebook’s promotion of authenticity, safety, privacy, and dignity to ProAmericaOnly’s “No BS | No LIBERALS”—a law that requires the platform to disseminate speech with which it disagrees interferes with its own message and thereby implicates its First Amendment rights.
Second, social-media platforms are engaged in inherently expressive conduct of the sort that the Court found lacking in FAIR. As we were careful to explain in FLFNB I, FAIR “does not mean that conduct loses its expressive nature just because it is also accompanied by other speech.” 901 F.3d at 1243–44. Rather, “[t]he critical question is whether the explanatory speech is necessary for the reasonable observer to perceive a message from the conduct.” Id. at 1244. And we held that an advocacy organization’s food-sharing events constituted expressive conduct from which, “due to the context surrounding them, the reasonable observer would infer some sort of message”—even without reference to the words “Food Not Bombs” on the organization’s banners. Id. at 1245. Context, we held, is what differentiates “activity that is sufficiently expressive [from] similar activity that is not”—e.g., “the act of sitting down” from “the sit-in by African Americans at a Louisiana library” protesting segregation. Id. at 1241 (citing Brown v. Louisiana, 383 U.S. 131, 141–42 (1966)).
Unlike the law schools in FAIR, social-media platforms’ content-moderation decisions communicate messages when they remove or “shadow-ban” users or content. Explanatory speech isn’t “necessary for the reasonable observer to perceive a message from,” for instance, a platform’s decision to ban a politician or remove what it perceives to be misinformation. Id. at 1244. Such conduct—the targeted removal of users’ speech from websites whose primary function is to serve as speech platforms—conveys a message to the reasonable observer “due to the context surrounding” it. Id. at 1245; see also Coral Ridge, 6 F.4th at 1254. Given the context, a reasonable observer witnessing a platform remove a user or item of content would infer, at a minimum, a message of disapproval. Thus, social-media platforms engage in content moderation that is inherently expressive notwithstanding FAIR
The court then takes a further hatchet to both FAIR and PruneYard:
The State asserts that Pruneyard and FAIR—and, for that matter, the Supreme Court’s editorial-judgment decisions—establish three “guiding principles” that should lead us to conclude that S.B. 7072 doesn’t implicate the First Amendment. We disagree.
The first principle—that a regulation must interfere with the host’s ability to speak in order to implicate the First Amendment— does find support in FAIR. See 547 U.S. at 64. Even so, the State’s argument—that S.B. 7072 doesn’t interfere with platforms’ ability to speak because they can still affirmatively dissociate themselves from the content that they disseminate—encounters two difficulties. As an initial matter, in at least one key provision, the Act defines the term “censor” to include “posting an addendum,” i.e., a disclaimer—and thereby explicitly prohibits the very speech by which a platform might dissociate itself from users’ messages. Fla. Stat. § 501.2041(1)(b). Moreover, and more fundamentally, if the exercise of editorial judgment—the decision about whether, to what extent, and in what manner to disseminate third-party content—is itself speech or inherently expressive conduct, which we have said it is, then the Act does interfere with platforms’ ability to speak. See Pacific Gas, 475 U.S. at 10–12, 16 (plurality op.) (noting that if the government could compel speakers to “propound . . . messages with which they disagree,” the First Amendment’s protection “would be empty, for the government could require speakers to affirm in one breath that which they deny in the next”).
The State’s second principle—that in order to trigger First Amendment scrutiny a regulation must create a risk that viewers or listeners might confuse a user’s and the platform’s speech—finds little support in our precedent. Consumer confusion simply isn’t a prerequisite to First Amendment protection. In Miami Herald, for instance, even though no reasonable observer would have mistaken a political candidate’s statutorily mandated right-to-reply column for the newspaper reversing its earlier criticism, the Supreme Court deemed the paper’s editorial judgment to be protected. See 418 U.S. at 244, 258. Nor was there a risk of consumer confusion in Turner: No reasonable person would have thought that the cable operator there endorsed every message conveyed by every speaker on every one of the channels it carried, and yet the Court stated categorically that the operator’s editorial discretion was protected. See 512 U.S. at 636–37. Moreover, it seems to us that the State’s confusion argument boomerangs back around on itself: If a platform announces a community standard prohibiting, say, hate speech, but is then barred from removing or even disclaiming posts containing what it perceives to be hate speech, there’s a real risk that a viewer might erroneously conclude that the platform doesn’t consider those posts to constitute hate speech.
The State’s final principle—that in order to receive First Amendment protection a platform must curate and present speech in such a way that a “common theme” emerges—is similarly flawed. Hurley held that “a private speaker does not forfeit constitutional protection simply by combining multifarious voices, or by failing to edit their themes to isolate an exact message as the exclusive subject matter of the speech.” 515 U.S. at 569–70; see FLFNB I, 901 F.3d at 1240 (citing Hurley for the proposition that a “particularized message” isn’t required for conduct to qualify for First Amendment protection). Moreover, even if one could theoretically attribute a common theme to a parade, Turner makes clear that no such theme is required: It seems to us inconceivable that one could ascribe a common theme to the cable operator’s choice there to carry hundreds of disparate channels, and yet the Court held that the First Amendment protected the operator’s editorial discretion….
In short, the State’s reliance on PruneYard and FAIR and its attempts to distinguish the editorial-judgment line of cases are unavailing.
How about the “common carrier” argument? Nope. Not at all.
The first version of the argument fails because, in point of fact, social-media platforms are not—in the nature of things, so to speak—common carriers. That is so for at least three reasons.
First, social-media platforms have never acted like common carriers. “[I]n the communications context,” common carriers are entities that “make a public offering to provide communications facilities whereby all members of the public who choose to employ such facilities may communicate or transmit intelligence of their own design and choosing”—they don’t “make individualized decisions, in particular cases, whether and on what terms to deal.” FCC v. Midwest Video Corp., 440 U.S. 689, 701 (1979) (cleaned up). While it’s true that social-media platforms generally hold themselves open to all members of the public, they require users, as preconditions of access, to accept their terms of service and abide by their community standards. In other words, Facebook is open to every individual if, but only if, she agrees not to transmit content that violates the company’s rules. Social-media users, accordingly, are not freely able to transmit messages “of their own design and choosing” because platforms make—and have always made—“individualized” content- and viewpoint-based decisions about whether to publish particular messages or users.
Second, Supreme Court precedent strongly suggests that internet companies like social-media platforms aren’t common carriers. While the Court has applied less stringent First Amendment scrutiny to television and radio broadcasters, the Turner Court cabined that approach to “broadcast” media because of its “unique physical limitations”—chiefly, the scarcity of broadcast frequencies. 512 U.S. at 637–39. Instead of “comparing cable operators to electricity providers, trucking companies, and railroads—all entities subject to traditional economic regulation”—the Turner Court “analogized the cable operators [in that case] to the publishers, pamphleteers, and bookstore owners traditionally protected by the First Amendment.” U.S. Telecom Ass’n v. FCC, 855 F.3d 381, 428 (D.C. Cir. 2017) (Kavanaugh, J., dissental); see Turner, 512 U.S. at 639. And indeed, the Court explicitly distinguished online from broadcast media in Reno v. American Civil Liberties Union, emphasizing that the “vast democratic forums of the Internet” have never been “subject to the type of government supervision and regulation that has attended the broadcast industry.” 521 U.S. 844, 868–69 (1997). These precedents demonstrate that social-media platforms should be treated more like cable operators, which retain their First Amendment right to exercise editorial discretion, than traditional common carriers.
Finally, Congress has distinguished internet companies from common carriers. The Telecommunications Act of 1996 explicitly differentiates “interactive computer services”—like social-media platforms—from “common carriers or telecommunications services.” See, e.g., 47 U.S.C. § 223(e)(6) (“Nothing in this section shall be construed to treat interactive computer services as common carriers or telecommunications carriers.”). And the Act goes on to provide protections for internet companies that are inconsistent with the traditional common-carrier obligation of indiscriminate service. In particular, it explicitly protects internet companies’ ability to restrict access to a plethora of material that they might consider “objectionable.” Id. § 230(c)(2)(A). Federal law’s recognition and protection of social-media platforms’ ability to discriminate among messages—disseminating some but not others—is strong evidence that they are not common carriers with diminished First Amendment rights.
Okay, but what if Florida just declares them to be common carriers? No, no, that’s not how any of this works either:
If social-media platforms are not common carriers either in fact or by law, the State is left to argue that it can force them to become common carriers, abrogating or diminishing the First Amendment rights that they currently possess and exercise. Neither law nor logic recognizes government authority to strip an entity of its First Amendment rights merely by labeling it a common carrier. Quite the contrary, if social-media platforms currently possess the First Amendment right to exercise editorial judgment, as we hold it is substantially likely they do, then any law infringing that right—even one bearing the terminology of “common carri[age]”—should be assessed under the same standards that apply to other laws burdening First-Amendment-protected activity. See Denver Area Educ. Telecomm. Consortium, Inc. v. FCC, 518 U.S. 727, 825 (1996) (Thomas, J., concurring in the judgment in part and dissenting in part) (“Labeling leased access a common carrier scheme has no real First Amendment consequences.”); Cablevision Sys. Corp. v. FCC, 597 F.3d 1306, 1321–22 (D.C. Cir. 2010) (Kavanaugh, J., dissenting) (explaining that because video programmers have a constitutional right to exercise editorial discretion, “the Government cannot compel [them] to operate like ‘dumb pipes’ or ‘common carriers’ that exercise no editorial control”); U.S. Telecom Ass’n, 855 F.3d at 434 (Kavanaugh, J., dissental) (“Can the Government really force Facebook and Google . . . to operate as common carriers?”).
Okay, then, what about the argument that these websites are somehow so important that the state can magically regulate speech on them? Lol, nope, says the court:
The State seems to argue that even if platforms aren’t currently common carriers, their market power and public importance might justify their “legislative designation . . . as common carriers.” Br. of Appellants at 36; see Knight, 141 S. Ct. at 1223 (Thomas, J., concurring) (noting that the Court has suggested that common-carrier regulations “may be justified, even for industries not historically recognized as common carriers, when a business . . . rises from private to be a public concern” (quotation marks omitted)). That might be true for an insurance or telegraph company, whose only concern is whether its “property” becomes “the means of rendering the service which has become of public interest.” Knight, 141 S. Ct. at 1223 (Thomas, J., concurring) (quoting German All. Ins. Co. v. Lewis, 233 U.S. 389, 408 (1914)). But the Supreme Court has squarely rejected the suggestion that a private company engaging in speech within the meaning of the First Amendment loses its constitutional rights just because it succeeds in the marketplace and hits it big. See Miami Herald, 418 U.S. at 251, 258.
In short, because social-media platforms exercise—and have historically exercised—inherently expressive editorial judgment, they aren’t common carriers, and a state law can’t force them to act as such unless it survives First Amendment scrutiny.
So many great quotes in all of this.
Anyway, once the court has made it clear that content moderation is protected by the 1st Amendment, that’s not the end of the analysis. There are some cases in which the state can still regulate protected activity, but first the law must survive heightened First Amendment scrutiny. And here, the court says, we’re not even close:
We’ll start with S.B. 7072’s content-moderation restrictions. While some of these provisions are likely subject to strict scrutiny, it is substantially likely that none survive even intermediate scrutiny. When a law is subject to intermediate scrutiny, the government must show that it “is narrowly drawn to further a substantial governmental interest . . . unrelated to the suppression of free speech.” FLFNB II, 11 F.4th at 1291. Narrow tailoring in this context means that the regulation must be “no greater than is essential to the furtherance of [the government’s] interest.” O’Brien, 391 U.S. at 377.
We think it substantially likely that S.B. 7072’s content-moderation restrictions do not further any substantial governmental interest—much less any compelling one. Indeed, the State’s briefing doesn’t even argue that these provisions can survive heightened scrutiny. (The State seems to have wagered pretty much everything on the argument that S.B. 7072’s provisions don’t trigger First Amendment scrutiny at all.) Nor can we discern any substantial or compelling interest that would justify the Act’s significant restrictions on platforms’ editorial judgment. We’ll briefly explain and reject two possibilities that the State might offer.
As for the argument that the state has to protect those poor, poor conservatives against “unfair” censorship, the court points out that’s not how this works:
The State might theoretically assert some interest in counteracting “unfair” private “censorship” that privileges some viewpoints over others on social-media platforms. See S.B. 7072 § 1(9). But a state “may not burden the speech of others in order to tilt public debate in a preferred direction,” Sorrell, 564 U.S. at 578–79, or “advance some points of view,” Pacific Gas, 475 U.S. at 20 (plurality op.). Put simply, there’s no legitimate—let alone substantial—governmental interest in leveling the expressive playing field. Nor is there a substantial governmental interest in enabling users—who, remember, have no vested right to a social-media account—to say whatever they want on privately owned platforms that would prefer to remove their posts: By preventing platforms from conducting content moderation—which, we’ve explained, is itself expressive First-Amendment-protected activity—S.B. 7072 “restrict[s] the speech of some elements of our society in order to enhance the relative voice of others”—a concept “wholly foreign to the First Amendment.” Buckley v. Valeo, 424 U.S. 1, 48–49 (1976). At the end of the day, preventing “unfair[ness]” to certain users or points of view isn’t a substantial governmental interest; rather, private actors have a First Amendment right to be “unfair”—which is to say, a right to have and express their own points of view. Miami Herald, 418 U.S. 258.
How about enabling more speech? That’s not the government’s job either:
The State might also assert an interest in “promoting the widespread dissemination of information from a multiplicity of sources.” Turner, 512 U.S. at 662. Just as the Turner Court held that the must-carry provisions served the government’s substantial interest in ensuring that American citizens were able to access their “local broadcasting outlets,” id. at 663–64, the State could argue that S.B. 7072 ensures that political candidates and journalistic enterprises are able to communicate with the public, see Fla. Stat. §§ 106.072(2); 501.2041(2)(f), (j). But it’s hard to imagine how the State could have a “substantial” interest in forcing large platforms—and only large platforms—to carry these parties’ speech: Unlike the situation in Turner, where cable operators had “bottleneck, or gatekeeper control over most programming delivered into subscribers’ homes,” 512 U.S. at 623, political candidates and large journalistic enterprises have numerous ways to communicate with the public besides any particular social-media platform that might prefer not to disseminate their speech—e.g., other more-permissive platforms, their own websites, email, TV, radio, etc. See Reno, 521 U.S. at 870 (noting that unlike the broadcast spectrum, “the internet can hardly be considered a ‘scarce’ expressive commodity” and that “[t]hrough the use of Web pages, mail exploders, and newsgroups, [any] individual can become a pamphleteer”). Even if other channels aren’t as effective as, say, Facebook, the State has no substantial (or even legitimate) interest in restricting platforms’ speech—the messages that platforms express when they remove content they find objectionable—to “enhance the relative voice” of certain candidates and journalistic enterprises. Buckley, 424 U.S. at 48–49
Another nice bit of language: the court says that the government can’t force websites to not use an algorithm to rank content (which is a big deal as many states are trying to do just that):
Finally, there is likely no governmental interest sufficient to justify forcing platforms to show content to users in a “sequential or chronological” order, see § 501.2041(2)(f), (g)—a requirement that would prevent platforms from expressing messages through post-prioritization and shadow banning.
Finally, there’s a great footnote that picks up on the problems we pointed out with regard to Texas’ law and the livestream of the mass murderer in Buffalo. The court recognizes how the same issue could arise under Florida’s law:
Even worse, S.B. 7072 would seemingly prohibit Facebook or Twitter from removing a video of a mass shooter’s killing spree if it happened to be reposted by an entity that qualifies for “journalistic enterprise” status.
And, that’s basically it. As noted up top, there are a few fairly minor provisions that the court says should not be subject to the injunction, and we’ll have another post on that shortly. But for now, this is a pretty big win for the 1st Amendment, actual free speech, and the rights of private companies to moderate as they see fit.
Hilariously, Florida is pretending it won the ruling, because of the few smaller provisions that are no longer subject to the injunction. But this is a near complete loss for the state, and a huge win for free speech.
Filed Under: 11th circuit, 1st amendment, common carrier, content moderation, editorial discretion, editorial judgment, florida, free speech, kevin newsom, sb 7072, supreme court
Companies: ccia, netchoice
Just How Incredibly Fucked Up Is Texas’ Social Media Content Moderation Law?
from the let-us-count-the-ways dept
So, I already had a quick post on the bizarre decision by the 5th Circuit to reinstate Texas’ social media content moderation law just two days after a bizarrely stupid hearing on it. However, I don’t think most people actually understand just how truly fucked up and obviously unconstitutional the law is. Indeed, there are so many obvious problems with it, I’m not even sure I can do them adequate justice in a single post. I’ve seen some people say that it’s easy to comply with, but that’s wrong. There is no possible way to comply with this bill. You can read the full law here, but let’s go through the details.
The law declares social media platforms as “common carriers” and this was a big part of the hearing on Monday, even though it’s not at all clear what that actually means and whether or not a state can just magically declare a website a common carrier (as we’ve explained, that’s not how any of this works). But, it’s mainly weird because it doesn’t really seem to mean anything under Texas law. The law could have been written entirely without declaring them “common carriers” and I’m not sure how it would matter.
The law applies to “social media platforms” that have more than 50 million US monthly average users (based on who’s counting? Dunno. The law doesn’t say), and limits it to websites where the primary purpose is users posting content to the site, not ones where things like comments and such are a secondary feature. It also excludes email and chat apps (though it’s unclear why). Such companies with over 50 million users in the US probably include the following as of today (via Daphne Keller’s recent Senate testimony): Facebook, YouTube, Tiktok, Snapchat, Wikipedia, and Pinterest are definitely covered. Likely, but not definitely, covered would be Twitter, LinkedIn, WordPress, Reddit, Yelp, TripAdvisor, and possibly Discord. Wouldn’t it be somewhat amusing if, after all of this, Twitter’s MAUs fell below the threshold?! Also possibly covered, though data is lacking: Glassdoor, Vimeo, Nextdoor, and Twitch.
And what would the law require of them? Well, mostly to get sued for every possible moderation decision. You only think I’m exaggerating. Litigator Ken White has a nice breakdown thread of how the law will encourage just an absolutely insane amount of wasteful litigation:
https://twitter.com/Popehat/status/1524535770425401344
As he notes, a key provision and the crux of the bill is this bizarre “anti-censorship” part:
CENSORSHIP PROHIBITED. (a) A social media platform may not censor a user, a user’s expression, or a user’s ability to receive the expression of another person based on: (1) the viewpoint of the user or another person; (2) the viewpoint represented in the user’s expression or another person’s expression; or (3) a user’s geographic location in this state or any part of this state. (b) This section applies regardless of whether the viewpoint is expressed on a social media platform or through any other medium.
So, let’s break this down. It says that a website cannot “censor” (by which it clearly means moderate) based on the user’s viewpoint or geographic location. And it applies even if that viewpoint doesn’t occur on the website.
What does that mean in practice? First, even if there is a good and justifiable reason for moderating the content — say it’s spam or harassment or inciting violence — that really doesn’t matter. The user can simply claim that it’s because of their viewpoints — even those expressed elsewhere — and force the company to fight it out in court. This is every spammer’s dream. Spammers would love to be able to force websites to accept their spam. And this law basically says that if you remove spam, the spammer can take you to court.
Indeed, nearly all of the moderation that websites like Twitter and Facebook do is, contrary to the opinion of ignorant ranters, not because of any “viewpoint” but because the posts break actual rules around harassment, abuse, spam, or the like.
The law does say that a site must clearly post its acceptable use policy, which lets supporters of this law flat out lie and claim that a site can still moderate as long as it follows its own policies. That’s not true. Because, again, all any aggrieved user has to do is claim the real reason was viewpoint discrimination, and the litigation is on.
And let me tell you something about aggrieved users: they always insist that any moderation, no matter how reasonable, is because of their viewpoint. Always. And this is especially true of malicious actors and trolls, who are in the game of trolling just to annoy in the first place. If they can take that up a notch and drag companies into court as well? I mean, the only thing stopping them will be the cost, but you already know that a cottage industry of lawyers willing to file these cases is going to pop up. I wouldn’t even be surprised if cases start getting filed today.
And, as Ken notes in his thread, the law seems deliberately designed to force as much frivolous litigation on these companies as possible. It says that even if one local court has rejected these lawsuits or blocked the Attorney General from enforcing the law, users can still sue in other districts. In other words, forum shopping is encouraged. The law also bars nonmutual claim and issue preclusion, meaning that even if a court rules that these claims are bogus, each new claim must be judged anew. Again, this seems uniquely designed to force these companies into court over and over and over again.
I haven’t even gotten to the bit that says you can’t “censor” based on geographic location. That portion can basically be read as forcing social media companies to stay in Texas. Because if you block all of your Texas users, they can all sue you, claiming that you’re “censoring” them based on their geographic location.
So, yeah, here you have the “free market” GOP passing a law that effectively says that social media companies (1) have to operate in Texas and (2) have to be sued over every moderation decision they make, even if it’s in response to clear policy violations.
Making it even more fun, the law forbids any waivers, so social media companies can’t just put a new thing in their terms of service saying that you waive your rights to bring a claim under this law. They really, really, really just want to flood every major social media website with a ton of purely frivolous and vexatious litigation. The party that used to decry trial lawyers just made sure that Texas has full employment for trial lawyers.
And that’s not all that this law does. That’s just the part about “censorship.”
There’s the whole transparency bit, requiring that a website “disclose accurate information regarding its content management, data management, and business practices.” That certainly raises issues about trade secrets, general security, and more. But it’s also going to effectively require that websites publish all the details that spammers, trolls, and others need to be more effective.
The covered companies will also have to keep a tally of every form of moderation and post it in a transparency report. So, every time a spam posting is removed, that removal will need to be tracked and recorded. The same goes for any time content is “deprioritized.” What does that mean? All of these companies recommend content via algorithms, meaning that some stuff is prioritized and some stuff is not. I don’t care to see when people I follow tweet about football, because I don’t watch football. But it appears that if the algorithm learns that about me and chooses to deprioritize football tweets just for me, the company will need to include that in its transparency report.
Now, multiply that by every user, and every possible interaction. I think you could argue that these sites “deprioritize” content billions of times a day just by the natural functioning of the algorithm. How the hell do you track all the content you don’t show someone?!
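To get a feel for the scale, here’s a back-of-envelope sketch. Every number in it is a hypothetical assumption for illustration, not real platform data:

```python
# Back-of-envelope: if every post pushed out of a user's ranked feed counts
# as a "deprioritization" that must be logged, how many log entries does a
# large platform generate per day? All inputs below are hypothetical.

def deprioritization_events(users, refreshes_per_day, candidates_ranked, items_shown):
    """Each feed refresh ranks `candidates_ranked` posts but shows only the
    top `items_shown`; everything else is, in effect, deprioritized."""
    hidden_per_refresh = candidates_ranked - items_shown
    return users * refreshes_per_day * hidden_per_refresh

# Assume 50M users, 10 feed refreshes a day, 500 candidate posts ranked per
# refresh, and 20 actually shown:
events = deprioritization_events(50_000_000, 10, 500, 20)
print(f"{events:,} loggable events per day")  # 240,000,000,000
```

Even with these fairly conservative assumptions, that’s hundreds of billions of records a day of “content you didn’t show someone,” which is exactly the absurdity the transparency mandate runs into.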
The law also requires detailed, impossible complaint procedures, including a full tracking system for anyone who files a complaint. That’s required as of last night. So best of luck to every single covered platform, none of which has this technology in place.
It also requires that if the website is alerted to illegal content, it has to determine whether or not the content is actually illegal within 48 hours. I’ll just note that, in most cases, even law enforcement isn’t that quick, and then there’s the whole judicial process that can take years to determine if something is illegal. Yet websites are given 48 hours?
Hilariously, the law says that you don’t have to give a user the opportunity to appeal if the platform “knows that the potentially policy-violating content relates to an ongoing law enforcement investigation.” Except, won’t this kind of tip people off? Your content gets taken down, but the site doesn’t give you the opportunity to appeal… Well, the only exemption there is if you’re subject to an ongoing law enforcement investigation, so I guess you now know there is one, because the law says that’s the only reason they can refuse to take your appeal. Great work there, Texas.
The appeal must be decided within 14 days, which sure sounds good if you have no fucking clue how long some of these reviews might take — especially once the system is flooded with the appeals required under this law.
And, that’s not all. Remember last week when I was joking about how Republicans wanted to make sure your inboxes were filled with spam? I had forgotten about the provision in this law that makes a lot of spam filtering a violation of the law. I only wish I was joking. For unclear reasons, the law also amends Texas’ existing anti-spam law. It added (and it’s already live in the law) a section saying the following:
Sec. 321.054. IMPEDING ELECTRONIC MAIL MESSAGES PROHIBITED. An electronic mail service provider may not intentionally impede the transmission of another person’s electronic mail message based on the content of the message unless:
(1) the provider is authorized to block the transmission under Section 321.114 or other applicable state or federal law; or
(2) the provider has a good faith, reasonable belief that the message contains malicious computer code, obscene material, material depicting sexual conduct, or material that violates other law.
So that literally says you can only “impede” email if it contains malicious code, obscene material, sexual content, or material that violates other laws. Now, the reference to 321.114 alleviates some of this, since that section gives services (I kid you not) “qualified immunity” for blocking certain commercial email messages, but only under certain conditions, including enabling a dispute resolution process for spammers.
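Since the statute boils down to an enumerated-exceptions test, a minimal sketch can make its structure concrete. This is an illustrative model only — the category names are my own shorthand, not statutory text, and it is obviously not legal advice:

```python
# Illustrative model of the decision structure Sec. 321.054 appears to
# impose on an email provider deciding whether it may "impede" a message.
# Category names below are shorthand assumptions, not statutory language.

ALLOWED_REASONS = {
    "authorized_under_321_114_or_other_law",  # Sec. 321.054(1)
    "malicious_code",                         # Sec. 321.054(2)
    "obscene_material",
    "sexual_conduct",
    "violates_other_law",
}

def may_impede(reasons):
    """A content-based block is permitted only if at least one of the
    provider's reasons falls within the enumerated exceptions."""
    return bool(set(reasons) & ALLOWED_REASONS)

# Ordinary bulk spam that fits none of the exceptions could not be blocked:
print(may_impede({"bulk_commercial_spam"}))  # False
print(may_impede({"malicious_code"}))        # True
```

The point of the sketch: garden-variety spam that fits none of the exceptions has no permitted basis for a content-based block, which is exactly the problem described above.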
There are many more problems with this law, but I am perplexed at how anyone could possibly think this is either workable or Constitutional. It’s neither. The only proper thing to do would be to shut down in Texas, but again the law treats that as a violation itself. What an utter monstrosity.
And, yes, I know, very very clueless people will comment here about how we’re just mad that we can’t “censor” people any more (even though it’s got nothing to do with me or censoring). But can you at least try to address some of the points raised above and explain how any of these services can actually operate without getting sued out of existence, or allowing all garbage all the time to fill the site?
Filed Under: 1st amendment, appeals, common carrier, content moderation, editorial discretion, email, free speech, hb20, litigation, social media, texas, transparency, viewpoint discrimination
The 5th Circuit Reinstates Texas’ Obviously Unconstitutional Social Media Law Effective Immediately
from the what-a-clusterfuck dept
Florida and Texas both passed blatantly unconstitutional laws limiting the ability of social media websites to moderate. Lawsuits were filed challenging both laws. In both cases, the district courts correctly blocked the laws from going into effect, noting that it was obviously a 1st Amendment violation to tell websites how they could and could not moderate. Both states appealed. A few weeks back there was a hearing in the 11th Circuit over the Florida law, where it became quite clear that the judges seemed to grasp the issues, and had lots of really tough questions for Florida’s lawyers. However, they have not issued an actual ruling yet.
On Monday of this week, the notoriously-bad-about-everything 5th Circuit heard Texas’s appeal on its law, and the hearing went sideways from the very beginning, with one of the judges even trying to argue that Twitter wasn’t a website. That was only the tip of the iceberg of misunderstanding that the three-judge panel displayed, confusing a number of issues around free speech, common carriers, private property and more. Based on the hearing, it seemed likely that the court was going to make a huge mess of things, but even then, it would be normal to take a few months to think about it, and maybe (hopefully?) reread the briefings. Also, standard practice would be to release a ruling and allow a nominal period in which to file some sort of appeal. Instead, late Wednesday, the court just reinstated the law with no explanation at all.
An opinion is likely to follow at some point, but the whole setup of everything is bizarre and not very clear at all. The only bit of info provided is that the panel was not unanimous, suggesting that Judge Southwick, who seemed to have a better grasp of the matter than his two colleagues, probably went the other way.
So… what does this mean? Well, Texas is now a mess for any social media company. Operating in Texas and daring to do something as basic as stopping harassment and abuse on your platform now opens you up to significant litigation and potential fines. The law strips away editorial discretion, the right to cultivate your own community, and much, much more that is fundamentally necessary to running a website with 3rd party content. I’ll have a second post later today exploring the many, many ways in which this law is effectively impossible to comply with.
I am positive that every decently sized social media company had to talk to its lawyers Wednesday evening to assess whether or not it makes sense to block access to everyone in Texas (even though some of the language in the bill suggests that the law requires companies to operate in Texas). Some may decide to open the floodgates of hate, harassment, and abuse and say “well, this is what you required.” Even that won’t keep them from getting sued.
For what it’s worth, Trump’s own website, Truth Social, has moderation practices that clearly run afoul of this law, and it’s only protected to the extent that it still has fewer than 50 million monthly users.
It would be nice if the 11th Circuit came out with their ruling going the opposite way, and did so in a clear and reasoned fashion, setting up a circuit split that the Supreme Court could review. But that seems unlikely. I’ve been told that the judges on the 11th Circuit panel are famous for their excessively slow writing of opinions. The tech companies could seek an en banc review from the entire 5th Circuit, though much of the 5th Circuit is ridiculous and I’m not convinced it would help at all. There could be an attempt to appeal immediately to the Supreme Court’s shadow docket, but that’s also fundamentally an unknown arena right now.
So, in summary, Texas is fucked. Social media in Texas is now a risky proposition. And whether or not the companies continue to operate in Texas, the floodgates have been opened for ridiculous lawsuits. If you thought patent troll lawsuits in Texas created an entire industry unto themselves, you haven’t seen anything yet.
Filed Under: 1st amendment, 5th circuit, common carrier, content moderation, discrimination, free speech, hb20, liability, social media, texas