editorial discretion – Techdirt
Trump’s Latest Lawsuit Against CBS Proves He’s No Free Speech Champion
from the that's-not-how-free-speech-works dept
As we head into the election tomorrow, there has been some general talk about how many people think that Donald Trump is somehow better on things like free speech and the economy. It’s pretty clear that this is wrong. On the economy, it’s evident he has no clue what he’s talking about, and his plans for both tariffs and deportations would massively tank the US economy.
But the free speech claims are even more bizarre. During his first presidential campaign, he threatened to “open up” our libel laws to make it easier to sue. And while he was unable to do that, it hasn’t stopped him from regularly suing the media over its speech in a series of SLAPP suits designed to silence and suppress that speech, while frightening others away from speaking up against him in any way.
Late last week, Donald Trump filed another one of his anti-free speech lawsuits, and this one is way crazier than the others. First of all, this one has him directly as the plaintiff (some of the past ones were filed on behalf of his campaign). But this one isn’t even about what a media property said about Donald Trump. No, he’s suing CBS claiming that the way it edited a 60 Minutes interview with Kamala Harris violates the Texas Deceptive Trade Practices Act (DTPA).
This action concerns CBS’s partisan and unlawful acts of election and voter interference through malicious, deceptive, and substantial news distortion calculated to (a) confuse, deceive, and mislead the public, and (b) attempt to tip the scales in favor of the Democratic Party as the heated 2024 Presidential Election—which President Trump is leading— approaches its conclusion, in violation of Tex. Bus. & Comm. Code § 17.46(a), which subjects “[f]alse, misleading, or deceptive acts or practices in the conduct of any trade or commerce” to suit under Tex. Bus. & Comm. Code §17.50(a)(1). See Texas Deceptive Trade Practices-Consumer Protection Act (the “DTPA”), Tex. Bus. & Comm. Code § 17.41 et seq.
This is one of the most blatant attacks on free speech rights and the First Amendment I’ve seen in a while. It’s literally saying that he can sue a news organization if he doesn’t like how they edit a story. Editorial discretion is among the very clearly protected rights of a news organization under the First Amendment.
Trump’s entire argument is that when Harris appeared on 60 Minutes (which also invited Trump, though he skipped out on it after initially agreeing), they edited one of her answers to make it shorter. But, um, that’s what they always do? In an edited “magazine style” TV show, as Trump well knows having done a bunch of these, they talk to you for much longer than they have time to air, and then they air only portions of both the questions and the answers.
Indeed, it’s easy to show that if anyone has benefited from the media’s willingness to take rambling, incoherent answers and make them sound normal, it’s Donald Trump. The media does this to him nearly every single day.
Anyway, if we’re talking about word salad, here’s Donald Trump last night appearing to practically fall asleep mid-sentence talking about how “a whistleblower released the information on the 18 on the 800,000 [pause] cobs plus [longer pause]. The whistleblower said, you know, there were not 800,000 and 18,000, you add ’em upissst, and then you add 100 and think of it. 112,000 jobs.”
In the case of CBS, it aired the shorter, more concise version of Harris’s answer on 60 Minutes, leaving out some of the explanatory rambling before getting to the details. Earlier, on Face the Nation, CBS had played a longer clip that included some explanatory language that wasn’t the “word salad” the complaint claims it was (yes, the complaint directly calls it a “word salad”), though it was perhaps not particularly eloquent.
In both versions of the Interview (the “October 5 Version” and the “October 6 Version”), Whitaker asks Kamala about Israeli Prime Minister Benjamin Netanyahu. Whitaker says to Kamala: “But it seems that Prime Minister Netanyahu is not listening.”
In the October 5 Version, aired on the CBS Sunday morning news show Face the Nation, Kamala replies to Whitaker with her typical word salad: “Well, Bill, the work that we have done has resulted in several movements in that region by Israel that were very much prompted by or a result of many things, including our advocacy for what needs to happen in the region.”
In the October 6 Version, aired on CBS’s 60 Minutes, Kamala appears to reply to Whitaker with a completely different, more succinct answer: “We are not gonna [sic] stop pursuing what is necessary for the United States to be clear about where we stand on the need for this war to end.”
Incredibly, the complaint itself includes Trump ranting about all of this, kinda highlighting how he is way more prone to “word salad” than his opponent.
As President Trump stated, and as made crystal clear in the video he referenced and attached, “A giant Fake News Scam by CBS & 60 Minutes. Her REAL ANSWER WAS CRAZY, OR DUMB, so they actually REPLACED it with another answer in order to save her or, at least, make her look better. A FAKE NEWS SCAM, which is totally illegal. TAKE AWAY THE CBS LICENSE. Election Interference. She is a Moron, and the Fake News Media wants to hide that fact. An UNPRECEDENTED SCANDAL!!! The Dems got them to do this and should be forced to concede the Election? WOW!”). See President Donald J. Trump, TRUTH SOCIAL (Oct. 10, 2024)
Yes, somehow Trump’s lawyers think this makes him look good. They also seem to think that referring to Trump as “President Trump” but referring to Vice President Harris as “Kamala” makes this look like a serious case.
It is not. It is clearly an attack on basic First Amendment rights and free speech law. It is an attack on the editorial discretion of CBS, the very same editorial discretion that Trump regularly benefits from.
He is attacking the First Amendment by using bogus lawsuits to go after those in the media who don’t portray things the way he wants them portrayed. That is a fundamental attack on free speech.
And, as Eugene Volokh explains, these issues have been covered before in court, in the context of “false” statements. This case isn’t even about false statements, just Trump not liking how an interview was edited.
That said, Trump filed this case in the Northern District of Texas, Amarillo Division, guaranteeing that Judge Matthew Kacsmaryk will hear it. Kacsmaryk, who sits in a single-judge division, is considered one of a small group of “the worst judges in America,” known as the go-to judge for Trumpists looking to “rubber-stamp their looniest ideas.”
There is no reason for this case to be in Texas, as Trump is a Florida resident, and the two CBS organizations he is suing are based in New York and Delaware. By any sane measure, the case would be tossed on jurisdiction alone.
I’ve also seen some people argue that RFK Jr.’s embrace by Trump is again about “free speech,” but that is similarly nonsense.
RFK Jr. has been filing a ton of bogus lawsuits over private entities’ editorial discretion and is now backing up Trump in arguing that CBS should “lose its license.”
This is a pretty incredible thing for RFK Jr. to be saying, considering at this very moment he is suing the Biden administration, falsely claiming that they made Facebook block his anti-vax nonsense.
So, if you’re following RFK Jr.’s logic: it’s an obvious First Amendment violation that Facebook blocked his anti-vax statements, which violated its own policies, because the White House also agreed that RFK’s anti-vax claims were dangerous. But it’s not a First Amendment violation for Donald Trump to “pull CBS’s license” over how it edited an interview?
The only “principle” here is “it’s not okay if it happens to me, but it’s totally okay if we do it when we’re in power.”
That’s not about principled free speech.
And that’s not even getting into how little either Trump or RFK Jr. understand how this works. CBS doesn’t have “a license” to pull. Affiliate stations have broadcast spectrum licenses, and the government isn’t supposed to punish them based on what they cover or how. Yes, CBS has a small number (15 across the country) of affiliates that are “owned and operated” by the company, but the vast majority (236) are owned by other entities. So even the idea of “pulling CBS’s license” makes no logical sense.
But, either way, as we head into election day, the idea that Donald Trump is a free speech supporter is literally backwards. He’s spent years suing people for their speech, and now he’s even doing it in response to editorial discretion he dislikes. Donald Trump has no conception of free speech. He only supports speech he likes, and he is eager to punish any speech he dislikes.
Filed Under: 1st amendment, 60 minutes, deceptive practices, donald trump, editorial discretion, free speech, kamala harris, matthew kacsmaryk, rfk jr., texas
Companies: cbs
In Content Moderation Cases, Supreme Court Says ‘Try Again’ – But Makes It Clear Moderation Deserves First Amendment Protections
from the mostly-good-news dept
Today, the Supreme Court made it pretty clear that websites have First Amendment rights to do content moderation as they see fit, but decided to send the cases challenging laws in Florida and Texas back to the lower courts to be litigated properly, effectively criticizing the litigation posture of the trade groups, NetChoice and CCIA, which brought the challenges in the first place. However, in doing so, the majority was also pretty explicit that the Fifth Circuit got everything wrong all over again.
The Supreme Court waited until the very last day of the term to finally release its decisions in the cases regarding Florida’s and Texas’s social media moderation laws. I’m not going to go through a full history of either, as we’ve covered them in detail in the past, but both laws sought to place restrictions on how social media companies could moderate content in certain circumstances (generally political). The question at the heart of both cases was whether governments could compel private websites to host speech that those websites didn’t wish to host (i.e., speech that violated their terms of service).
Both district courts rejected that premise as obviously unconstitutional. The appeals courts split, however. The 11th Circuit agreed that the law was mostly unconstitutional (though it allowed one problematic provision on transparency to continue). The 5th Circuit went rogue, upending a century’s worth of First Amendment law to say of course Texas has a right to compel websites to host speech that violates their rules.
The Supreme Court took its sweet time in dealing with this case, and now sends both cases back to the lower courts, saying that everyone did the analysis wrong: specifically, by assuming the laws only applied to social media sites like Facebook and YouTube, when the reality is that they probably apply to lots of other sites as well, and need to be analyzed on that basis.
The overall opinion on that point was 9-0, but there’s a bit of messiness in the rest, with some concurrences in parts. Alito, Thomas, and Gorsuch concurred only with the bottom line that the cases were decided on the wrong basis, insisting that the rest of the majority opinion, written by Justice Kagan, is unnecessary dicta that has no impact.
And while that may technically be true, that dicta makes some pretty strong and important points regarding the First Amendment rights of private platforms to moderate as they see fit. Meanwhile, Alito’s concurrence seems to disagree with his own dissent in the Murthy case from just last week.
Here’s a relatively quick analysis of the decision, and I’m sure we’ll have deeper, more nuanced analyses going forward.
Kagan starts off the majority opinion by citing back to the Reno v. ACLU case, which tossed out the Communications Decency Act (but not Section 230) as unconstitutional, and established some basic principles regarding how the First Amendment applies to the internet. And while the opinion notes that the internet has changed a lot, the First Amendment still applies:
But courts still have a necessary role in protecting those entities’ rights of speech, as courts have historically protected traditional media’s rights. To the extent that social media platforms create expressive products, they receive the First Amendment’s protection. And although these cases are here in a preliminary posture, the current record suggests that some platforms, in at least some functions, are indeed engaged in expression. In constructing certain feeds, those platforms make choices about what third-party speech to display and how to display it. They include and exclude, organize and prioritize—and in making millions of those decisions each day, produce their own distinctive compilations of expression. And while much about social media is new, the essence of that project is something this Court has seen before. Traditional publishers and editors also select and shape other parties’ expression into their own curated speech products. And we have repeatedly held that laws curtailing their editorial choices must meet the First Amendment’s requirements. The principle does not change because the curated compilation has gone from the physical to the virtual world. In the latter, as in the former, government efforts to alter an edited compilation of third-party expression are subject to judicial review for compliance with the First Amendment.
But, in the end, the cases are sent back on somewhat technical grounds, because the courts should have reviewed the “facial nature” of the challenge. This was the issue that came up a lot during oral arguments. In short: was the challenge to the law itself (facial), or to how it was applied (as applied)? And the majority basically says that rather than spending so much time talking about what it would mean if the law were applied to social media sites specifically, the courts should have taken a step back to look at the entire law and whether or not it was constitutional at all.
Today, we vacate both decisions for reasons separate from the First Amendment merits, because neither Court of Appeals properly considered the facial nature of NetChoice’s challenge. The courts mainly addressed what the parties had focused on. And the parties mainly argued these cases as if the laws applied only to the curated feeds offered by the largest and most paradigmatic social-media platforms—as if, say, each case presented an as-applied challenge brought by Facebook protesting its loss of control over the content of its News Feed. But argument in this Court revealed that the laws might apply to, and differently affect, other kinds of websites and apps. In a facial challenge, that could well matter, even when the challenge is brought under the First Amendment. As explained below, the question in such a case is whether a law’s unconstitutional applications are substantial compared to its constitutional ones. To make that judgment, a court must determine a law’s full set of applications, evaluate which are constitutional and which are not, and compare the one to the other. Neither court performed that necessary inquiry.
In effect, this means that the underlying issues in this case are almost certainly going to come right back to the Supreme Court in another year or two. But still, Kagan makes it pretty clear that there are lots of elements in these laws that appear to attack the First Amendment rights of websites. In setting forth “the relevant constitutional principles,” the majority makes it pretty clear that the Fifth Circuit’s total nuttiness concerns the court.
Contrary to what the Fifth Circuit thought, the current record indicates that the Texas law does regulate speech when applied in the way the parties focused on below—when applied, that is, to prevent Facebook (or YouTube) from using its content-moderation standards to remove, alter, organize, prioritize, or disclaim posts in its News Feed (or homepage). The law then prevents exactly the kind of editorial judgments this Court has previously held to receive First Amendment protection. It prevents a platform from compiling the third-party speech it wants in the way it wants, and thus from offering the expressive product that most reflects its own views and priorities. Still more, the law—again, in that specific application—is unlikely to withstand First Amendment scrutiny. Texas has thus far justified the law as necessary to balance the mix of speech on Facebook’s News Feed and similar platforms; and the record reflects that Texas officials passed it because they thought those feeds skewed against politically conservative voices. But this Court has many times held, in many contexts, that it is no job for government to decide what counts as the right balance of private expression—to “un-bias” what it thinks biased, rather than to leave such judgments to speakers and their audiences. That principle works for social-media platforms as it does for others.
The majority’s concern, then, is really with how the case was litigated: it was brought as a facial challenge to the law itself, but litigated as if it were an as-applied challenge. And that meant the record was incomplete for a full facial challenge.
The parties have not briefed the critical issues here, and the record is underdeveloped. So we vacate the decisions below and remand these cases. That will enable the lower courts to consider the scope of the laws’ applications, and weigh the unconstitutional as against the constitutional ones.
But, again and again, the decision still makes it pretty clear that six out of the nine Justices appear to recognize just how crazy these laws are, and just how wrong the Fifth Circuit was in deciding that the law in Texas was just peachy.
But it is necessary to say more about how the First Amendment relates to the laws’ content-moderation provisions, to ensure that the facial analysis proceeds on the right path in the courts below. That need is especially stark for the Fifth Circuit. Recall that it held that the content choices the major platforms make for their main feeds are “not speech” at all, so States may regulate them free of the First Amendment’s restraints. 49 F. 4th, at 494; see supra, at 8. And even if those activities were expressive, the court held, Texas’s interest in better balancing the marketplace of ideas would satisfy First Amendment scrutiny. See 49 F. 4th, at 482. If we said nothing about those views, the court presumably would repeat them when it next considers NetChoice’s challenge. It would thus find that significant applications of the Texas law—and so significant inputs into the appropriate facial analysis—raise no First Amendment difficulties. But that conclusion would rest on a serious misunderstanding of First Amendment precedent and principle. The Fifth Circuit was wrong in concluding that Texas’s restrictions on the platforms’ selection, ordering, and labeling of third-party posts do not interfere with expression. And the court was wrong to treat as valid Texas’s interest in changing the content of the platforms’ feeds. Explaining why that is so will prevent the Fifth Circuit from repeating its errors as to Facebook’s and YouTube’s main feeds. (And our analysis of Texas’s law may also aid the Eleventh Circuit, which saw the First Amendment issues much as we do, when next considering NetChoice’s facial challenge.) But a caveat: Nothing said here addresses any of the laws’ other applications, which may or may not share the First Amendment problems described below
The majority opinion, rightly, points to the important Miami Herald v. Tornillo case, which held that newspapers have the right to decide not to publish someone’s political views if they so choose. Much of the debate in all of the cases around these laws was whether websites were more like newspapers, in which case the Miami Herald ruling would apply, or more like telephone lines, in which case common carrier rules could apply. The majority pointing to Miami Herald suggests the Justices realize (correctly) how the First Amendment works here.
The seminal case is Miami Herald Publishing Co. v. Tornillo, 418 U. S. 241 (1974). There, a Florida law required a newspaper to give a political candidate a right to reply when it published “criticism and attacks on his record.” Id., at 243. The Court held the law to violate the First Amendment because it interfered with the newspaper’s “exercise of editorial control and judgment.” Id., at 258. Forcing the paper to print what “it would not otherwise print,” the Court explained, “intru[ded] into the function of editors.” Id., at 256, 258. For that function was, first and foremost, to make decisions about the “content of the paper” and “[t]he choice of material to go into” it. Id., at 258. In protecting that right of editorial control, the Court recognized a possible downside. It noted the access advocates’ view (similar to the States’ view here) that “modern media empires” had gained ever greater capacity to “shape” and even “manipulate popular opinion.” Id., at 249–250. And the Court expressed some sympathy with that diagnosis. See id., at 254. But the cure proposed, it concluded, collided with the First Amendment’s antipathy to state manipulation of the speech market. Florida, the Court explained, could not substitute “governmental regulation” for the “crucial process” of editorial choice.
The fact that social media shows most content and only limits a very small amount doesn’t change the First Amendment analysis from the Miami Herald case (despite what some nonsense peddlers insisted):
That those platforms happily convey the lion’s share of posts submitted to them makes no significant First Amendment difference. Contra, 49 F. 4th, at 459–461 (arguing otherwise). To begin with, Facebook and YouTube exclude (not to mention, label or demote) lots of content from their News Feed and homepage. The Community Standards and Community Guidelines set out in copious detail the varied kinds of speech the platforms want no truck with. And both platforms appear to put those manuals to work. In a single quarter of 2021, Facebook removed from its News Feed more than 25 million pieces of “hate speech content” and almost 9 million pieces of “bullying and harassment content.” App. in No. 22–555, at 80a. Similarly, YouTube deleted in one quarter more than 6 million videos violating its Guidelines. See id., at 116a. And among those are the removals the Texas law targets. What is more, this Court has already rightly declined to focus on the ratio of rejected to accepted content.
And, yes, the decision notes, users may attribute views to the platforms themselves based on what they allow or disallow:
Similarly, the major social-media platforms do not lose their First Amendment protection just because no one will wrongly attribute to them the views in an individual post. Contra, 49 F. 4th, at 462 (arguing otherwise). For starters, users may well attribute to the platforms the messages that the posts convey in toto. Those messages—communicated by the feeds as a whole—derive largely from the platforms’ editorial decisions about which posts to remove, label, or demote. And because that is so, the platforms may indeed “own” the overall speech environment. In any event, this Court has never hinged a compiler’s First Amendment protection on the risk of misattribution. The Court did not think in Turner—and could not have thought in Tornillo or PG&E—that anyone would view the entity conveying the third-party speech at issue as endorsing its content.
As for the two favorite cases of those pushing these laws, Pruneyard (about a shopping mall) and FAIR (about allowing military recruiters on campus), the Court notes that the entities involved in both were not engaged in expression by nature, as opposed to social media, which is expressive.
To be sure, the Court noted in PruneYard and FAIR, when denying such protection, that there was little prospect of misattribution. See 447 U. S., at 87; 547 U. S., at 65. But the key fact in those cases, as noted above, was that the host of the third party speech was not itself engaged in expression. See supra, at 16–17. The current record suggests the opposite as to Facebook’s News Feed and YouTube’s homepage. When the platforms use their Standards and Guidelines to decide which third-party content those feeds will display, or how the display will be ordered and organized, they are making expressive choices. And because that is true, they receive First Amendment protection.
Even more interesting: the Court notes that the Texas law almost certainly couldn’t even survive lower levels of First Amendment scrutiny because the entire point of the law is to suppress free speech.
In the usual First Amendment case, we must decide whether to apply strict or intermediate scrutiny. But here we need not. Even assuming that the less stringent form of First Amendment review applies, Texas’s law does not pass. Under that standard, a law must further a “substantial governmental interest” that is “unrelated to the suppression of free expression.” United States v. O’Brien, 391 U. S. 367, 377 (1968). Many possible interests relating to social media can meet that test; nothing said here puts regulation of NetChoice’s members off-limits as to a whole array of subjects. But the interest Texas has asserted cannot carry the day: It is very much related to the suppression of free expression, and it is not valid, let alone substantial.
Indeed, the statements from the Texas politicians who pushed the law undermine it pretty clearly:
Texas has never been shy, and always been consistent, about its interest: The objective is to correct the mix of speech that the major social-media platforms present. In this Court, Texas described its law as “respond[ing]” to the platforms’ practice of “favoring certain viewpoints.” Brief for Texas 7; see id., at 27 (explaining that the platforms’ “discrimination” among messages “led to [the law’s] enactment”). The large social-media platforms throw out (or encumber) certain messages; Texas wants them kept in (and free from encumbrances), because it thinks that would create a better speech balance. The current amalgam, the State explained in earlier briefing, was “skewed” to one side. 573 F. Supp. 3d, at 1116. And that assessment mirrored the stated views of those who enacted the law, save that the latter had a bit more color. The law’s main sponsor explained that the “West Coast oligarchs” who ran social media companies were “silenc[ing] conservative viewpoints and ideas.” Ibid. The Governor, in signing the legislation, echoed the point: The companies were fomenting a “dangerous movement” to “silence” conservatives. Id., at 1108; see id., at 1099 (“[S]ilencing conservative views is unAmerican, it’s un-Texan and it’s about to be illegal in Texas”).
But a State may not interfere with private actors’ speech to advance its own vision of ideological balance. States (and their citizens) are of course right to want an expressive realm in which the public has access to a wide range of views. That is, indeed, a fundamental aim of the First Amendment. But the way the First Amendment achieves that goal is by preventing the government from “tilt[ing] public debate in a preferred direction.” Sorrell v. IMS Health Inc., 564 U. S. 552, 578–579 (2011). It is not by licensing the government to stop private actors from speaking as they wish and preferring some views over others. And that is so even when those actors possess “enviable vehicle[s]” for expression. Hurley, 515 U. S., at 577. In a better world, there would be fewer inequities in speech opportunities; and the government can take many steps to bring that world closer. But it cannot prohibit speech to improve or better balance the speech market.
And Texas can’t do that:
They cannot prohibit private actors from expressing certain views. When Texas uses that language, it is to say what private actors cannot do: They cannot decide for themselves what views to convey. The innocent-sounding phrase does not redeem the prohibited goal. The reason Texas is regulating the content moderation policies that the major platforms use for their feeds is to change the speech that will be displayed there. Texas does not like the way those platforms are selecting and moderating content, and wants them to create a different expressive product, communicating different values and priorities. But under the First Amendment, that is a preference Texas may not impose.
And thus, while the Court is sending the cases back to the lower courts to be reviewed correctly under the necessary standards for a facial challenge, it makes clear that the Fifth Circuit really fucked up its analysis, at least as to how social media functions:
But there has been enough litigation already to know that the Fifth Circuit, if it stayed the course, would get wrong at least one significant input into the facial analysis. The parties treated Facebook’s News Feed and YouTube’s homepage as the heartland applications of the Texas law. At least on the current record, the editorial judgments influencing the content of those feeds are, contrary to the Fifth Circuit’s view, protected expressive activity. And Texas may not interfere with those judgments simply because it would prefer a different mix of messages. How that matters for the requisite facial analysis is for the Fifth Circuit to decide. But it should conduct that analysis in keeping with two First Amendment precepts. First, presenting a curated and “edited compilation of [third party] speech” is itself protected speech. Hurley, 515 U. S., at 570. And second, a State “cannot advance some points of view by burdening the expression of others.” PG&E, 475 U. S., at 20. To give government that power is to enable it to control the expression of ideas, promoting those it favors and suppressing those it does not. And that is what the First Amendment protects all of us from.
Of the concurrences, Justice Barrett leans harder on the idea that NetChoice should have brought an “as applied” challenge rather than a facial challenge. Justice Jackson also seems to feel that the litigation and the lower courts went further in their analysis than what was actually being challenged required.
Justice Thomas wrote a concurrence agreeing with the underlying decision, but then whines for many pages about the rest of the majority’s analysis regarding the Fifth Circuit, saying that it is a waste of time and that it’s too early to be deciding these issues. He goes on to slam other Supreme Court decisions for being too broad as well. And, just to show how wrong he is, he starts talking about “common carriers,” something even the final Fifth Circuit ruling wouldn’t fully endorse.
Justice Alito wrote a similar concurrence (which Thomas and Gorsuch signed onto), basically saying that they agree only that the cases should be sent back to the courts below to be evaluated as facial challenges, and that everything else in the majority decision is useless nonsense:
The holding in these cases is narrow: NetChoice failed to prove that the Florida and Texas laws they challenged are facially unconstitutional. Everything else in the opinion of the Court is nonbinding dicta.
I agree with the bottom line of the majority’s central holding. But its description of the Florida and Texas laws, as well as the litigation that shaped the question before us, leaves much to be desired. Its summary of our legal precedents is incomplete. And its broader ambition of providing guidance on whether one part of the Texas law is unconstitutional as applied to two features of two of the many platforms that it reaches—namely, Facebook’s News Feed and YouTube’s homepage—is unnecessary and unjustified.
In the end, these cases are not over. They’ll go back below, we’ll get more decisions, and there’s a decent chance the cases will end up back before the Supreme Court again. But there is a lot in the majority opinion that makes it clear the Fifth Circuit’s decision was absolutely as nutty and ridiculous as I described when it came out. And that part of the decision is supported by Kagan, Sotomayor, Roberts, Kavanaugh, and Barrett (in other words, five of the nine Justices). It’s also mostly supported by Jackson (she just didn’t sign on to the full analysis of the Texas law’s many constitutional problems, suggesting it was too early to do so).
This is a good sign for the overall internet and the First Amendment rights of websites to have editorial discretion in how they moderate.
Filed Under: 11th circuit, 1st amendment, 5th circuit, as applied, clarence thomas, content moderation, editorial discretion, elena kagan, facial challenge, florida, free speech, samuel alito, supreme court
Companies: ccia, netchoice
How Would The GOP Feel If Democrats In Congress Demanded Details Regarding How Fox News Or The NY Post Made Editorial Decisions?
from the fucking-hypocrites dept
We’ve already talked a bit about how Rep. Jim Jordan’s “Subcommittee on the Weaponization of the Government” is the exact thing it claims it seeks to stop: a part of the government that is being weaponized to attack free speech.
This week, Jordan sent a letter to Mark Zuckerberg, demanding he reveal a bunch of information regarding how Meta’s new Twitter-competitor is handling moderation:
The Committee on the Judiciary is conducting oversight of how and to what extent the Executive Branch has coerced and colluded with companies and other intermediaries to censor speech. In furtherance of this oversight, on February 15, 2023, the Committee issued a subpoena to you compelling the production of documents related to content moderation and Meta’s engagements with the Executive Branch. In light of Meta’s introduction of a new social media platform, “Threads,” we write to inform you that it is the Committee’s view that the subpoena of February 15 covers material to date relating to Threads.
Now, imagine if the Democrats were in control of the House, and they formed a committee that sent a similar subpoena to Fox News or the NY Post “compelling” either of those orgs to detail how it made editorial choices: what stories it would cover, what opinion writers it would publish, or what stories would go on the front page with what headlines.
People would (rightly!) be up in arms over it, calling out a gross violation of the 1st Amendment, in which the government was demanding to interfere in 1st Amendment protected editorial choices.
That’s exactly what’s happening here. Content moderation decisions by companies are editorial choices, protected by the 1st Amendment, and Congress (or any government officials) has no business getting involved.
Hilariously, the letter points to the ruling in Louisiana that argued that the Biden administration unfairly sought to influence moderation decisions as a reason why Meta must reveal its editorial policies to the government.
Given that Meta has censored First Amendment-protected speech as a result of government agencies’ requests and demands in the past, the Committee is concerned about potential First Amendment violations that have occurred or will occur on the Threads platform. Indeed, Threads raises serious, specific concerns because it has been marketed as rival of Elon Musk’s Twitter, which has faced political persecution from the Biden Administration following Musk’s commitment to free speech. In contrast, there are reports that Threads will enforce “Instagram’s community guidelines,” which resulted in lawful speech being moderated following pressure by the government. Despite launching only 12 days ago, there are reports that Threads is already engaging in censorship, including censoring users but offering no grounds for appeal.
Now, remember, in that ruling, Judge Terry Doughty explicitly called out as pernicious “the power of the government to pressure social-media platforms to change their policies and to suppress free speech.” Now tell me how this letter is not abusing the power of government to pressure Meta to change its policies and suppress free speech?
For what it’s worth, almost everything Jordan writes in the paragraph above is bullshit. Threads’ decisions on moderation are not a 1st Amendment violation, because Meta is a private company and can moderate how it sees fit. Not having an appeal option may be stupid, but it’s none of the government’s business.
Also, I legitimately laughed out loud reading the line about Elon Musk’s “commitment to free speech.” Remember, he’s been suspending journalist accounts when they say stuff he doesn’t like. Most recently he took down Aaron Greenspan’s accounts, after Greenspan had become a thorn in his side. What “commitment to free speech”?
Anyway, the whole thing is exactly what Jordan pretends he wants to stop. So, again, anyone defending this absolute bullshit needs to answer: if a subcommittee headed by, say, Rep. Adam Schiff were sending identical letters and subpoenas to Fox News, how would they react? It would be wrong for Schiff to do that, and it’s wrong now for Jordan to be doing this. Anyone who actually believes in the 1st Amendment should be calling out this kind of bullshit.
Filed Under: 1st amendment, content moderation, editorial discretion, free speech, intimidation, jim jordan, weaponization subcommittee
Companies: meta, threads
House Oversight Committee Wanted To Berate Twitter’s Old Management Over Hunter Biden’s Laptop; Instead, It Revealed Trump Censorship Attempts
from the about-that dept
I have a confession. While yesterday the House Oversight Committee took up six hours (sorta, as there was a big power outage in the middle) wasting everyone’s time with a hearing on “Twitter’s Role in Suppressing the Biden Laptop Story,” I chose not to watch it in real-time. Instead, afterwards I went back and watched the video at 3x speed (and skipped over the giant power outage part), meaning I was able to watch the whole thing in less than two hours. If you, too, wish to subject yourself to this abject nonsense, I highly recommend doing something similar. Though, a better option would be just not to waste your time.
Unfortunately, the panelists — four former Twitter employees — had neither option at hand and had to sit through all of the craziness. By this point, I’m kind of used to absolutely ridiculous hearings in Congress trying to “grill” tech execs over things. They have a familiar pattern. The elected officials engage in pure grandstanding, ironically deliberately designed to try to make clips of them go viral on the very social media they’re criticizing. But this one was even worse. Honestly, the four witnesses — former deputy general counsel James Baker, former legal chief Vijaya Gadde, former head of trust & safety Yoel Roth, and a former member of the safety policy team, Anika Collier Navaroli — barely had time to say anything. Almost all of the politicians used up most of their own 5 minutes on their own grandstanding.
To the extent that they asked any questions (and this was, tragically, mostly true on both sides of the aisle, with only a few limited exceptions), they asked misleading, confused questions, and when any of the witnesses tried to clarify, or to express anything even remotely approaching nuance, the elected officials would steamroll over them and move on.
Nothing in the hearing was about finding out anything.
Nothing in the hearing was about exploring the actual issues and tradeoffs around content moderation.
Many of the Republicans wanted to just complain that their own tweets weren’t given enough prominence on Twitter. It was embarrassing. On the Democratic side, many of the Representatives (rightly) called out that the whole hearing was stupid nonsense, but that didn’t stop a few of them from pushing their own questionable theories, including the suggestion from Rep. Raskin (whose comments were mostly good, including calling out how obviously ridiculous the same panel would be if they called Fox News to explain its editorial choices) that Twitter’s failure to stop January 6th from happening was illegal or Rep. Bush’s suggestion that social media should be nationalized. On the GOP side, you had Rep. Boebert suggest that the panelists had broken the law in exercising their 1st Amendment rights, and multiple other Reps. insist over and over again — even as the panelists highlighted the contention was blatantly false — that Twitter deliberately suppressed the Biden laptop story.
Of course, if you’ve read Techdirt, you already know what the Twitter files actually showed, which was that the decision to block the links to that one story for one day was a mistake, but had nothing to do with politics, or pressure from Joe Biden or the FBI. But the hearing was extremely short on facts from the Representatives, who just kept repeating false claim after false claim.
But… the biggest reveal was actually that the Donald Trump White House demanded that Twitter remove a tweet from Chrissy Teigen that Trump felt insulted by. Remember, in the original Twitter Files, Matt Taibbi had insisted that the Trump White House sent takedown demands to Twitter, but in all of the Twitter Files since then, no one (not Taibbi or any of the others who got access) has said anything about what Trump wanted taken down. Instead, it was Navaroli who talked about how the Trump White House had complained about this tweet and demanded Twitter take it down.
That tweet was in response to Trump whining that he didn’t get enough credit after he signed a Criminal Justice Reform bill. In the short four-tweet rant, Trump mentions “musician @johnlegend, and his filthy mouthed wife, are talking now about how great it is – but I didn’t see them around when we needed help getting it passed.” Teigen then responded with the tweet at issue.
And it actually sounds like Twitter did the same thing it does with every note from anyone — government official or otherwise — and reviewed the tweet against its policies. Apparently, there was some sort of policy that would take down tweets if there were three insults in a tweet, and so they had to analyze whether “pussy ass bitch” was three insults or one giant insult (or two? I dunno). Either way, it was determined that the tweet didn’t meet the three-insult threshold, and it remained on the site.
Still, this certainly raises the question: in all of the “Twitter Files,” where is the release of the details about Trump getting his panties in a bunch and demanding that Teigen’s tweet get taken down?
Now, I’m expecting that all the people in our comments who have insisted that the FBI highlighting tweets that might violate actual policies is a Constitutional violation will now admit that the former President they worship also violated the Constitution under their understanding of it… or, nah?
Speaking of the former President, Navaroli also revealed yet another way in which Twitter bent over backwards to protect Trump and other Republicans. She relayed the discussion over a tweet by Trump, in which he suggested that Congressional Representatives of color, with whom he had policy disagreements, should “go back and help fix the totally broken and crime infested places from which they came.”
At the time, Twitter’s policies had a rule against attacking immigrants, and even called out the specific phrase “go back to where you came from,” as violating that policy. Navaroli discussed how she flagged that tweet as violating the policy, but was overruled by people higher up on the team. And, soon after that, the policy was changed to remove that phrase as an example of a violation.
Now, there are arguments that could be made for why that particular tweet, in context, might not have truly violated the policy. There are also pretty strong arguments for why it did. Reasonable people can disagree, and I would imagine that there was some level of debate within Twitter. But to make that call and then soon after delete the phrase from the policy certainly suggests going the extra step not to “censor conservatives” but to give them extra leeway even as they violated the site’s policies repeatedly.
The whole thing was a parade of nonsense, and I even heard from a Republican Congressional staffer afterwards complaining about how it completely backfired on Republicans. They set out to “prove” that Twitter conspired with the US deep state to censor the Hunter Biden laptop story. And, in the end, the witnesses quite effectively debunked each point of that, while instead the key takeaway was that Trump demanded a tweet insulting him be taken down, and Twitter explicitly changed its rules to protect Trump after he violated them.
Just a total shitshow all around.
But, at least I got to watch it at 3x speed.
Filed Under: 1st amendment, chrissy teigen, content moderation, donald trump, editorial discretion, fbi, grandstanding, house oversight committee, hunter biden laptop, james baker, james comer, lauren boebert, vijaya gadde, yoel roth
Companies: twitter
Supreme Court Punts On Florida And Texas Social Media Moderation Laws, Asks US Government To Weigh In
from the kick-the-can dept
Lots of people were expecting the Supreme Court to obviously agree to take the appeals of Florida’s and Texas’s social media content moderation laws. As you’ll probably recall, Texas and Florida passed slightly different laws that effectively barred social media platforms from moderating certain types of content. Both laws were tossed out as easily and obviously unconstitutional limitations of social media companies’ 1st Amendment editorial and association rights.
Both states appealed to their local appeals courts. The 11th Circuit (in a decision written by a Trump-appointed judge) upheld the lower court ruling (mostly) and again highlighted how obviously unconstitutional Florida’s law was. The 5th Circuit, on the other hand, first reinstated Texas’s law with no explanation whatsoever (literally, there was no ruling, beyond saying that the law should be in effect immediately), leading to a rush to the Supreme Court which put the law back on hold. Months later, the 5th Circuit released an absolutely batshit crazy ruling that required effectively rewriting a century’s worth of 1st Amendment jurisprudence.
Both states appealed to the Supreme Court, and basically everyone expected the Court to take the cases (and combine them). After all, it was an issue that multiple Justices had been asking for cases about, in a situation where you had a very clear circuit split between the appeals courts, on a hot and meaningful issue regarding social media content moderation.
But, on Monday morning, something slightly odd happened. The Supreme Court punted, asking the US Solicitor General to weigh in on the issue.
Why would it do that? It seems like there’s nothing that the US government could say that should or would impact the Supreme Court’s reasoning in taking (or, I guess, not taking?) these cases.
Constitutional scholar Steve Vladeck notes that this likely is just a stalling tactic by the Supreme Court.
This almost certainly means that the cases about the laws won’t be heard this term but will, instead, wait until next term — meaning that we might not get a ruling on them until 2024.
Of course, it’s not clear why they’re stalling. My only guess is that the Justices know that they’re already handling the Gonzalez/Taamneh cases this term, which are tangentially related. And while both sets of cases involve very different issues and could be decided independently of each other, perhaps the Justices worry that the ruling they come to in Gonzalez/Taamneh will somehow impact the NetChoice/CCIA line of cases against state laws? That’s just idle speculation, but it’s the only thing that makes any sense to me. I mean, I guess they could think that if they’re going to burn down the open internet, they can do it across two separate years?
As for the US Solicitor General, it’s unclear what they’re going to say, but I’m a bit nervous about it. I have a half-written post that may never be finished about the SG’s amicus briefs in both Taamneh and Gonzalez, and they’re… not great. The one in Taamneh is fine, I guess, and makes the obvious argument that the case is dumb and easily dismissible for reasons unrelated to Section 230. The Gonzalez brief, however, is completely disconnected from reality, and raises questions about how much the Solicitor General’s office actually understands about issues related to content moderation. And, because of that, it’s a little scary whenever they’re asked to weigh in on something related to the internet.
I guess we’ll find out…
Filed Under: 1st amendment, content moderation, editorial discretion, florida, laws, solicitor general, supreme court, texas
Companies: ccia, netchoice
Yes, Elon Musk Is Fucking Up Twitter; But No, The Government Has No Business Getting Involved
from the not-how-any-of-this-works dept
So, yes, I’ve written a few things now on Elon’s silly excuses for his frantic speedrun through the content moderation learning curve. It’s getting more mainstream press because of journalist accounts getting banned (including, this morning, Insider’s Linette Lopez, who did not post any “doxing” info but has reported critically on Musk for years, which led to him harassing her).
Musk’s fans have been (hilariously, frankly) trying to defend these decisions by (1) claiming this is somehow “different” because it’s about “safety” — an argument we cleanly debunked this morning — and (2) saying it’s okay because the “liberal” media are now screaming about censorship and free speech, so it’s all hilarious since everyone is switching positions. Except, I haven’t seen much of that supposed “switch.” Lots of people are pointing out that the reasons stated for these suspensions have been silly. And many more people are highlighting how hypocritical Musk’s statements and decisions are. But most people readily recognize that he has every right to make dumb and hypocritical decisions.
There are a few, however, who do seem to be taking it further. And they should stop, because it’s nonsense. First up we have the EU, where the VP of the European Commission, Vera Jourova, is warning Musk that there will be consequences.
That’s her saying:
News about arbitrary suspension of journalists on Twitter is worrying. EU’s Digital Services Act requires respect of media freedom and fundamental rights. This is reinforced under our #MediaFreedomAct. @elonmusk should be aware of that. There are red lines. And sanctions, soon.
But being banned from private property doesn’t impact “media freedom or fundamental rights.” And it’s silly for Jourova to claim otherwise. No one has a “right” to be on Twitter. And even if the journalism bans are pathetic and silly (and transparently vindictive and petty) that doesn’t mean he’s violated anyone’s rights.
Some in the US are making similar claims, even though the 1st Amendment (backed up by Section 230) clearly protects Musk’s ability to ban whoever he wants for any reason whatsoever. Yet Jason Kint, the CEO of Digital Content Next, a trade organization of “digital media companies” — but which, in practice, often seems notably aligned with the desires of Rupert Murdoch’s news organizations — demanded Congressional hearings if Musk did not “fix this within an hour” (referencing the journalist suspensions).
But that’s silly. Again, his decisions are protected by the 1st Amendment. It’s his property. He can kick anyone out. Just like Fox News can choose not to put anyone on air who would call bullshit on “the big lie” or Rupert Murdoch. That’s their editorial freedom.
And I’d bet that if Congress hauled Lachlan Murdoch in for a hearing to demand he explain to them his editorial decision making practices for Fox News, Kint would be highlighting the massive 1st Amendment-connected chilling effects this would have on any of his member news organizations.
We can mock Musk’s decisions. We can highlight how nonsensical they are. We can pick apart his excuses and the ramblings of his fans and point out how inconsistent they are. But Musk has every right to do this, and that’s exactly how it should be. Getting government involved with editorial decisions leads down a dangerous road.
Filed Under: 1st amendment, congress, editorial discretion, elon musk, eu, free speech, jason kint, journalists, section 230, social media, vera jourova
Companies: twitter
Fourth Circuit Goes On The Attack Against Section 230 In A Lawsuit Over Publication Of Third Party Data
from the immunity-applies,-unless-the-judge-doesn't-like-Section-230 dept
Prior to the turn of the century, the Fourth Circuit Court of Appeals handed down a formative decision that helped craft the contours of Section 230 immunity. The case — Zeran v. America Online — dealt with a tricky question: whether or not a platform’s failure to moderate content (in this case, posts that contained Zeran’s phone number and oblique accusations he approved of the Oklahoma City federal building bombing) made the platform liable for the user-generated content.
The Appeals Court didn’t have much to work with at that point. Section 230 of the Communications Decency Act was less than a year old when Kenneth Zeran filed his lawsuit against AOL. Nevertheless, the court recognized AOL’s immunity from the suit. And it did this by applying the new CDA clause retroactively to alleged wrongs against Zeran committed nearly a year before the CDA went into effect.
Flash-forward more than a quarter century, and the Fourth Circuit Court of Appeals has delivered another potentially groundbreaking decision [PDF], albeit one that goes in the other direction, holding a website directly responsible for content posted by others.
In this case, the plaintiffs and their class action lawsuit sought to hold Public Data, LLC directly responsible for content it gathers from other sources and republishes on its own platforms. The data collected by Public Data (the name the court uses to collectively refer to the multiple parties collecting and disseminating this data) comes from public records. This includes civil and criminal court proceedings, voting records, driver data, professional licensing information, and anything else generated by government agencies Public Data can hoover up.
It then consolidates the data to make it more user-friendly, reformatting raw documents to deliver snapshots of people Public Data’s customers wish to obtain info about. The plaintiffs allege this process often removes exonerative data about criminal charges and reduces criminal background info to little more than a list of charges (with no data on whether these charges resulted in a conviction). Making things worse, Public Data’s summaries apparently include “glib statements” that misrepresent the entirety of the data collected from public sources. Public Data makes it clear to users that it is not responsible for any inaccuracies in the raw data it collects and collates. It also refuses to correct incorrect data or remove any inaccurate information it has scraped from public records databases.
The value of this information — no matter how inaccurate or incomplete — is undeniable. The plaintiffs note that Public Data has nearly 50 million customers. And presumably few of those customers take the repackaging of scraped data — with or without Public Data’s commentary — with a grain of salt, despite the cautionary statements issued by Public Data.
This all sounds like the collection and republication of data generated by third parties. What is or isn’t passed on to users sounds pretty much like content moderation, something that’s protected not only by Section 230 immunity but by the First Amendment as well. Since those are the most obvious impediments to the lawsuit, the plaintiffs have chosen to frame this repackaging of data (and Public Data’s editorial decisions) as violations of the Fair Credit Reporting Act (FCRA).
The Appeals Court should have recognized this tactic for what it was. Instead, it decides the acts the lawsuit is predicated on are somehow exempt from Section 230 immunity. Eric Goldman, who has a long history of covering Section 230 cases and advocating for its continued existence, saw this disturbing decision coming a long time ago, back when the lower court decided Section 230 immunity applied but did so in a way that invited novel interpretations of this protection.
The plaintiff sued the defendant for violating four provisions of the Fair Credit Reporting Act (FCRA). At its core, the FCRA is in tension with Section 230 because it seemingly regulates the dissemination of third-party content (i.e., the credit data provided by vendors). However, many FCRA provisions are ministerial in nature about how to operate a credit reporting agency, and those provisions may not specifically base liability for credit report dissemination even if the overall statutory objective relates to that output. This makes the FCRA/230 interface ambiguous and confusing.
The district court dismissed the lawsuit on Section 230 grounds in a garbled and controversial opinion. As I predicted then, the district court’s “distorted Section 230 test makes this ruling vulnerable on appeal or in further proceedings.” And here we are.
And that’s exactly what happened. Ignoring its own Zeran precedent, the court starts imposing new rules on Section 230 immunity, using the plaintiffs’ Fair Credit Reporting Act allegations as the baseline. The court says there’s a possible defamation claim here because it apparently believes liability attaches to publishers of third-party content when the republished content is “improper.”
That’s definitely wrong. And it sounds like the court wants to believe there’s a publisher/platform dichotomy that makes Section 230 irrelevant, even though the law itself makes no such distinction when it comes to content created by third parties.
To arrive at this tortured conclusion, the court says a whole lot about the Fair Credit Reporting Act, which does indeed impose penalties on reporting agencies that aren’t careful about ensuring the accuracy of the information they collect or disseminate. But that has little to do with Public Data’s business model, which simply collects public records from public sites to find relevant information about the people its customers wish to know more about.
That isn’t the same thing, and the court should know it. Instead, it seemingly decides that FCRA reporting requirements apply and Section 230 doesn’t. Then it turns to the commentary Public Data attaches to public records data, claiming this goes beyond protected editorial functions and turns Public Data into a culpable partner in the dissemination of false information. Here’s Goldman, summarizing this particularly dangerous conclusion:
The court summarizes its legal standard as:
an interactive computer service is not responsible for developing the unlawful information unless they have gone beyond the exercise of traditional editorial functions and materially contributed to what made the content unlawful.
This adopts an obvious false dichotomy. Materially contributing to third-party content is a “traditional editorial function,” so this distinction is incoherent. This legal standard invites plaintiffs to define “traditional editorial functions” to exclude whatever defense behavior they are targeting. The chaos is palpable. (The scope of “traditional editorial functions” is a question presented in Gonzalez, so the Supreme Court will almost certainly make a worse hash of this term by June).
If you don’t see the problem here, Goldman explains further:
When a 230 defendant republishes third-party content verbatim and without redaction, this standard is fine. But 230 defendants routinely extract pieces of a third-party submission–sometimes as promotional previews, sometimes to fit publication constraints. 230(c)(1) has applied in so many cases fitting that paradigm (People v. Ferrer represents an outer extreme), yet future plaintiffs can argue that any piece excluded from the extract creates a deceptive omission and VOILA! Bye bye 230.
So, for example, Google’s search results descriptions republish extracts from the source website. This has qualified for Section 230 (e.g., O’Kroley v. Fastcase). This court is saying that 230 would not apply if the plaintiff claims the search results description left out contextualizing information, which will happen ALL THE TIME. Boom–all of Google search descriptions are now potentially outside Section 230.
That’s where the Fourth Circuit Appeals Court leaves things, practically daring Public Data to appeal the decision and place it in the hands of a Supreme Court that is far too willing these days to upend rights and protections its recently appointed justices just don’t care for. Section 230 immunity is on the Supreme Court’s shit list at the moment, and this decision feeds into its desire to cherry-pick cases it can use to overturn decades of jurisprudence just because it no longer cares for its own precedent.
Filed Under: 4th circuit, editorial discretion, fcra, intermediary liability, public data, section 230
Companies: public data
Rep. Cathy McMorris Rodgers And Deeply Unfunny ‘Satirist’ Seek To Remove Website 1st Amendment Rights To ‘Protect Free Speech’
from the that-doesn't-seem-right dept
Rep. Cathy McMorris Rodgers, who heads something called the “House Republican Big Tech Task Force,” has teamed up with Seth Dillon, the CEO of the deeply unfunny “conservative” Onion wannabe, The Babylon Bee, to whine in the NY Post about “how to end big tech censorship of free speech.” The answer, apparently, is to remove the 1st Amendment. I only wish I were joking, but that’s the crux of their very, very confused suggestion.
Let’s start with the basics: Dillon’s site regularly posts culture-war promoting satire. Because Republican culture wars these days are about shitting on anyone they dislike, or who dares to suggest that merely respecting others is a virtue, many of those stories are not just deeply unfunny, but often pretty fucked up. None of this is surprising, of course. But, the thing about the modern GOP and its culture wars is that it’s entirely based around pretending to be the victim. It’s about never, not once, being willing to take responsibility for your own actions.
So, when the Babylon Bee publishes something dumb that breaks a rule, and it gets a minor slap on the wrist for it, they immediately flop down on the ground like a terrible soccer player, rolling around and wailing about how their free speech has been censored. It hasn’t. You’re relying on someone else’s private property. They get to make the rules. And if they decide that you broke their rules, they get to show you the door (or impose whatever other on-site punishment they feel is appropriate). This is pretty basic stuff, and it actually used to be conservative dogma: private property rights, the right to freely associate — or not — with whoever you want under the 1st Amendment, and accepting personal responsibility when you fuck around were all things we were told were core to being a conservative.
No longer (it’s arguable, of course, if they were ever actually serious about any of that).
There is no free speech issue here. The Babylon Bee has 1st Amendment rights to publish whatever silly nonsense it wants on its own site. It has no right to demand that others host its speech for it. Just as the Babylon Bee does not need to post my hysterically funny satire about Seth Dillon plagiarizing his “best” jokes by running Onion articles three times through GPT-3 with the phrase “this, but for dumb rubes.” That’s freedom of association, Seth. That’s how it works.
Perhaps it’s no surprise that the CEO of a “what if satire were shitty” site doesn’t understand the 1st Amendment, but you’d think that a sitting member of Congress, who actually swore to protect and uphold the Constitution, might have a better idea. Not so for Rep. McMorris Rodgers, who once was actually decent on tech, before apparently realizing that her constituents don’t want reality-based elected officials, and prefer culture warriors instead.
Anyway, after whining about facing a tiny bit of personal responsibility — including, I shit you not, having to be fact checked by Facebook (note to the two of you: fact checking is more speech, it’s not censorship, you hypocritical oafs) — they trot out their “solutions.”
Big Tech must be held accountable. First, we propose narrowing Section 230 liability protections for Big Tech companies by removing ambiguity in the law — which they exploit to suppress and penalize constitutionally protected speech. Our proposal ensures Big Tech is no longer protected if it censors individuals or media outlets or removes factually correct content simply because it doesn’t fit its woke narrative.
I mean, holy fuck. There is no excuse in the year 2022 to still be so fucking ignorant of how Section 230 works. Especially if you’re in Congress. Narrowing Section 230’s liability protections won’t lead to less moderation. It will lead to more. The liability protections are what allow websites to feel comfortable hosting 3rd party content. The case that led to Section 230 in the first place involved Prodigy being held liable for comments in a forum. If you make sites more liable, they are less likely to host whatever nonsense content you want to share on their website.
Second, removing “factually correct content” whether or not it “fits its woke narrative” (and, um, no big tech company has a “woke narrative”) is… protected by the 1st Amendment. Content moderation is protected by the 1st Amendment. Dillon doesn’t have to publish my unfunny piece. Twitter doesn’t need to publish his unfunny piece. Facebook can fact check all it wants — even if it gets the facts wrong. It’s all thanks to the 1st Amendment.
Taking away 230 protections doesn’t change that — it just makes websites even LESS likely to host this culture war nonsense.
But McMorris Rodgers and Dillon aren’t done yet.
Second, we propose requiring quarterly filings to the Federal Trade Commission to keep Big Tech transparent about content moderation. This will allow Congress, the FTC and Americans to know when and why these companies censor content to determine whether it’s justified. We’d also sunset Section 230 protections after five years, so Congress can reevaluate them if necessary and incentivize Big Tech to treat all content fairly or have their protections revoked.
Again, this is almost certainly unconstitutional. I know some people struggle with the idea of why transparency requirements are an affront to the 1st Amendment, but it’s pretty straightforward. If Congress ordered Seth Dillon to file reports on his site’s editorial policies, including details about which stories they reject and which they promote, “to determine whether it’s justified” for the site to make those editorial decisions, pretty much everyone would recognize the 1st Amendment concerns.
Demanding anyone justify editorial decisions by filing reports with the government to “determine whether [those editorial decisions are] justified” is just a blatant attack on free speech and the 1st Amendment.
Sunsetting Section 230 just takes us back to the issue we noted above. Without liability protections, websites are MORE likely to remove content to avoid liability, not less.
This isn’t like some big secret. Perhaps Dillon and McMorris Rodgers only get their news from sites like the Babylon Bee, and that helps them not understand how anything works. But, really, that’s no excuse.
Third, our proposal requires Big Tech to improve appeals processes for users to challenge moderation decisions and enables people to petition their state’s attorney general to bring legal action against Big Tech, enhancing users’ power to challenge censorship. Twitter would be required to notify a user, like the Babylon Bee, through direct communication before taking any censorship action. Big Tech would also be required to give users the option to challenge any censorship decisions with a real person — not a bot — to disincentivize Big Tech from completely automating its censorship process.
Right, so again, all of that is an affront to the 1st Amendment. Should I be able to petition my state’s attorney general to bring legal action against the Babylon Bee for failing to publish my truly hilarious article about how Cathy McMorris Rodgers hates the internet so much that she pushed legislation banning communities from building their own broadband networks (really funny stuff, because it’s true)?
Of course not. The 1st Amendment protects websites and their editorial decisions. There is no constitutional cause of action any attorney general could bring against a website for its moderation decisions.
As for the appeals process — most websites have one. But mandating one would, again, raise serious constitutional issues, as it’s the government interfering with the editorial process.
And, note, of course, that none of these complaints addresses the fact that the social media sites people like Dillon prefer, including Parler, Gettr, and Truth Social, have far more arbitrary and aggressive content moderation policies (even as they pretend otherwise).
It’ll be hilarious — even Babylon Bee worthy, if I do say so myself — if this bill passes and woke liberals use it to sue Truth Social for taking down truthful content about the January 6th hearings. C’mon, Seth, let me publish that as an article on your site! Or you hate freedom of speech!
Free speech must be cherished and preserved. It’s time Big Tech companies uphold American values and become fair stewards of the speech they host.
But the Babylon Bee remains free to be as shitty as before? How is that fair?
Filed Under: 1st amendment, cathy mcmorris rodgers, content moderation, editorial discretion, free speech, section 230, seth dillon
Companies: babylon bee
New York Passes Ridiculous, Likely Unconstitutional, Bill Requiring Websites To Have ‘Hateful Conduct’ Policies
from the i-hate-this dept
Okay, so this bill is nowhere near as bad as the Texas and Florida bills, or a number of other content moderation bills out there. But that doesn’t mean it isn’t still pretty damn bad. New York has passed its own variation on a content moderation bill, one that requires websites to have a “hateful conduct policy.”
The entire bill is pretty short and sweet, but the basics are what I said above. It starts with a very broad definition of “hateful conduct”:
“Hateful conduct” means the use of a social media network to vilify, humiliate, or incite violence against a group or a class of persons on the basis of race, color, religion, ethnicity, national origin, disability, sex, sexual orientation, gender identity or gender expression.
Okay, so first off, that’s pretty broad, but also most of that speech is (whether you like it or not) protected under the 1st Amendment. Requiring websites to put in place editorial policies regarding 1st Amendment protected speech raises… 1st Amendment concerns, even if it is left up to the websites what those policies are.
Also, the drafters of this law are trying to pull a fast one on people. By calling it “hateful conduct” rather than “hateful speech,” they’re trying to avoid the 1st Amendment issue that is obviously a problem with this bill. You can regulate conduct but you can’t regulate speech. Here, the bill tries to pretend it’s regulating conduct, but when you read the definition, you realize it’s only talking about speech.
So, yes, in theory you can abide by this bill by putting in place a “hateful conduct” policy that says “we love hateful conduct, we allow it.” But, obviously, the intent of this bill is to use the requirements here to pressure companies into removing speech that is likely protected under the 1st Amendment. That’s… an issue.
Also, given that the definition is somewhat arbitrary, what’s to stop future legislators from expanding it? We’ve already seen efforts in many places to turn speaking negatively about the cops into “hate speech.”
Next, the law applies to “social media networks” but here, again, the definition is incredibly broad:
“Social media network” means service providers, which, for profit-making purposes, operate internet platforms that are designed to enable users to share any content with other users or to make such content available to the public.
There appear to be no size qualifications whatsoever. So, one could certainly read this law to mean that Techdirt is a “social media network,” and we may be required to create a “hateful conduct” policy for the site or face a fine. But the moderation that takes place in the comments is not policy driven. It’s community driven. So, requiring a policy makes no sense at all.
And that’s also a big issue. Because if we’re required to create a policy, and we do so, but it’s our community that decides what’s appropriate, the community might not agree with the policy and might not follow it. What happens then? Are we subject to consumer protection fines for having a “misleading” policy?
At the very least, New York State pretty much just guaranteed that small sites like ours need to find and pay a lawyer in New York to tell us what we can do to avoid liability.
Do I want hateful conduct on the site? No. But we’ve created ways of dealing with it that don’t require a legally binding “hateful conduct” policy. And it’s absolutely ridiculous (and just totally disconnected from how the world works) to think that forcing websites to have a “hateful conduct” policy will suddenly make sites more aware of hateful conduct.
The whole thing is political theater, disconnected from the actual realities of running a website.
And that’s made especially clear by the next section:
A social media network that conducts business in the state, shall provide and maintain a clear and easily accessible mechanism for individual users to report incidents of hateful conduct. Such mechanism shall be clearly accessible to users of such network and easily accessed from both a social media networks’ application and website, and shall allow the social media network to provide a direct response to any individual reporting hateful conduct informing them of how the matter is being handled.
So, now every website has to build in special reporting mechanisms that might not match how their site actually works? We have a form people can fill out to alert us to things, but we also allow people to submit those reports anonymously. As far as I can tell, we might not be able to do that under this law, because we have to be able to “provide a direct response” to anyone who reports information to us. But how do we do that if they don’t give us their contact info? Do we need to build in a whole separate messaging tool?
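To make the bind concrete, here’s a minimal sketch in Python of what a report intake might look like (every name here is hypothetical; this is not Techdirt’s actual code, just an illustration). Contact info has to be optional for reports to be anonymous, which means the legally required “direct response” has no delivery channel for exactly those reports:

from dataclasses import dataclass
from typing import Optional

@dataclass
class Report:
    # A user-submitted "hateful conduct" report. The contact field is
    # optional, which is precisely what makes anonymous reporting possible.
    content_url: str
    description: str
    contact_email: Optional[str] = None

def queue_for_review(report: Report) -> None:
    # Placeholder: hand the report off to whatever process the site
    # actually uses (staff review, community flagging, etc.).
    print(f"Queued report about {report.content_url}")

def send_direct_response(email: str, message: str) -> None:
    # Placeholder for an email or on-site notification.
    print(f"To {email}: {message}")

def handle_report(report: Report) -> None:
    queue_for_review(report)
    if report.contact_email:
        # A "direct response" is only possible when the reporter chooses
        # to identify themselves.
        send_direct_response(report.contact_email, "We received your report and are reviewing it.")
    # Anonymous reports fall through here: there is simply no channel for
    # the required "direct response" short of building a separate,
    # logged-in messaging system.

handle_report(Report("https://example.com/comment/123", "flagged as hateful"))
handle_report(Report("https://example.com/comment/456", "flagged as hateful", "reporter@example.com"))

That’s the whole problem in about thirty lines: either you drop anonymous reporting, or you build the “whole separate messaging tool” the law never admits it’s demanding.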
Each social media network shall have a clear and concise policy readily available and accessible on their website and application which includes how such social media network will respond and address the reports of incidents of hateful conduct on their platform.
Again, this makes an implicit, and false, assumption that every website that hosts user content works off of policies. That’s not how it always works.
The drafters of this bill then try to save it from constitutional problems by pinky swearing that nothing in it limits rights.
Nothing in this section shall be construed (a) as an obligation imposed on a social media network that adversely affects the rights or freedoms of any persons, such as exercising the right of free speech pursuant to the first amendment to the United States Constitution, or (b) to add to or increase liability of a social media network for anything other than the failure to provide a mechanism for a user to report to the social media network any incidents of hateful conduct on their platform and to receive a response on such report.
I mean, sure, great, but the only reason to have a law like this is as a weak attempt to force companies to take down 1st Amendment protected speech. But then you add in something like this to pretend that’s not what you’re really doing. Yeah, yeah, sure.
The enforcement of the law is at least somewhat limited. Only the Attorney General can enforce it… but remember, this is in a state where we already have an Attorney General conducting unconstitutional investigations into social media companies as a blatant deflection from anyone looking too closely at the state’s own failings in stopping a mass shooting. The fines for violating the law are capped at $1,000 per day, which would be nothing for larger companies, but could really hurt smaller ones.
Even if you agree with the general sentiment that websites should do more to remove hateful speech, this bill should still concern you. Because if states like NY can require this of websites, other states can require other kinds of policies and other content moderation mandates to be put in place.
Filed Under: content moderation, editorial discretion, free speech, hate speech, hate speech policy, hateful conduct, hateful conduct policy, new york
Twitter Fights Back Against January 6th Committee’s Dangerously Intrusive Subpoena Demands
from the good-for-them dept
With the whole Congressional January 6th Committee effort moving into prime time this week, this is probably pretty far down on the list of issues around it, but apparently Twitter is quietly fighting demands from the Committee to reveal internal communications.
The social media giant is asserting a First Amendment privilege to push back on the panel’s demand for communications about moderating tweets related to the Capitol insurrection.
Twitter’s pushback, the sources say, has caused consternation among the committee, whose members believe the internal communications would help them paint a fuller, more accurate portrait of how online MAGA extremism contributed to the day’s violence and mayhem. But the fact that the company is being asked for its internal communications at all raises tricky issues about the balance between free expression and the government’s authority to investigate an attempt to subvert democracy. And it shows just how wide Congressional investigators have been willing to cast their net in the run-up to their primetime hearings, which begin this week.
As you may recall, we talked about this issue last summer, when the Committee first sent information demands to a long list of social media companies, trying to understand the policies those companies had for dealing with extremism on their platforms. At the time, we said that the companies should resist those demands, in part because some of the information being requested seemed very likely to lead to Congress threatening and intimidating companies over their 1st Amendment protected editorial decision making.
And that definitely appears to be the case. From the article, it appears that at least some of the companies did resist, leading Congress to step things up from a mere request for information to an actual subpoena.
The panel escalated the issue and sent formal subpoenas to Twitter, Meta, Google, and Reddit in January, citing “inadequate responses to prior requests for information” by the companies.
Chairman Rep. Bennie Thompson lamented that the committee “still do[es] not have the documents and information necessary to answer” questions about how the tech companies handled January 6th-related communications, more than four months after its initial request.
With Twitter, at least, the demands seem deeply intrusive into basic editorial decision making, including some fairly routine internal communications.
In a letter accompanying the subpoena sent to Twitter CEO Parag Agrawal in January, Thompson demanded three specific categories of documents that he said Twitter had failed to turn over to the committee. They included “documents relating to warnings [Twitter] received regarding the use of the platform to plan or incite violence on January 6th” and an alleged failure to “even commit to a timeline” to send over “internal company analyses of misinformation, disinformation, and malinformation relating to the 2020 Election.”
Thompson also faulted Twitter for failing to turn over material concerning Twitter’s decision to suspend Trump from the platform two days after the insurrection—a decision set for reversal should billionaire Elon Musk complete his planned acquisition of Twitter.
The demands are asking for some internal communications, including internal Slack messages about moderating tweets regarding the invasion of the Capitol on January 6th. Even if you agree with the work of the Committee, this should greatly concern you for multiple reasons. First, opening up internal communications regarding editorial decisions raises all sorts of 1st Amendment issues. Imagine Congress similarly demanding internal editorial discussions for Fox News or the NY Times or CNN. People would be rightly concerned about that.
Second, Slack messages, in particular, are designed to be more informal. It’s quite easy to see how Slack messages, taken out of context, could be completely misrepresented to suggest something inaccurate.
Third, just the idea that such internal messages may go before a Congressional Committee (and potentially be released to the public) creates massive chilling effects on the kinds of internal discussions that trust and safety teams need to have all the time as they figure out how to deal with dynamic, rapidly changing scenarios. It’s important that teams be able to communicate freely and discuss different ideas and perspectives about how to handle different challenges, and if they’re constantly worried about how those messages will look when someone in Congress reads them from the floor, that’s going to create huge chilling effects and make it much more difficult for people at these companies to do their jobs.
Finally, just think about how this power will be abused. If you support the Jan. 6 Committee, consider that the Republicans will quite likely hold the majority in Congress next year. Do you think they should then be able to demand these same internal communications from Twitter, Facebook, and others about their moderation choices? Do you honestly think that they will do so in good faith, and not to try to pressure and intimidate these companies into leaving up partisan propaganda and nonsense?
And, if you’re on the other side, and believe that the Jan. 6th Committee is a fraud on the American public, then you should already be concerned about these demands, but think about how those same powers might be used against companies you like. Should Congress be able to investigate Fox News or OAN’s internal editorial meetings and decisions about who they’ll put on air? Should Congress be able to subpoena Truth Social to find out why it’s blocking critics of President Trump?
There are some things Congress should not be able to do, and that includes interfering with the editorial choices of companies. It’s offensive to the 1st Amendment.
Filed Under: bennie thompson, committee, content moderation, editorial discretion, fishing expedition, insurrection, january 6th
Companies: twitter