Mike Masnick
Posted on Techdirt - 18 November 2024 @ 01:33pm
NetChoice Sues California Once Again To Block Its Misguided ‘Social Media Addiction’ Bill
Earlier this year, California passed SB 976, yet another terrible and obviously unconstitutional bill with the moral panicky title “Protecting Our Kids from Social Media Addiction Act.” The law restricts minors’ access to social media and imposes burdensome requirements on platforms. It is the latest in a string of misguided attempts by California lawmakers to regulate online speech “for the children.” And like its predecessors, it is destined to fail a court challenge on First Amendment grounds.
The bill’s sponsor, Senator Nancy Skinner, has a history of relying on junk science and misrepresenting research to justify her moral panic over social media. Last year, in pushing for a similar bill, Skinner made blatantly false claims based on her misreading of already misleading studies. It seems facts take a backseat when there’s a “think of the children!” narrative to push.
The law builds on the Age Appropriate Design Code, without acknowledging that much of that law was deemed unconstitutional by an appeals court earlier this year (after being found similarly unconstitutional by the district court last year). This bill, like a similar one in New York, assumes (falsely and without any evidence) that “algorithms” are addictive.
As we just recently explained, if you understand the history of the internet, algorithms have long played an important role in making the internet usable. The idea that they’re “addictive” has no basis in reality. But the law insists otherwise. It would then ban these “addictive algorithms” if a website knows a user is a minor. It also has restrictions on when notifications can be sent to a “known” minor (basically no notifications during school hours or late at night).
There’s more, but those are the basics.
NetChoice stepped up and sued to block this law from going into effect.
California is again attempting to unconstitutionally regulate minors’ access to protected online speech—impairing adults’ access along the way. The restrictions imposed by California Senate Bill 976 (“Act” or “SB976”) violate bedrock principles of constitutional law and precedent from across the nation. As the United States Supreme Court has repeatedly held, “minors are entitled to a significant measure of First Amendment protection.” Brown v. Ent. Merchs. Ass’n, 564 U.S. 786, 794 (2011) (cleaned up) (quoting Erznoznik v. Jacksonville, 422 U.S. 205, 212-13 (1975)). And the government may not impede adults’ access to speech in its efforts to regulate what it deems acceptable for minors. Ashcroft v. ACLU, 542 U.S. 656, 667 (2004); Reno v. ACLU, 521 U.S. 844, 882 (1997). These principles apply with equal force online: Governments cannot “regulate [‘social media’] free of the First Amendment’s restraints.” Moody v. NetChoice, LLC, 144 S. Ct. 2383, 2399 (2024).
That is why courts across the country have enjoined similar state laws restricting minors’ access to online speech. NetChoice, LLC v. Reyes, 2024 WL 4135626 (D. Utah Sept. 10, 2024) (enjoining age-assurance, parental-consent, and notifications-limiting law); Comput. & Commc’n Indus. Ass’n v. Paxton, 2024 WL 4051786 (W.D. Tex. Aug. 30, 2024) (“CCIA”) (enjoining law requiring filtering and monitoring of certain content-based categories of speech on minors’ accounts); NetChoice, LLC v. Fitch, 2024 WL 3276409 (S.D. Miss. July 1, 2024) (enjoining age-verification and parental-consent law); NetChoice, LLC v. Yost, 716 F. Supp. 3d 539 (S.D. Ohio 2024) (enjoining parental-consent law); NetChoice, LLC v. Griffin, 2023 WL 5660155 (W.D. Ark. Aug. 31, 2023) (enjoining age-verification and parental-consent law).
This Court should similarly enjoin Defendant’s enforcement of SB976 against NetChoice members.
As we’ve discussed, the politics behind challenging these laws makes it a complex and somewhat fraught process. So I’m glad that NetChoice continues to step up and challenge many of these laws.
The complaint lays out that the parental consent requirements in the bill violate the First Amendment:
The Act’s parental-consent provisions violate the First Amendment. The Act requires that covered websites secure parental consent before allowing minor users to (1) access “feed[s]” of content personalized to individual users, § 27001(a); (2) access personalized feeds for more than one hour per day, § 27002(b)(2); and (3) receive notifications during certain times of day, § 27002(a). Each of these provisions restricts minors’ ability to access protected speech and websites’ ability to engage in protected speech. Accordingly, each violates the First Amendment. The Supreme Court has held that a website’s display of curated, personalized feeds is protected by the First Amendment. Moody, 144 S. Ct. at 2393. And it has also held that governments may not require minors to secure parental consent before accessing or engaging in protected speech. Brown, 564 U.S. at 799;
So too do the age assurance requirements:
The Act’s requirements that websites conduct age assurance to “reasonably determine” whether a user is a minor, §§ 27001(a)(1)(B), 27002(a)(2), 27006(b)-(c), also violate the First Amendment. Reyes, 2024 WL 4135626, at *16 n.169 (enjoining age-assurance requirement); Fitch, 2024 WL 3276409, at *11-12 (enjoining age-verification requirement); Griffin, 2023 WL 5660155, at *17 (same). All individuals, minors and adults alike, must comply with this age-assurance requirement—which would force them to hand over personal information or identification that many are unwilling or unable to provide—as a precondition to accessing and engaging in protected speech. Such requirements chill speech, in violation of the First Amendment. See, e.g., Ashcroft, 542 U.S. at 673; Reno, 521 U.S. at 882.
It also calls out that there’s an exemption for consumer review sites (good work, Yelp lobbyists!), which highlights how the law is targeting specific types of content, which is not allowed under the First Amendment.
California Attorney General Rob Bonta insisted in a statement to GovTech that there are no First Amendment problems with the law:
“SB976 does not regulate speech,” Bonta’s office said in an emailed statement. “The same companies that have committed tremendous resources to design, deploy, and market social media platforms custom-made to keep our kids’ eyes glued to the screen are now attempting to halt California’s efforts to make social media safer for children,” the statement added, saying the attorney general’s office would respond in court.
Except he said that about the Age Appropriate Design Code and lost in court. He said that about the Social Media Transparency bill and lost in court. He said that about the recent AI Deepfake law… and lost in court.
See a pattern?
It would be nice if Rob Bonta finally sat down with actual First Amendment lawyers and learned how the First Amendment worked. Perhaps he and Governor Newsom could take that class together so Newsom stops signing these bills into law?
Wouldn’t that be nice?
Posted on Techdirt - 18 November 2024 @ 09:32am
Elon Musk, Who Now Claims Boycotts Are Illegal, Happily Joined The #DeleteFacebook Boycott Himself
Elon Musk’s recent claims that corporate boycotts of social media platforms are criminal reek of hypocrisy, given his own eagerness to join the #DeleteFacebook boycott just a few years ago.
In the wake of the Cambridge Analytica scandal, Musk publicly supported the #DeleteFacebook campaign, even going so far as to remove the official SpaceX and Tesla pages from the platform. Yet now, as the owner of ExTwitter, he’s singing a very different tune — suing advertisers who choose to boycott his platform over content moderation concerns.
The blatant double standard is notable, if not surprising. Musk was happy to wield the power of the boycott when it suited his interests and let him mock his rival, Mark Zuckerberg. But now he condemns the tactic as criminal when turned against him. This “rules for thee, but not for me” attitude deserves to be called out even if he and his supporters will happily ignore the rank hypocrisy.
Earlier this year, Elon sued GARM — the “Global Alliance for Responsible Media” — a tiny non-profit that sought to advise brands on how to advertise safely on social media in a manner that (1) wouldn’t tarnish their own brands, and (2) was generally better for the world. GARM had no power and didn’t demand or order any company to do anything. It just worked with advertisers to try to establish some basic standards and to advocate that social media companies try to live up to those basic standards in how they handled moderation.
As we noted, just weeks before Elon sued GARM, ExTwitter had “excitedly” rejoined GARM, knowing that many advertisers trusted its opinion on determining where they should focus their ad spend.
But it seems clear that Elon felt differently. After a very misleading report was put out by Jim Jordan, Elon declared war on GARM and sued a bunch of advertisers. In response, GARM was shut down.
Musk and his friends are now going around saying that participating in an organized boycott of social media is criminal. Right around the time he sued, Musk suggested such a boycott might just be “RICO.”
And, as we just discussed, here’s Musk-backer and friend, Marc Andreessen, claiming that such boycotts are criminal.
However, as my cohost on Ctrl-Alt-Speech pointed out on the latest episode, Elon Musk himself was quite happy to support a similar boycott not all that long ago.
After the Cambridge Analytica scandal, in which Facebook data was used to try to influence voters to vote for Donald Trump (yes, this is ironic, given what Elon did with ExTwitter), some activists kicked off a boycott campaign called #DeleteFacebook.
Elon Musk showed some interest in the campaign, joking “What’s Facebook?” in response to a (now deleted) tweet about it. Some users then challenged him to join the #DeleteFacebook campaign by removing the SpaceX and Tesla accounts from Facebook, which he did.
As far as I can tell, to this day, there are no official, verified Tesla or SpaceX pages on Facebook.
Years later, after he had taken over Twitter, Elon even mocked Facebook for “caving” to the very boycott that he participated in himself.
Musk’s brazen hypocrisy on boycotts is just the latest example of his free speech double standard. He delights in wielding his immense power and influence to mock, criticize and yes, boycott those he disagrees with. But the moment anyone turns those same tactics against him, he cries foul and literally makes a federal case out of it.
This kind of self-serving double standard is corrosive to public discourse and the principles of free speech that Musk claims to hold so dear. While he and his supporters will almost certainly choose to ignore the stench of hypocrisy, the rest of us shouldn’t. Musk’s boycott hypocrisy deserves to be dragged out into the light again and again for everyone else to recognize.
Posted on Techdirt - 15 November 2024 @ 11:04am
What Free Speech? Trump Ramps Up Threats To Sue Publishers Over Their Speech
We just warned folks that Donald Trump would be one of the most anti-free speech Presidents in history, and he seems to have no qualms living down to that reputation.
Donald Trump’s history of frivolous lawsuits against media outlets shows his disdain for free speech, and he shows no signs of stopping. The Columbia Journalism Review has an article exploring a bunch of other legal threats Trump and those around him have been flinging at news and book publishers over their speech.
These threats are part of a disturbing pattern of Trump trying to silence and intimidate his critics:
The letter, addressed to lawyers at the New York Times and Penguin Random House, arrived a week before the election. Attached was a discursive ten-page legal threat from an attorney for Donald Trump that demanded $10 billion in damages over “false and defamatory statements” contained in articles by Peter Baker, Michael S. Schmidt, Susanne Craig, and Russ Buettner.
It singles out two stories coauthored by Buettner and Craig that related to their book on Trump and his financial dealings, Lucky Loser: How Donald Trump Squandered His Father’s Fortune and Created the Illusion of Success, released on September 17. It also highlighted an October 20 story headlined “For Trump, a Lifetime of Scandals Heads Toward a Moment of Judgment” by Baker and an October 22 piece by Schmidt, “As Election Nears, Kelly Warns Trump Would Rule Like a Dictator.”
“There was a time, long ago, when the New York Times was considered the ‘newspaper of record,’” the letter, a copy of which was reviewed by CJR, reads. “Those halcyon days have passed.” It accuses the Times of being “a full-throated mouthpiece of the Democratic Party” that employs “industrial-scale libel against political opponents.”
Of course, none of this is new. Donald Trump has a long history of threatening and suing news organizations over their factual reporting. The point isn’t winning; many of these lawsuits eventually get tossed out of court. The real goal is to harass and punish media outlets for daring to criticize or investigate him.
Even when these lawsuits are eventually dismissed, the process is the punishment. News organizations are forced to divert time and money to defending against frivolous claims, while journalists may think twice about pursuing tough stories out of fear of ending up in court. It’s an insidious form of soft censorship that undermines the media’s vital watchdog role.
This is especially galling given how frequently I saw people say that in the election they supported Donald Trump because “he stood for free speech” while simultaneously claiming that Kamala Harris “wanted censorship.” This was a key line that JD Vance used, without ever backing it up, because it wasn’t ever true.
Harris hasn’t sued the media for critical reporting. Trump has, over and over and over again and continues to threaten more such lawsuits.
Free speech actually means something, and the idea that Trump supports it is laughable. But, of course, his fans won’t care because they don’t actually care about free speech. That was just a convenient excuse. They’re happy to support speech suppression lawfare when they see it aimed at their perceived “enemies” in the media.
And all of this is why we need a federal anti-SLAPP law, but it seems quite unlikely Donald Trump will sign one while he’s the President.
Posted on Techdirt - 14 November 2024 @ 10:56am
You Don’t Believe In Free Markets And Free Speech If You’re Demanding Criminal Charges Against People For Their Free Market, Free Speech Decisions
Marc Andreessen, the influential venture capitalist, is exhibiting a startling disconnect between his stated beliefs in free markets and free speech, and his recent authoritarian threats against those who disagree with him.
In recent statements, Andreessen has threatened criminal charges against advertisers choosing not to associate with certain platforms and accused an imaginary “government-university-company censorship apparatus” of violating free speech rights. These authoritarian demands completely contradict the free market and free speech principles Andreessen claims to champion in his “techno-optimist manifesto.”
As a board member of Meta with inside knowledge of content moderation practices, Andreessen should know better. His descent into promoting baseless conspiracy theories and attacking the very rights he purports to defend is deeply troubling.
Over the last few years, Andreessen’s views on innovation have taken him down a path that often seems detached from reality. It started with him claiming that Elon Musk is “pro free speech,” when it was blatantly obvious that he was not even remotely supportive of free speech.
Things got worse last year when Andreessen published his bizarre “Techno-Optimist Manifesto,” which had plenty of good, but non-controversial, ideas in it, and then a few that made no sense, including claiming that “trust & safety” was an “enemy of progress.” I wrote a long response to it as my final post of last year, noting that avoiding breaking shit that doesn’t need to be broken (the role of “trust & safety”) isn’t holding back progress, it’s making sure that innovation and progress proceeds in a way that more people are willing to adopt rather than freak out about.
Earlier this year, Andreessen made a big bet on Donald Trump for President, and now he’s won that bet. He claimed he only supported Trump because he believed Trump was better for what he calls his “Little Tech Agenda.” Historically, I would have expected him not to leap to the gloating stage so quickly, but I was wrong. He’s spent a few days basically showing that his supposed “Little Tech Agenda” gets tossed out the window when he gets near the levers of power, to the point that he is looking to weaponize the criminal justice system to punish his perceived critics.
Andreessen seems to ignore that his plan to punish people completely obliterates what he claimed he believed in his “tech optimist manifesto.” So let’s go through a bit of it. In the manifesto, he writes:
We believe free markets are the most effective way to organize a technological economy. Willing buyer meets willing seller, a price is struck, both sides benefit from the exchange or it doesn’t happen. Profits are the incentive for producing supply that fulfills demand. Prices encode information about supply and demand. Markets cause entrepreneurs to seek out high prices as a signal of opportunity to create new wealth by driving those prices down.
I agree with that sentiment. But Andreessen’s recent actions contradict it. On ExTwitter, Andreessen wrote:
The orchestrated advertiser boycott against X and popular podcasts must end immediately. Conspiracy in restraint of trade is a prosecutable crime.
He’s wrong. To an extraordinary degree. Conspiracy in restraint of trade applies to collusive behavior to harm competitors. In 1982, the Supreme Court made clear that boycotts are a form of expression, protected by the First Amendment. The only exceptions are if those boycotts were done for “illegal aims.” And “sorry, we don’t want to advertise on your site” is not an “illegal aim.”
It’s especially galling since choosing not to advertise is clearly part of both the free market and the free speech right not to associate. The only cases where boycotts may be illegal are those in pursuit of something illegal, such as an anticompetitive scheme, like when Toys R Us used its (then!) dominant position to block toy makers from selling to Costco. But advertisers deciding “we don’t want our ads showing up on Elon Musk’s Hellsite” are making a business decision.
You know, like what free markets enable? Willing buyer. Willing seller. Except here, some of the buyers aren’t willing. And Marc is claiming that’s criminal.
His misunderstanding of free speech continued.
He wrote:
Everyone involved in the longstanding illegal joint government-university-company censorship apparatus should take care to preserve their files and communications. Sunlight is coming.
First of all, you don’t issue litigation holds by tweet. That’s not how any of that works. Second, there is no “joint government-university-company censorship apparatus.” That’s literally not a thing that exists. We’ve talked about this quite a bit here at Techdirt, and even the Supreme Court just recently pointed out (in a ruling written by Amy Coney Barrett) that there appears to be no evidence of such a thing existing (other than a bunch of made up nonsense by a bunch of grifters).
There were a bunch of university researchers studying the flow of disinformation, mostly around voter intimidation and the like. One government agency, CISA, did team up with some of those researchers to act as a clearing house for connecting election officials who might see potentially problematic voter information, such as false information about where, how, or when to vote. Through this effort sometimes that information would be flagged to companies to review against their own policies.
This was nothing controversial or problematic. Every company has their own policies on what they allow. Most companies don’t want to enable election interference, so they say “hey, maybe we shouldn’t allow information that tells people to vote on the wrong day, because maybe that violates our rules.”
As we’ve explained multiple times, even as these researchers flagged some content for the companies to review, the companies quite frequently did nothing in response and there were no threats or legal consequences as a result. Flagging is something anyone can do (still, to this day, if you find something that you think violates the rules on any social media platform, you can flag it, just like these researchers did).
Again, Stanford’s report on what happened stated that the social media companies kept up nearly every reported URL, and in the small number of cases when they took action, they mostly focused on adding more speech (which is a very “marketplace of ideas” concept) such as pointing out that mail-in ballots are, in fact, pretty damn safe and secure.
We find, overall, that platforms took action on 35% of URLs that we reported to them. 21% of URLs were labeled, 13% were removed, and 1% were soft blocked. No action was taken on 65%. TikTok had the highest action rate: actioning (in their case, their only action was removing) 64% of URLs that the EIP reported to their team.
I need to repeat this because it seems to keep getting lost every time I write about this. Anyone can report things to social media companies. It’s the “report” button you see all over the place. These academic researchers did report stuff to the companies, and only 13% of that content was removed. And even that number is skewed, because TikTok removed 64% of the URLs reported to it, apparently because TikTok just doesn’t care. So the reality is that the other companies (mainly Facebook, Instagram, and Twitter) removed less than 10% of what was flagged. Some of them they “labeled,” which is just “more speech” in the marketplace of ideas. And the rest they left alone.
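To put rough numbers on that “less than 10%” point: the EIP report quoted above gives the overall removal rate (13%) and TikTok’s rate (64%), but not TikTok’s share of the reported URLs, so here is a quick back-of-the-envelope sketch using purely hypothetical shares to show how much a single platform drags the blended number up.

```python
# Rough sanity check of the "under 10% for the other platforms" claim, using the
# EIP figures quoted above: 13% of reported URLs removed overall, and TikTok
# removing 64% of the URLs reported to it. TikTok's share of all reported URLs
# is NOT given in the post, so the shares below are purely hypothetical.
overall_removed = 0.13
tiktok_removed = 0.64

for tiktok_share in (0.05, 0.10, 0.20):
    others_removed = (overall_removed - tiktok_removed * tiktok_share) / (1 - tiktok_share)
    print(f"TikTok share {tiktok_share:.0%} -> other platforms removed {others_removed:.1%}")

# e.g. if TikTok accounted for 10% of reported URLs, the remaining platforms
# removed roughly 7% of what was reported to them; anything above a ~6% TikTok
# share already puts the others below 10%.
```

Whatever the exact split, the underlying point stands: the aggregate 13% is pulled up substantially by a single platform.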
And even if you claim that removing 13% of reported links is too much, the details suggest you’re wrong about that as well. The report showed that the largest share of the removed content was related to phishing scams. In other words, people posted election-related content designed to trick users into giving up their personal info, it was reported to the companies, and they removed it to protect users.
This is not a censorship scandal. This isn’t a “joint government-university-company censorship apparatus.” This is “local election officials were scared about scams and election interference, and wanted to be able to report it to companies to review, and some academics who were studying disinformation helped.”
Sounds a lot less problematic that way, right?
And here’s the thing: Marc Andreessen knows all this.
Because Marc Andreessen, who claims he does everything in support of his “Little Tech Agenda,” is on the board of Meta, one of the biggest “Big Tech” companies there is. And, over the last few days, I’ve spoken to way too many current and former executives at Meta (many of whom are frustrated), who all made it clear to me that in Marc’s role on the board he has been directly briefed on what is happening regarding disinfo/trust & safety efforts and why it’s not nefarious.
I don’t know if Marc ignored those briefings.
I don’t know if he forgot those briefings.
I don’t know if he doesn’t care that he’s misrepresenting reality.
But I do know that Meta execs are not particularly thrilled that he’s now spreading a nonsense conspiracy theory suggesting that the very company he is on the board of is somehow engaged in First Amendment-violating state action, a claim that every court that has fully looked at this issue has rejected completely.
Because this is not a thing.
Andreessen went on to claim that “every participant in the orchestrated government-university-nonprofit-company censorship machine of the last decade can be charged criminally under one or both of these federal laws.” The “federal laws” he’s talking about are 18 USC 241 and 18 USC 242: “Conspiracy against rights” and “Deprivation of rights.”
These are both laughable claims.
Your “rights” do not grant you the freedom to use someone else’s private property for your own purposes. Again, you would think that Mr. “we believe in free markets,” “willing buyer meets willing seller,” and “I’m on the board of Meta” would at some point realize that private property rights are a core part of that free market. If the private property owner doesn’t want you on their property, they can get you to leave.
Even if we’re just looking at this through the free speech lens, the right of free speech has to include the right of association, and that includes the right not to associate.
Marc must believe that too, because I’m pretty sure that if I showed up at one of Marc’s many mansions and started screaming on his lawn, he would have me forcibly removed. That would be his right as a private property owner. Is that “depriving me of my free speech rights”? Of course not, because Marc has no obligation to allow me to speak on his property.
The same is true of Meta, on whose board Marc sits. It has no obligation to enable anyone’s speech. Their property. Their rules. And yes, it’s true that the government can’t forcibly remove speech, but there’s no evidence that happened. Instead, you had some academics and non-profits who used their own free speech rights (which Marc seems to think don’t exist) to share their thoughts with the companies, sometimes highlighting content they thought broke the company’s rules, or sometimes advocating for different rules. Which is their free speech.
The only way speech “rights” can be deprived is via state action, which has to involve the government. And yes, Marc wants to keep arguing that the government is involved in this “government-university-nonprofit-company censorship machine” but as the Supreme Court noted just a few months ago, what is happening does not, in any way, appear to be state action to deprive people of their rights.
At worst, government actors were trying to persuade private actors to act differently, which is allowed. The problem only comes in when the government tries to force action through threats and coercion. Yet no one has turned up any evidence of that.
Unlike Marc, the Supreme Court appears to be adept at differentiating between cases involving potential government coercion. In the Murthy case, the majority opinion authored by Justice Barrett found no evidence of coercion against social media companies. And the plaintiffs in that case tried every angle they could and threw a ton of ideas against the wall. Conversely, in the Vullo case, heard on the same day, the Court unanimously agreed that a New York official’s demand for insurance companies to deny coverage to the NRA constituted coercion and violated the First Amendment.
In short, the Supreme Court knows when the government is depriving people of their free speech rights and didn’t see that (at all) in how social media companies do content moderation.
Again, I know that plenty of internet randos and highly motivated partisans have been misrepresenting this reality for a few years now. And it’s pointless to respond to them.
But, of all the people in the world, Marc Andreessen should know what’s actually going on. Multiple Meta execs told me that he’s been told about it. Yet he’s making a mockery of his own “manifesto” by supporting the literal criminalization of being a “non-willing buyer” in a marketplace where he has a stake. He undermines his own claims of supporting free speech by suggesting it’s criminal for private property owners to decide whose speech to associate with. He’s further contradicting his free speech stance by threatening criminal action against academics and non-profits who use their free speech to criticize companies where Marc Andreessen is either an equity holder (ExTwitter) or a board member (Meta).
And that’s not “techno optimism.” It’s certainly not a “little tech agenda.” If you can force companies to do the bidding of the biggest tech companies out there, while simultaneously threatening criminal charges for merely saying “hey, does this violate your rules?” you’re creating a world in which startups will be loath to do business, out of fear of what arbitrary nonsense the Marc Andreessen/Elon Musk/Donald Trumps of the world will impose on them.
At Techdirt, we frequently call out hypocrisy and inconsistencies from public figures. In many cases, these stem from ignorance or misunderstanding of the complex issues around technology and policy. What’s so troubling here is that Marc is not ignorant of what has happened. He has had these things explained to him. Yet he is misrepresenting them to a very large audience, riling them up to believe things that are simply not true. This goes beyond mere inconsistency — it’s a direct distortion of reality from someone who should, and likely does, know better.
The end result may be that he gets to punish his perceived enemies if the Trump administration is willing to take such marching orders, but it doesn’t change the fact that he is misrepresenting reality. In doing so, he’s not just violating his own stated principles, but undermining public understanding of critical issues around free speech and platform responsibility. For someone who claims to be a “techno-optimist,” that’s a deeply pessimistic and damaging approach.
Posted on Techdirt - 13 November 2024 @ 11:04am
Judge: Just Because AI Trains On Your Publication, Doesn’t Mean It Infringes On Your Copyright
I get that a lot of people don’t like the big AI companies and how they scrape the web. But these copyright lawsuits being filed against them are absolute garbage. And you want that to be the case, because if it goes the other way, it will do real damage to the open web by further entrenching the largest companies. If you don’t like the AI companies, find another path, because copyright is not the answer.
So far, we’ve seen that these cases aren’t doing all that well, though many are still ongoing.
Last week, a judge tossed out one of the early ones against OpenAI, brought by Raw Story and Alternet.
Part of the problem is that these lawsuits assume, incorrectly, that these AI services really are, as some people falsely call them, “plagiarism machines.” The assumption is that they’re just copying everything and then handing out snippets of it.
But that’s not how it works. It is much more akin to reading all these works and then being able to make suggestions based on an understanding of how similar things kinda look, though from memory, not from having access to the originals.
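A deliberately tiny toy model makes the point, even though real systems like ChatGPT are vastly more sophisticated. This sketch (the “training” sentences are invented) learns only word-to-word statistics from its inputs and then generates from those statistics; at generation time nothing is looked up from a stored copy of the originals.

```python
# Toy word-level bigram "language model": train() learns how often each word
# follows another; generate() then samples from those learned statistics.
# This is a crude illustration of the training-vs-copying distinction, not a
# description of how GPT-class models actually work internally.
import random
from collections import defaultdict, Counter

def train(texts):
    counts = defaultdict(Counter)
    for text in texts:
        words = text.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts  # learned statistics, not copies of the articles

def generate(counts, start, max_words=15):
    word, output = start, [start]
    for _ in range(max_words):
        followers = counts.get(word)
        if not followers:
            break
        # pick the next word in proportion to how often it followed `word` in training
        word = random.choices(list(followers), weights=list(followers.values()))[0]
        output.append(word)
    return " ".join(output)

# Hypothetical "training data" standing in for scraped articles
articles = [
    "the court dismissed the copyright claims against the company",
    "the company said the copyright claims were meritless",
]
model = train(articles)
print(generate(model, "the"))
```

Real models can, in edge cases, memorize passages, but that is the exception rather than the mechanism, which is essentially the point Judge McMahon makes below about how remote the odds are of the system spitting back any one publisher’s article.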
Some of this case focused on whether or not OpenAI removed copyright management information (CMI) from the works that they were being trained on. This always felt like an extreme long shot, and the court finds Raw Story’s arguments wholly unconvincing in part because they don’t show any work that OpenAI distributed without their copyright management info.
For one thing, Plaintiffs are wrong that Section 1202 “grant[s] the copyright owner the sole prerogative to decide how future iterations of the work may differ from the version the owner published.” Other provisions of the Copyright Act afford such protections, see 17 U.S.C. § 106, but not Section 1202. Section 1202 protects copyright owners from specified interferences with the integrity of a work’s CMI. In other words, Defendants may, absent permission, reproduce or even create derivatives of Plaintiffs’ works - without incurring liability under Section 1202 - as long as Defendants keep Plaintiffs’ CMI intact. Indeed, the legislative history of the DMCA indicates that the Act’s purpose was not to guard against property-based injury. Rather, it was to “ensure the integrity of the electronic marketplace by preventing fraud and misinformation,” and to bring the United States into compliance with its obligations to do so under the World Intellectual Property Organization (WIPO) Copyright Treaty, art. 12(1) (“Obligations concerning Rights Management Information”) and WIPO Performances and Phonograms Treaty….
Moreover, I am not convinced that the mere removal of identifying information from a copyrighted work - absent dissemination - has any historical or common-law analogue.
Then there’s the bigger point, which is that the judge, Colleen McMahon, has a better understanding of how ChatGPT works than the plaintiffs and notes that just because ChatGPT was trained on pretty much the entire internet, that doesn’t mean it’s going to infringe on Raw Story’s copyright:
Plaintiffs allege that ChatGPT has been trained on “a scrape of most of the internet,” Compl. ¶ 29, which includes massive amounts of information from innumerable sources on almost any given subject. Plaintiffs have nowhere alleged that the information in their articles is copyrighted, nor could they do so. When a user inputs a question into ChatGPT, ChatGPT synthesizes the relevant information in its repository into an answer. Given the quantity of information contained in the repository, the likelihood that ChatGPT would output plagiarized content from one of Plaintiffs’ articles seems remote.
Finally, the judge basically says, “Look, I get it, you’re upset that ChatGPT read your stuff, but you don’t have an actual legal claim here.”
Let us be clear about what is really at stake here. The alleged injury for which Plaintiffs truly seek redress is not the exclusion of CMI from Defendants’ training sets, but rather Defendants’ use of Plaintiffs’ articles to develop ChatGPT without compensation to Plaintiffs. See Compl. ¶ 57 (“The OpenAI Defendants have acknowledged that use of copyright-protected works to train ChatGPT requires a license to that content, and in some instances, have entered licensing agreements with large copyright owners … They are also in licensing talks with other copyright owners in the news industry, but have offered no compensation to Plaintiffs.”). Whether or not that type of injury satisfies the injury-in-fact requirement, it is not the type of harm that has been “elevated” by Section 1202(b)(i) of the DMCA. See Spokeo, 578 U.S. at 341 (Congress may “elevate to the status of legally cognizable injuries, de facto injuries that were previously inadequate in law.”). Whether there is another statute or legal theory that does elevate this type of harm remains to be seen. But that question is not before the Court today.
While the judge dismisses the case without prejudice and says they can try again, it would appear that she is skeptical they could do so with any reasonable chance of success:
In the event of dismissal Plaintiffs seek leave to file an amended complaint. I cannot ascertain whether amendment would be futile without seeing a proposed amended pleading. I am skeptical about Plaintiffs’ ability to allege a cognizable injury but, at least as to injunctive relief, I am prepared to consider an amended pleading.
I totally get why publishers are annoyed and why they keep suing. But copyright is the wrong tool for the job. Hopefully, more courts will make this clear and we can get past all of these lawsuits.
Posted on Techdirt - 12 November 2024 @ 03:11pm
Lies, Panic, And Politics: The Targeted Takedown Of Backpage
The case against Backpage was built on lies, innuendo, and a willful misunderstanding of how the internet works. But that didn’t stop the government from destroying the company and its founders’ lives.
Over the last few years, we’ve written about how the entire case against Backpage was a travesty of justice. The company actually worked closely with the feds (and even received commendations) to stop any actual human trafficking on its platform, but refused to help the feds go after consenting adult sex work. After that refusal, a bunch of government actors turned on the company and falsely painted it as knowingly helping sex trafficking.
A number of different criminal and civil cases were brought against the company and its owners, one of whom, Jim Larkin, died by suicide last year. Over and over again, politicians and the media painted Backpage as being a truly evil player in the space. However, the more you looked at the details, the more it seemed like they were convenient political scapegoats in a war against a free internet.
So much of our own coverage was building on the incredible work of Reason’s Elizabeth Nolan Brown. She recently completed a new 45-minute documentary about the railroading of Backpage and its founders, which you can watch on YouTube:
There’s also a big article (at the link above) that discusses much of what’s in the video and is well worth reading. Here’s just a snippet:
In 2004, Lacey and Larkin launched the website Backpage as an extension of the classified ads that had always run in the back of their newspapers (and most other newspapers). Backpage.com had all the sections you would find in its print counterparts, including apartments for rent, job openings, and personals and adult services ads, separated by city.
At first, nobody paid much attention to the site. When attorneys general declared war on adult services ads online in the late 2000s, the similar but better-known Craigslist was the platform in their crosshairs.
Though newspapers had for decades published ads for escorts, phone sex lines, and other forms of legal sex work, Craigslist’s online facilitation of these ads coincided with two burgeoning moral panics. The first concerned the rise of user-generated content—platforms such as Craigslist and early social media entities that allowed speech to be published without traditional gatekeepers.
The second panic: sex trafficking. A coalition of Christian activists and radical feminists had been teaming up to push the idea that levels of forced and underage prostitution were suddenly reaching epidemic proportions. To support this narrative, they tended to conflate all prostitution or even any sort of sex work with coerced sex trafficking.
I do appear a bit in the documentary, though most of it focuses (more importantly) on founder Michael Lacey and all he’s been through.
The whole video is worth watching. It shows how an entire government apparatus can be turned around to try to burn down a media operation based on misleading claims of “sex trafficking,” leaving people and a business completely ruined on the basis of innuendo and rumors.
Learning from what happened to Backpage is that much more important, especially as we’re now entering a new era of attacks on free expression.
Posted on Techdirt - 12 November 2024 @ 09:21am
Judge To Zuckerman: Release Your App First, Then We’ll Talk Section 230
The first attempt to use Section 230 to force adversarial interoperability on platforms has hit a setback.
Earlier this year, we wrote about an absolutely fascinating lawsuit that was an attempt to activate a mostly-ignored part of Section 230 in a really interesting way. Most people know about Section 230 for its immunity protections for hosting and content moderation of third party content. But subsection (c)(2)(B) almost never warrants a mention. It says this:
No provider or user of an interactive computer service shall be held liable on account of any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1)
This part of the law almost never comes up in litigation, but Ethan Zuckerman, who has spent years trying to inspire a better internet (partly as penance for creating the pop-up ad), along with the Knight First Amendment Institute at Columbia, tried to argue that this section means that a platform, like Meta, can’t threaten legal retaliation against developers who are offering third party “middleware” apps that work on top of a platform to offer solutions that “restrict access to material” on a platform.
The underlying issue in the lawsuit was that Ethan wanted to release a plugin called “Unfollow Everything 2.0” based on an earlier plugin called “Unfollow Everything,” which allowed Facebook users to, well, unfollow everything. This earlier plugin was created by developer Louis Barclay, after he found it useful personally to just unfollow everyone on his Facebook account (not unfriend them, just unfollow them). Meta banned Barclay for life from the site, and also threatened legal action against him.
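For a sense of how technically simple such a tool can be, here is a rough, purely hypothetical sketch of the general idea using browser automation. Barclay’s actual tool was a browser extension, and the URL and selector below are invented placeholders, not Facebook’s real markup.

```python
# Hypothetical sketch of an "unfollow everything" middleware tool: drive a browser
# session and click every unfollow control it can find. The URL and CSS selector
# are invented placeholders; this is NOT how Barclay's extension was implemented.
import time
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()  # in practice you'd load an already-logged-in profile
driver.get("https://www.facebook.com/example/feed_preferences")  # placeholder URL

unfollowed = 0
while True:
    # Placeholder selector for whatever element the real page uses for unfollowing
    buttons = driver.find_elements(By.CSS_SELECTOR, "[aria-label='Unfollow']")
    if not buttons:
        break
    for button in buttons:
        button.click()
        unfollowed += 1
        time.sleep(1)  # pace the clicks roughly like a human user
    driver.refresh()  # reload to pick up the next batch

driver.quit()
print(f"Unfollowed {unfollowed} friends, pages, and groups")
```

The point of the sketch is how unremarkable this is as engineering: the user’s own browser, acting on the user’s own account, hiding content the user no longer wants to see. It’s the legal threats, not the technology, doing the work of keeping such tools from existing.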
In the last few years, it’s unfortunately become common for the big platforms to legally threaten any service that tries to build tools on top of their platforms without first getting permission or signing some sort of agreement to access an API.
These legal threats have wiped out the ability to build tools for other platforms without permission. They’ve also very much gotten in the way of important “adversarial interoperability” tools and services that history has shown have been vital to innovation and competition.
So the argument from Zuckerman is that this little snippet from Section 230 says that he can’t face legal liability for his tool. Meta could still take technical actions to try to break or block his app, but they couldn’t threaten him with legal actions.
Meta’s response to all of this was that the court should reject Zuckerman’s case because the specifics of the app matter, and until he’s released the app, there’s no way to actually review this issue.
The Court should decline Plaintiff’s request to invoke this Court’s limited jurisdiction to issue an advisory opinion about a non-existent tool. Plaintiff’s claims—which are contingent on facts that cannot be known until after he has created and released Unfollow Everything 2.0 and Meta has had an opportunity to evaluate how the tool actually works—are not ripe for review under either Article III of the Constitution or the Declaratory Judgment Act, 28 U.S.C. § 2201.
It appears that the judge in the case, Judge Jacqueline Scott Corley, found that argument persuasive. After a hearing in court last Thursday, the judge dismissed the case, saying that Zuckerman could conceivably refile once the app is released. While a written opinion is apparently coming soon, this is based on what happened in the courtroom:
Judge Jacqueline Scott Corley of the U.S. District Court for the Northern District of California granted Meta’s request to dismiss the lawsuit on Thursday, according to court records. The judge said Mr. Zuckerman could refile the lawsuit at a later date.
This is perhaps not surprising, but it’s still not good. It’s pretty obvious what would happen if Zuckerman were to release his app because we already know what happened to Barclay, including the direct threats to sue him.
So, basically, the only way for Zuckerman to move forward here is to put himself at great risk of facing a lawsuit from one of the largest companies in the world, with a building full of lawyers. The chilling effects of this situation should be obvious.
I don’t know what happens next. I imagine Zuckerman can appeal to the Ninth Circuit, or he could actually try to release the app and see what happens.
But seeing as how the big platforms have spent over a decade abusing legal threats against companies that are just trying to help build products on top of those platforms, it would have been nice to have received a clean win that such “middleware” apps can’t be blocked through legal intimidation. Unfortunately, we’re not there yet.
Posted on Techdirt - 8 November 2024 @ 03:46pm
Ctrl-Alt-Speech: Presidents & Precedents
Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.
Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.
In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:
- Pennsylvania Becomes Hot Spot for Election Disinformation (NY Times)
- After Trump Took the Lead, Election Deniers Went Suddenly Silent (NY Times)
- X Is a White-Supremacist Site (The Atlantic)
- Papers, Please? The Republican Plan to Wall Off the Internet (Tech Policy Press)
- What Trump’s Victory Means for Internet Policy (CNET)
- The government plans to ban under-16s from social media platforms. Here’s what we know so far (ABC Australia)
- Canada orders shutdown of TikTok’s Canadian business, app access to continue (Reuters)
This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.
Posted on Techdirt - 8 November 2024 @ 11:15am
Fifth Circuit: You Have To Do A Ton Of Busywork To Show Texas’s Social Media Law Violates The First Amendment
If the government passes a law that infringes on the public’s free speech rights, how should one challenge the law?
As recent events have shown, the answer is more complex than many realized.
A few years ago, both Texas and Florida passed “social media content moderation” laws, which would limit how social media platforms could engage in any kind of moderation while simultaneously demanding they explain their editorial decision-making. The laws were then challenged as unconstitutional under the First Amendment.
While three out of the four lower courts (two district courts and one of the two appeals courts) that heard the challenges found it to be patently obvious that the laws were unconstitutional incursions on free speech, the Supreme Court took a different approach to the cases. The Supreme Court effectively punted on the issue, while giving some clues about how the First Amendment should apply.
Specifically, the Supreme Court sent the challenges of both laws back to the lower courts, saying that since both challenges — brought by the trade groups NetChoice and CCIA — were presented as “facial challenges,” it required a different analysis than any of the lower courts had engaged in.
A “facial challenge” is one where the plaintiffs are saying, “yo, this entire law is clearly unconstitutional.” An alternative approach would be an “as applied challenge,” in which case you effectively have to wait until one of the states tries to use the law against a social media platform. Then you can respond and say “see? this violates my rights and therefore is unconstitutional!”
The Supreme Court said that if something is a facial challenge, then the courts must first do a convoluted analysis of every possible way the law could be applied to see if there are some applications of the law that might be constitutional.
That said, the Supreme Court’s majority opinion still took the Fifth Circuit to task, highlighting how totally blinkered and disconnected from the clear meaning and historical precedents its analysis of the First Amendment was. Over and over again, the Supreme Court dinged Texas’ law as pretty obviously unconstitutional. Here’s just one snippet of many:
They cannot prohibit private actors from expressing certain views. When Texas uses that language, it is to say what private actors cannot do: They cannot decide for themselves what views to convey. The innocent-sounding phrase does not redeem the prohibited goal. The reason Texas is regulating the content moderation policies that the major platforms use for their feeds is to change the speech that will be displayed there. Texas does not like the way those platforms are selecting and moderating content, and wants them to create a different expressive product, communicating different values and priorities. But under the First Amendment, that is a preference Texas may not impose.
Indeed, the Supreme Court noted that it can already see that the Fifth Circuit is on the wrong track, even as it was sending the case back over the procedural issues required for a facial challenge:
But there has been enough litigation already to know that the Fifth Circuit, if it stayed the course, would get wrong at least one significant input into the facial analysis. The parties treated Facebook’s News Feed and YouTube’s homepage as the heartland applications of the Texas law. At least on the current record, the editorial judgments influencing the content of those feeds are, contrary to the Fifth Circuit’s view, protected expressive activity. And Texas may not interfere with those judgments simply because it would prefer a different mix of messages. How that matters for the requisite facial analysis is for the Fifth Circuit to decide. But it should conduct that analysis in keeping with two First Amendment precepts. First, presenting a curated and “edited compilation of [third party] speech” is itself protected speech. Hurley, 515 U. S., at 570. And second, a State “cannot advance some points of view by burdening the expression of others.” PG&E, 475 U. S., at 20. To give government that power is to enable it to control the expression of ideas, promoting those it favors and suppressing those it does not. And that is what the First Amendment protects all of us from.
But, either way, the case has gone back to the Fifth Circuit, and it is now sending the case back to the lower court, with the instructions that the trade groups are going to have to argue every single point as to why the law should be considered unconstitutional.
As the Supreme Court recognized, it is impossible to apply that standard here because “the record is underdeveloped.” Id. at 2399. Who is covered by Texas House Bill 20 (“H.B. 20”)? For these actors, which activities are covered by H.B. 20? For these covered activities, how do the covered actors moderate content? And how much does requiring each covered actor to explain its content-moderation decisions burden its expression? Because these are fact-intensive questions that must be answered by the district court in the first instance after thorough discovery, we remand.
So, basically, get ready for a ridiculously long and involved process for challenging the law. The Fifth Circuit also takes a swipe at the district court along the way:
A proper First Amendment facial challenge proceeds in two steps. The “first step” is to determine every hypothetical application of the challenged law. Id. at 2398 (majority opinion). The second step is “to decide which of the law[’s] applications violate the First Amendment, and to measure them against the rest.” Ibid. If the “law’s unconstitutional applications substantially outweigh its constitutional ones,” then and only then is the law facially unconstitutional. Id. at 2397. “[T]he record” in this case “is underdeveloped” on both fronts. See id. at 2399; see also id. at 2410–11 (Barrett, J., concurring) (noting the record failed to “thoroughly expose[] the relevant facts about particular social-media platforms and functions”); id. at 2411 (Jackson, J., concurring in part and concurring in the judgment) (noting plaintiffs failed to show “how the regulated activities actually function”); id. at 2412 (Thomas, J., concurring in the judgment) (noting plaintiffs “failed to provide many of the basic facts necessary to evaluate their challenges to H.B. 20”); id. at 2422 (Alito, J., concurring in the judgment) (noting the “incompleteness of this record”). That is a consequence of how this case was litigated in district court
There is plenty of busywork for all involved:
There is serious need of factual development at the second step of the analysis as well. To determine if any given application of H.B. 20’s “content-moderation provisions” is unconstitutional, the district court must determine “whether there is an intrusion on protected editorial discretion.” Id. at 2398 (citation omitted). That requires a detailed understanding of how each covered actor moderates content on each covered platform. See id. at 2437 (Alito, J., concurring in the judgment) (“Without more information about how regulated platforms moderate content, it is not possible to determine whether these laws lack a plainly legitimate sweep.” (quotation omitted)). Focusing primarily on Facebook’s News Feed or YouTube’s homepage will not suffice, as “[c]urating a feed and transmitting direct messages,” for example, likely “involve different levels of editorial choice, so that the one creates an expressive product and the other does not.” Id. at 2398 (majority opinion).
Moreover, one of the principal factual deficiencies in the current record, according to the Supreme Court, concerns the algorithms used by plaintiffs’ members. See, e.g., id. at 2404 n.5; id. at 2410–11 (Barrett, J., concurring); id. at 2424, 2427, 2436–38 (Alito, J., concurring in the judgment). It matters, for example, if an algorithm “respond[s] solely to how users act online,” or if the algorithm incorporates “a wealth of user-agnostic judgments” about the kinds of speech it wants to promote. Id. at 2404 n.5 (majority opinion); see also id. at 2410 (Barrett, J., concurring). And this is only one example of how the “precise technical nature of the computer files at issue” in each covered platform’s algorithm might change the constitutional analysis. ROA.539 (quotation omitted). It also bears emphasizing that the same covered actor might use a different algorithm (or use the same algorithm differently) on different covered services. For example, it might be true that X is a covered actor and that both its “For You” feed and its “Following” feed are covered services. But it might also be true that X moderates content differently or that its algorithms otherwise operate differently across those two feeds. That is why the district court must carefully consider how each covered actor moderates content on each covered service.
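To make the distinction the court is gesturing at a bit more concrete, here is a minimal, entirely hypothetical sketch (every field name and weight is invented) of the difference between a feed score that responds only to the user’s own behavior and one that also bakes in the platform’s user-agnostic editorial judgments:

```python
# Hypothetical illustration of the two kinds of ranking the courts describe:
# one score driven solely by the individual user's behavior, and one that adds
# the platform's own "user-agnostic" judgments about which speech to promote
# or demote. All names, topics, and weights here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    topic: str
    author: str

@dataclass
class User:
    followed_authors: set
    topic_clicks: dict  # topic -> past click count

def engagement_only_score(post: Post, user: User) -> float:
    # Responds solely to how this particular user acts online.
    return (2.0 * (post.author in user.followed_authors)
            + 1.0 * user.topic_clicks.get(post.topic, 0))

# The platform's own editorial priorities, applied to everyone the same way.
EDITORIAL_BOOST = {"civic-info": 3.0}
EDITORIAL_PENALTY = {"borderline": 5.0}

def editorial_score(post: Post, user: User) -> float:
    score = engagement_only_score(post, user)
    score += EDITORIAL_BOOST.get(post.topic, 0.0)   # promote some kinds of speech
    score -= EDITORIAL_PENALTY.get(post.topic, 0.0) # demote others
    return score

user = User(followed_authors={"alice"}, topic_clicks={"sports": 4})
post = Post(topic="civic-info", author="bob")
print(engagement_only_score(post, user), editorial_score(post, user))
```

Under the Supreme Court’s framing, the second function looks a lot more like protected editorial judgment than the first, which is exactly why the record now has to spell out how each feed on each platform actually works.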
Separately, there’s the question about the transparency and explanatory parts of the law. Incredibly, the ruling says that the lower court has to explore whether or not being required to explain your editorial decisions is a First Amendment-violating burden:
When performing the second step of the analysis, the district court must separately consider H.B. 20’s individualized-explanation provisions. As the Supreme Court has instructed, that requires “asking, again as to each thing covered, whether the required disclosures unduly burden expression.” Moody, 144 S. Ct. at 2398 (majority opinion). The first issue to address here is the same one addressed above: whether each covered actor on each covered platform is even engaging in expressive activity at all when it makes content-moderation decisions. See id. at 2399 n.3 (explaining that these provisions “violate the First Amendment” only “if they unduly burden expressive activity” (emphasis added)). Then for each covered platform engaging in expressive activity, the district court must assess how much the requirement to explain that platform’s content-moderation decisions burdens the actor’s expression.
The one interesting tidbit here is the role that ExTwitter plays in all of this. Already, the company has shown that while it is grudgingly complying with the EU DSA’s requirements to report all moderation activity, it’s not doing so happily. Given the nature of the Fifth Circuit (and this panel of judges in particular), it would certainly be interesting to have Elon actually highlight how burdensome the law is on his platform.
Remember, the law at issue, HB 20, was passed under the (false) belief that “big social media companies” were unfairly moderating to silence conservatives. The entire point of the law was to force such companies to host conservative speech (including extremist, pro-Nazi speech). The “explanations” portion of the law was basically to force the companies to reveal any time they took actions against such speech so that people could complain.
But now that ExTwitter is controlled by a friend — though one who is frequently complaining about excessive government regulation — it would be quite interesting if he got dragged into this lawsuit and explained just how problematic the law actually is, in a way that even Judge Andrew Oldham (who seems happy to rule whichever way makes Donald Trump happiest) might recognize that the law is bad.
Either way, for now, as the case goes back to the district court, NetChoice and CCIA will have an awful lot of work to do, for two groups that are already incredibly overburdened in trying to protect the open internet.