ab 2273 – Techdirt

CA Governor Newsom And AG Bonta Pretend Court Agreed With Them On Kids Code

from the you-don't-have-to-do-this dept

Dear California Governor Newsom and Attorney General Bonta: you really don’t have to be the mirror image, at the opposite end of the political spectrum, of the extremists in Florida and Texas. You don’t have to lie to your constituents and pretend losses are wins. Really. Trust me.

You may recall that the Attorneys General of Texas and Florida have taken to lying to the public when they lose big court cases. There was the time when Texas AG Ken Paxton claimed “a win” when the Supreme Court did exactly the opposite of what he had asked it to do.

Or the time when Florida AG Ashley Moody declared victory after the Supreme Court made it quite clear that Florida’s social media law was unconstitutional, but sent it back to the lower court to review on procedural grounds.

And now it looks like Newsom and Bonta are doing the same sort of thing, claiming victory out of an obvious loss, just on the basis of some procedural clean-up (ironically, the identical procedural clean-up that Moody declared victory over).

As you’ll recall, we just wrote about the Ninth Circuit rejecting California’s Age Appropriate Design Code (AADC) as an obvious First Amendment violation (just as we had warned both Bonta and Newsom, only to be ignored). However, because of the results in the Supreme Court decision in Moody, the Ninth Circuit sent some parts of the law back to the lower court.

The details here are kind of important. In the Moody decision, the Supreme Court said that for there to be a “facial challenge” against an entire law (i.e., a lawsuit saying “this whole law is unconstitutional, throw it out”), the lower courts have to consider every part of the law and whether every aspect and every possible application is unconstitutional. In the Texas and Florida cases, the Supreme Court noted that the lower courts had really only reviewed parts of those laws and how they might impact a few companies, rather than fully evaluating whether some parts of the laws were salvageable.

However, the ruling also made quite clear that any law that seeks to tell social media companies how to moderate is almost certainly a violation of the First Amendment.

In the challenge to the AADC, most of the case (partly at the request of the district court judge!) focused on the “Data Protection Impact Assessment” (DPIA) requirements of the law. This was the main part of the law, and the part that would require websites to justify every single feature they offer and explain how they will “mitigate” any potential risks to kids. The terrible way that this was drafted would almost certainly require websites to come up with plans to remove content the California government disapproved of, as both courts recognized.

But the AADC had a broader scope than the DPIA section.

So, the Ninth Circuit panel sent part of the law back to the lower court following the requirements in the Moody ruling. They said the lower court had to do the full facial challenge, exploring the entirety of the law and how it might be applied, rather than throwing out the whole law immediately.

However (and this is the important part), the Ninth Circuit said that on the DPIA point specifically, which is the crux of the law, there was enough briefing and analysis to show that it was obviously a violation of the First Amendment. It upheld the injunction barring that part of the law from going into effect.

That doesn’t mean the rest of the law is good or constitutional. It just means that now the lower court will need to examine the rest of the law and how it might be applied before potentially issuing another injunction.

In no way and in no world is this a “win” for California.

But you wouldn’t know that to hear Newsom or Bonta respond to the news. They put out a statement that suggests they either don’t know what they’re talking about or they’re hoping the public is too stupid to realize this. It’s very likely the latter, but it’s a terrible look for both Newsom and Bonta. It suggests they’re so deep in their own bullshit that they can’t be honest with the American public. They supported an unconstitutional bill that has now been found to be unconstitutional by both the district and the appeals court.

First up, Newsom:

“California enacted this nation-leading law to shield kids from predatory practices. Instead of adopting these commonsense protections, NetChoice chose to sue — yet today, the Court largely sided with us. It’s time for NetChoice to drop this reckless lawsuit and support safeguards that protect our kids’ safety and privacy.”

Except, dude, they did not “largely side” with you. They largely sided with NetChoice and said there’s not enough briefing on the rest. I mean, read the fucking ruling, Governor:

We agree with NetChoice that it is likely to succeed in showing that the CAADCA’s requirement that covered businesses opine on and mitigate the risk that children may be exposed to harmful or potentially harmful materials online, Cal. Civ. Code §§ 1798.99.31(a)(1)–(2), facially violates the First Amendment. We therefore affirm the district court’s decision to enjoin the enforcement of that requirement, id., and the other provisions that are not grammatically severable from it…

And, no, the law does not shield kids from predatory practices. That’s the whole point that both courts have explained to you: the law pressures websites to remove content, not change conduct.

So, why would NetChoice drop this lawsuit that it is winning? Especially when letting this law go into effect will not protect kids’ safety and privacy, and would actually likely harm both, by encouraging privacy-destroying age verification?

As for Bonta:

“We’re pleased that the Ninth Circuit reversed the majority of the district court’s injunction, which blocked California’s Age-Appropriate Design Code Act from going into effect. The California Department of Justice remains committed to protecting our kids’ privacy and safety from companies that seek to exploit their online experiences for profit.”

Yeah, again, it did not “reverse the majority.” It upheld the key part, and the only part that was really debated in the lower court. It sent the rest back for further briefing, and that part could still be thrown out once the judges see what nonsense you’ve been pushing.

It wasn’t entirely surprising when Paxton and Moody pulled this kind of shit. After all, the GOP has made it clear that they’re the party of “alternative facts.” But the Democrats don’t need to do the same at the other end of the spectrum. We’ve already seen that Newsom’s instincts are to copy the worst of the GOP, but in favor of policies he likes. This is unfortunate. We don’t need insufferable hacks running both major political parties.

Look, is it so crazy to just ask for our politicians to not fucking lie to their constituents? If they can’t be honest about basic shit like this, what else are they lying about? You lost a case because you supported a bad law. Suck it up and admit it.

Filed Under: 9th circuit, aadc, ab 2273, california, dpia, facial challenge, gavin newsom, lies, rob bonta
Companies: netchoice

Court Sees Through California’s ‘Protect The Children’ Ruse, Strikes Down Kids Code

from the gee,-who-could-have-predicted dept

Friday morning gave us a nice victory for free speech in the 9th Circuit, where the appeals court panel affirmed most of the district court’s ruling finding California’s “Age Appropriate Design Code” unconstitutional as it regulated speech.

There’s a fair bit of background here that’s worth going over, so bear with me. California’s Age Appropriate Design Code advanced through the California legislature somewhat quietly, with little opposition. Many of the bigger companies, like Meta and Google, were said to support it, mainly because they knew they could easily comply with their buildings full of lawyers, whereas smaller competitors would be screwed.

Indeed, for a period of time it felt like only Professor Eric Goldman and I were screaming about the problems of the law. The law was drafted in part by a British Baroness and Hollywood film director who fell deep into the moral panic that says the internet and mobile phones are obviously evil for kids. Despite the lack of actual evidence supporting this, she has been pushing for laws in the UK and America to suppress speech she finds harmful to kids.

In the US, some of us pointed out how this violates the First Amendment. I also pointed out that the law is literally impossible to comply with for smaller sites like Techdirt.

The Baroness and the California legislators (who seem oddly deferential to her) tried to get around the obvious First Amendment issues by insisting that the bill was about conduct and design and not about speech. But as we pointed out, that was obviously a smokescreen. The only way to truly comply with the law was to suppress speech that politicians might later deem harmful to children.

California Governor Gavin Newsom eagerly signed the bill into law, wanting to get some headlines about how he was “protecting the children.” When NetChoice challenged the law, Newsom sent them a very threatening letter, demanding they drop the lawsuit. Thankfully, they did not, and the court saw through the ruse and found the entire bill unconstitutional for the exact reasons we had warned the California government about.

The judge recognized that the bill required the removal of speech, despite California’s claim that it was about conduct and privacy. California (of course) appealed, and now we have the 9th Circuit which has mostly (though not entirely) agreed with the district court.

The real wildcard in all of this was the Supreme Court’s decision last month in what is now called the Moody case, which also involved NetChoice challenging Florida’s and Texas’ social media laws. The Supreme Court said that the cases should be litigated differently as a “facial challenge” rather than an “as-applied challenge” to the law. And it seems that decision is shaking up a bunch of these cases.

But here, the 9th Circuit interpreted it to mean that it could send part of the case back down to the lower court to do a more thorough analysis on some parts of the AADC that weren’t as clearly discussed or considered. In a “facial challenge,” the courts are supposed to consider all aspects of the law, and whether or not they all violate the Constitution, or if some of them are salvageable.

On the key point, though, the 9th Circuit panel rightly found that the AADC violates the First Amendment. Because no matter how much California claims that it’s about conduct, design, or privacy, everyone knows it’s really about regulating speech.

Specifically, they call out the DPIA requirement. This is a major portion of the law, which requires certain online businesses to create and file a “Data Protection Impact Assessment” with the California Attorney General. Part of that DPIA is that you have to explain how you plan to “mitigate the risk” that “potentially harmful content” will reach children (defined as anyone from age 0 to 18).

And we’d have to do that for every “feature” on the website. Do I think that a high school student might read Techdirt’s comments and come across something the AG finds harmful? I need to first explain our plans to “mitigate” that risk. That sure sounds like a push for censorship.

And the Court agrees this is a problem. First, it’s a problem because of the compelled speech part of it:

We agree with NetChoice that the DPIA report requirement, codified at §§ 1798.99.31(a)(1)–(2) of the California Civil Code, triggers review under the First Amendment. First, the DPIA report requirement clearly compels speech by requiring covered businesses to opine on potential harm to children. It is well-established that the First Amendment protects “the right to refrain from speaking at all.”

California argued that because the DPIA reports are not public, it’s not compelled speech, but the Court (rightly) says that’s… not a thing:

The State makes much of the fact that the DPIA reports are not public documents and retain their confidential and privileged status even after being disclosed to the State, but the State provides no authority to explain why that fact would render the First Amendment wholly inapplicable to the requirement that businesses create them in the first place. On the contrary, the Supreme Court has recognized the First Amendment may apply even when the compelled speech need only be disclosed to the government. See Ams. for Prosperity Found. v. Bonta, 594 U.S. 595, 616 (2021). Accordingly, the district court did not err in concluding that the DPIA report requirement triggers First Amendment scrutiny because it compels protected speech.

More importantly, though, the Court recognizes that the entire underlying purpose of the DPIA system is to encourage websites to remove First Amendment-protected content:

Second, the DPIA report requirement invites First Amendment scrutiny because it deputizes covered businesses into serving as censors for the State. The Supreme Court has previously applied First Amendment scrutiny to laws that deputize private actors into determining whether material is suitable for kids. See Interstate Cir., Inc. v. City of Dallas, 390 U.S. 676, 678, 684 (1968) (recognizing that a film exhibitor’s First Amendment rights were implicated by a law requiring it to inform the government whether films were “suitable” for children). Moreover, the Supreme Court recently affirmed “that laws curtailing [] editorial choices [by online platforms] must meet the First Amendment’s requirements.” Moody, 144 S. Ct. at 2393.

The state’s argument that this analysis is unrelated to the underlying content is easily dismissed:

At oral argument, the State suggested companies could analyze the risk that children would be exposed to harmful or potentially harmful material without opining on what material is potentially harmful to children. However, a business cannot assess the likelihood that a child will be exposed to harmful or potentially harmful materials on its platform without first determining what constitutes harmful or potentially harmful material. To take the State’s own example, data profiling may cause a student who conducts research for a school project about eating disorders to see additional content about eating disorders. Unless the business assesses whether that additional content is “harmful or potentially harmful” to children (and thus opines on what sort of eating disorder content is harmful), it cannot determine whether that additional content poses a “risk of material detriment to children” under the CAADCA. Nor can a business take steps to “mitigate” the risk that children will view harmful or potentially harmful content if it has not identified what content should be blocked.

Accordingly, the district court was correct to conclude that the CAADCA’s DPIA report requirement regulates the speech of covered businesses and thus triggers review under the First Amendment.

I’ll note that this is an issue that is coming up in lots of other laws as well. For example, KOSA has defenders who insist that it is only focused on design, and not content. But at the same time, it talks about preventing harms around eating disorders, which is fundamentally a content issue, not a design issue.

The Court says that the DPIA requirement triggers strict scrutiny. The district court ruling had looked at it under intermediate scrutiny (a lower bar), found that it didn’t pass that bar, and said that even if strict scrutiny were appropriate, the law wouldn’t pass since it couldn’t even meet the lower bar. The appeals court basically says we can jump straight to strict scrutiny:

Accordingly, the court assumed for the purposes of the preliminary injunction “that only the lesser standard of intermediate scrutiny for commercial speech applies” because the outcome of the analysis would be the same under both intermediate commercial speech scrutiny and strict scrutiny. Id. at 947–48. While we understand the district court’s caution against prejudicing the merits of the case at the preliminary injunction stage, there is no question that strict scrutiny, as opposed to mere commercial speech scrutiny, governs our review of the DPIA report requirement.

And, of course, the DPIA requirement fails strict scrutiny in part because it’s obviously not the least speech restrictive means of accomplishing its goals:

The State could have easily employed less restrictive means to accomplish its protective goals, such as by (1) incentivizing companies to offer voluntary content filters or application blockers, (2) educating children and parents on the importance of using such tools, and (3) relying on existing criminal laws that prohibit related unlawful conduct.

In this section, the court also responds to the overhyped fears that finding the DPIAs unconstitutional here would mean that they are similarly unconstitutional in other laws, such as California’s privacy law. But the court says “um, guys, one of these is about speech, and one is not.”

Tellingly, iLit compares the CAADCA’s DPIA report requirement with a supposedly “similar DPIA requirement” found in the CCPA, and proceeds to argue that the district court’s striking down of the DPIA report requirement in the CAADCA necessarily threatens the same requirement in the CCPA. But a plain reading of the relevant provisions of both laws reveals that they are not the same; indeed, they are vastly different in kind.

Under the CCPA, businesses that buy, receive, sell, or share the personal information of 10,000,000 or more consumers in a calendar year are required to disclose various metrics, including but not limited to the number of requests to delete, to correct, and to know consumers’ personal information, as well as the number of requests from consumers to opt out of the sale and sharing of their information. 11 Cal. Code Regs. tit. 11, § 7102(a); see Cal Civ. Code § 1798.185(a)(15)(B) (requiring businesses to conduct regular risk assessments regarding how they process “sensitive personal information”). That obligation to collect, retain, and disclose purely factual information about the number of privacy-related requests is a far cry from the CAADCA’s vague and onerous requirement that covered businesses opine on whether their services risk “material detriment to children” with a particular focus on whether they may result in children witnessing harmful or potentially harmful content online. A DPIA report requirement that compels businesses to measure and disclose to the government certain types of risks potentially created by their services might not create a problem. The problem here is that the risk that businesses must measure and disclose to the government is the risk that children will be exposed to disfavored speech online.

Then, the 9th Circuit basically gives up on the other parts of the AADC. The court effectively says that since the briefing was so focused on the DPIA part of the law, and now (thanks to the Moody ruling) a facial challenge requires a full exploration of all aspects of the law, the rest should be sent back to the lower court:

As in Moody, the record needs further development to allow the district court to determine “the full range of activities the law[] cover[s].” Moody, 144 S. Ct. at 2397. But even for the remaining provision that is likely to trigger First Amendment scrutiny in every application because the plain language of the provision compels speech by covered businesses, see Cal. Civ. Code §§ 1798.99.31(a)(7), we cannot say, on this record, that a substantial majority of its applications are likely to fail First Amendment scrutiny.

For example, the Court notes that there’s a part of the law dealing with “dark patterns,” but there’s not enough information to know whether or not that could impact speech (spoiler alert: it absolutely can and will).

Still, the main news here is this: the law is still not going into effect. The Court recognizes that the DPIA part of the law is pretty clearly an unconstitutional violation of the First Amendment (just as some of us warned Newsom and the California legislature).

Maybe California should pay attention next time (he says sarcastically as a bunch of new bad bills are about to make their way to Newsom’s desk).

Filed Under: 9th circuit, aadc, ab 2273, age appropriate design code, california, dpia, gavin newsom, protect the children, rob bonta
Companies: netchoice

The Copia Institute Tells The Ninth Circuit That The District Court Got It Basically Right Enjoining California’s Age Design Law

from the protecting-a-win dept

States keep trying to make the Internet a teenager-free zone. Which means that lawsuits keep needing to be filed because these laws are ridiculously unconstitutional. And courts are noticing: just this week a court enjoined the law in Ohio, and a different court had already enjoined the California AB 2273 AADC law a few months ago.

Unhappy at having its unconstitutional law put on ice, California appealed the injunction to the Ninth Circuit, and this week the Copia Institute filed an amicus brief urging the appeals court to uphold it.

There’s a lot wrong with these bills, not the least of which is how they offend kids’ own First Amendment rights. But in our brief we talked about how the law also offends our own speech interests. Publishing on the web really shouldn’t be more involved than setting up a website and posting content, even if you want to do what Techdirt does and also support reader discussion in the comments. But this law sets up a number of obstacles that an expressive entity like Techdirt would have to overcome before it could speak. If it didn’t, it could potentially be liable if it spoke and teenagers were somehow potentially harmed by exposure to the ideas (this is a mild paraphrase of the statutory text, but only barely – the law really is that dumb).

In particular, it would require investment in technology – and dubious technology that hoovers up significant amounts of personal information – to make sure Techdirt knows exactly how old its readers are so that it can make sure to somehow quarantine the “harmful” ideas. But that sort of verification inherently requires identifying every reader, which is something that Techdirt currently doesn’t do and doesn’t want to do. Occasionally it’s necessary to do some light identification, like to process payments, but ordinarily readers can read, and even participate in the comments, without having to identify themselves, because allowing them to participate anonymously is most consistent with Techdirt’s expressive interests. The Copia Institute has even filed amicus briefs in courts before, defending the right to speak (and read) anonymously. But this law would put an end to anonymity when it comes to Techdirt’s readership, because it would force it to verify everyone’s age (after all, it’s not just teenagers this law would affect; the grown-ups who could still be readers would still have to show that they are).

So in this brief we talked about how the Copia Institute’s speech is burdened, which is a sign that the bill is unconstitutional. We also discussed how the focus of the constitutional inquiry needs to be on those burdens, not on whatever non-expressive pretext legislatures wrapped their awful bills up in. The California bill was ostensibly a “privacy” bill and the Ohio one focused on minors entering contracts, but those descriptions were really just for show. Where the rubber hit the road legislatively, all these bills were really about the government trying to control what expression can appear online.

Which is why we also told the Ninth Circuit to not just uphold the injunction but to make it even stronger by pointing out that strict scrutiny applies. The district court found that the law was unconstitutional under the lesser intermediate scrutiny standard, which in a way is good, because if the law can’t even clear that lower hurdle it’s a sign that it’s really, really bad. But we have the concern that the reason it applied the lesser standard was that the law targeted sites that make money, and that cannot be a reason for the First Amendment to be found less protective of free expression than it is supposed to be.

Filed Under: 1st amendment, 9th circuit, aadc, ab 2273, age verification, california

Co-Sponsor Of Unconstitutional AADC Law Completely Misrepresents Court’s Ruling, Showing His Lack Of Attention To Detail

from the running-your-mouth dept

Last week, we wrote about a federal district judge in California, Beth Labson Freeman, tossing out California’s Age Appropriate Design Code (AB 2273) as unconstitutional under the 1st Amendment. The ruling was careful and thorough, which did not surprise me, having sat through the oral arguments on the matter, where it seemed that the judge was asking all the right questions.

I fully expect California Attorney General Rob Bonta to appeal this ruling, but the state and the legislators who supported the bill have been mostly quiet on the matter. However, late in the week, the bill’s Republican co-sponsor, Jordan Cunningham (who has since left the California Assembly, and who introduced the bill with Democrat Buffy Wicks), gave some quotes to a newspaper that show he didn’t even read the decision, while insisting that he thinks the law will prevail on appeal. He believes the appeal is strong mainly because, he says, the Court applied the strictest level of scrutiny (the highest bar):

When asked about the injunction, Cunningham told the Washington Examiner that he felt “profound disappointment” but believes the decision to be “quite vulnerable to appeal.” He said that the decision “overreached and applied the strictest level of scrutiny possible.”

“We spent hundreds of hours of work on this bill shaping it, getting it through the legislature,” he said. “I mean, it passed the state assembly for 74 to zero.”

Cunningham said that the injunction blocks the AADC under the legal standard of First Amendment-related “strict scrutiny.” The standard requires the government to prove that the law is the “least restrictive means” to advance a “compelling” government interest related to speech. But the AADC doesn’t affect speech, Cunningham argued.

This is interesting for many reasons, not least of which is that the District Court did not apply strict scrutiny. Which anyone who read the case would know, because the judge literally said:

the Court will assume for the purposes of the present motion that only the lesser standard of intermediate scrutiny for commercial speech applies because, as shown below, the outcome of the analysis here is not affected by the Act’s evaluation under the lower standard of commercial speech scrutiny.

Literally, the key part of the ruling was that the court chose not to even figure out if intermediate or strict scrutiny should apply, because if the law couldn’t even pass intermediate scrutiny (and it could not!) it didn’t even matter whether or not strict scrutiny applied.

I don’t know about you, but I think if you’re the guy who co-sponsored the unconstitutional bill that just got thrown out as unconstitutional, and a reporter asks you about it, it would sorta make sense to (1) read the ruling, and (2) not make stuff up about it. But, I guess that’s why I’m not in the California Assembly. I prefer to actually read the homework.

Either way, it’s bizarre and blatantly and obviously wrong, but honestly par for the course for Cunningham, to say that the bill was only found unconstitutional because the court used “the strictest level of scrutiny possible.”

Of course, it’s this level of attention to detail that got us into this mess in the first place. Even though plenty of people explained the constitutional infirmities in the law to many legislators, including directly to Cunningham himself, they all brushed it off and ignored the concerns, sure of their own correctness.

Cunningham is no longer in the Assembly, and this particular mistake doesn’t matter much now, but it does show the extreme arrogance, combined with ignorance, that has the California legislature constantly doing this sort of thing. They just put their fingers in their ears and pretend that reality is whatever they want it to be.

Filed Under: 1st amendment, aadc, ab 2273, intermediate scrutiny, jordan cunningham, strict scrutiny

Court Says California’s Age Appropriate Design Code Is Unconstitutional (Just As We Warned)

from the the-1st-amendment-still-matters dept

Some good news! Federal Judge Beth Labson Freeman has recognized what some of us have been screaming about for over a year now: California’s Age Appropriate Design Code (AB 2273) is an unconstitutional mess that infringes on the 1st Amendment. We can add this to the pile of terrible moral panic “protect the children!” laws in Texas and Arkansas that have been similarly rejected (once again showing that the moral panic about the internet and children, combined with an ignorance of the 1st Amendment, is neither a right nor a left issue — it’s both).

The Age Appropriate Design Code in California got almost no media attention while it was being debated or even after it passed. At times it felt like Professor Eric Goldman and I were the only ones highlighting the problems with the bill. And there are many, many problems. Including problems that both Goldman and I told the court about (and both of us were cited in the decision).

For what it’s worth, I’ve heard through the grapevine that one of the reasons why there was basically no media coverage was that many of the large tech companies are actually fine with the AADC, because they know that they already do most of what the law requires… and they also know full well that smaller companies will get slammed by the law’s requirements, so that’s kind of a bonus for the big tech companies.

As a reminder, the AADC was “sponsored” (in California outside organizations can sponsor bills) by an organization created and run by a British Baroness who is one of the loudest moral panic spreaders about “the kids on the internet.” Baroness Beeban Kidron has said that it’s her life’s mission to pass these kinds of laws around the world (she already helped get a similarly named bill passed in the UK, and is a driving force behind the dangerous Online Safety Act there as well). The other major sponsor of the AADC is… Common Sense Media, whose nonsense we just called out on another bill. Neither of them understands how the 1st Amendment works.

Thankfully, the judge DOES understand how the 1st Amendment works. As I noted after attending the oral arguments in person a month and a half ago, the judge really seemed to get it. And that comes through in the opinion, which grants the preliminary injunction, blocking the law from going into effect as likely unconstitutional under the 1st Amendment.

The judge notes, as was mentioned in the courtroom, that she’s “mindful” of the fact that the law was passed unanimously, but that doesn’t change the fact that it appears to violate the 1st Amendment. She says that protecting the privacy of people online is obviously a valid concern of the government, but that doesn’t mean you get to ignore the 1st Amendment in crafting a law to deal with it.

California insisted that nothing in the AADC regulated expression, only conduct. But, as the judge had called out at the hearing, it’s quite obvious that’s not true. And thus she finds that the law clearly regulates protected expression:

The State argues that the CAADCA’s regulation of “collection and use of children’s personal information” is akin to laws that courts have upheld as regulating economic activity, business practices, or other conduct without a significant expressive element. Opp’n 11– 12 (citations omitted). There are two problems with the State’s argument. First, none of the decisions cited by the State for this proposition involved laws that, like the CAADCA, restricted the collection and sharing of information. See id.; Rumsfeld v. Forum for Acad. & Inst. Rights, Inc., 547 U.S. 47, 66 (2006) (statute denying federal funding to educational institutions restricting military recruiting did not regulate “inherently expressive” conduct because expressive nature of act of preventing military recruitment necessitated explanatory speech); Roulette v. City of Seattle, 97 F.3d 300, 305 (9th Cir. 1996) (ordinance prohibiting sitting or lying on sidewalk did not regulate “forms of conduct integral to, or commonly associated with, expression”); Int’l Franchise, 803 F.3d at 397–98, 408 (minimum wage increase ordinance classifying franchisees as large employers “exhibit[ed] nothing that even the most vivid imagination might deem uniquely expressive”) (citation omitted); HomeAway.com, 918 F.3d at 680, 685 (ordinance regulating forms of short-term rentals was “plainly a housing and rental regulation” that “regulate[d] nonexpressive conduct—namely, booking transactions”); Am. Soc’y of Journalists & Authors, 15 F.4th at 961–62 (law governing classification of workers as employees or independent contractors “regulate[d] economic activity rather than speech”).

Second, in a decision evaluating a Vermont law restricting the sale, disclosure, and use of information about the prescribing practices of individual doctors—which pharmaceutical manufacturers used to better target their drug promotions to doctors—the Supreme Court held the law to be an unconstitutional regulation of speech, rather than conduct. Sorrell, 564 U.S. at 557, 562, 570–71. The Supreme Court noted that it had previously held the “creation and dissemination of information are speech within the meaning of the First Amendment,” 564 U.S. at 570 (citing Bartnicki v. Vopper, 532 U.S. 514, 527 (2001); Rubin v. Coors Brewing Co., 514 U.S. 476, 481 (1995); Dun & Bradstreet, Inc. v. Greenmoss Builders, Inc., 472 U.S. 749, 759 (1985) (plurality opinion)), and further held that even if the prescriber information at issue was a commodity, rather than speech, the law’s “content- and speaker-based restrictions on the availability and use of . . . identifying information” constituted a regulation of speech, id. at 570– 71; see also id. at 568 (“An individual’s right to speak is implicated when information he or she possesses is subject to ‘restraints on the way in which the information might be used’ or disseminated.”) (quoting Seattle Times Co. v. Rhinehart, 467 U.S. 20, 32 (1984)).

While California argued that Sorrell didn’t apply here because it was a different kind of information, the court notes that this argument makes no sense.

… the State is correct that Sorrell does not address any general right to collect data from individuals. In fact, the Supreme Court noted that the “capacity of technology to find and publish personal information . . . presents serious and unresolved issues with respect to personal privacy and the dignity it seeks to secure.” Sorrell, 564 U.S. at 579–80. But whether there is a general right to collect data is independent from the question of whether a law restricting the collection and sale of data regulates conduct or speech. Under Sorrell, the unequivocal answer to the latter question is that a law that—like the CAADCA—restricts the “availability and use” of information by some speakers but not others, and for some purposes but not others, is a regulation of protected expression.

And, thus, the court concludes that the restrictions in the AADC on collecting, selling, sharing, or retaining any personal information regulate speech (as a separate note, I’m curious what this also means for California’s privacy laws, on which the AADC is built… but we’ll leave that aside for now).

Separate from the restrictions on information collection, the AADC also has a bunch of mandates. Those also regulate speech:

The State contended at oral argument that the DPIA report requirement merely “requires businesses to consider how the product’s use design features, like nudging to keep a child engaged to extend the time the child is using the product” might harm children, and that the consideration of such features “has nothing to do with speech.” Tr. 19:14–20:5; see also id. at 23:5–6 (“[T]his is only assessing how your business models . . . might harm children.”). The Court is not persuaded by the State’s argument because “assessing how [a] business model[] . . . might harm children” facially requires a business to express its ideas and analysis about likely harm. It therefore appears to the Court that NetChoice is likely to succeed in its argument that the DPIA provisions, which require covered businesses to identify and disclose to the government potential risks to minors and to develop a timed plan to mitigate or eliminate the identified risks, regulate the distribution of speech and therefore trigger First Amendment scrutiny.

And she notes that the AADC pushes companies to create content moderation rules that favor the state’s moderation desires, which clearly is a 1st Amendment issue:

The CAADCA also requires a covered business to enforce its “published terms, policies, and community standards”—i.e., its content moderation policies. CAADCA § 31(a)(9). Although the State argues that the policy enforcement provision does not regulate speech because businesses are free to create their own policies, it appears to the Court that NetChoice’s position that the State has no right to enforce obligations that would essentially press private companies into service as government censors, thus violating the First Amendment by proxy, is better grounded in the relevant binding and persuasive precedent. See Mot. 11; Playboy Ent. Grp., 529 U.S. at 806 (finding statute requiring cable television operators providing channels with content deemed inappropriate for children to take measures to prevent children from viewing content was unconstitutional regulation of speech); NetChoice, LLC v. Att’y Gen., Fla. (“NetChoice v. Fla.”), 34 F.4th 1196, 1213 (11th Cir. 2022) (“When platforms choose to remove users or posts, deprioritize content in viewers’ feeds or search results, or sanction breaches of their community standards, they engage in First-Amendment-protected activity.”); Engdahl v. City of Kenosha, 317 F. Supp. 1133, 1135–36 (E.D. Wis. 1970) (holding ordinance restricting minors from viewing certain movies based on ratings provided by Motion Picture Association of America impermissibly regulated speech).

Then there’s the “age estimation” part of the bill. Similar to the cases in Arkansas and Texas around age verification, this court also recognizes the concerns, including that such a mandate will likely hinder adult access to content as well:

The State argues that “[r]equiring businesses to protect children’s privacy and data implicates neither protected speech nor expressive conduct,” and notes that the provisions “say[] nothing about content and do[] not require businesses to block any content for users of any age.” Opp’n 15. However, the materials before the Court indicate that the steps a business would need to take to sufficiently estimate the age of child users would likely prevent both children and adults from accessing certain content. See Amicus Curiae Br. of Prof. Eric Goldman (“Goldman Am. Br.”) 4–7 (explaining that age assurance methods create time delays and other barriers to entry that studies show cause users to navigate away from pages), ECF 34-1; Amicus Curiae Br. of New York Times Co. & Student Press Law Ctr. (“NYT Am. Br.”) 6 (stating age-based regulations would “almost certain[ly] [cause] news organizations and others [to] take steps to prevent those under the age of 18 from accessing online news content, features, or services”), ECF 56-1. The age estimation and privacy provisions thus appear likely to impede the “availability and use” of information and accordingly to regulate speech.

Again, the court admits that protecting kids is obviously a laudable goal, but you don’t do it by regulating speech. And the fact that California exempted non-profits from the law suggests targeting only some speakers, a big 1st Amendment no-no.

The Court is keenly aware of the myriad harms that may befall children on the internet, and it does not seek to undermine the government’s efforts to resolve internet-based “issues with respect to personal privacy and . . . dignity.” See Sorrell, 564 U.S. at 579; Def.’s Suppl. Br. 1 (“[T]he ‘serious and unresolved issues’ raised by increased data collection capacity due to technological advances remained largely unaddressed [in Sorrell].”). However, the Court is troubled by the CAADCA’s clear targeting of certain speakers—i.e., a segment of for-profit entities, but not governmental or non-profit entities—that the Act would prevent from collecting and using the information at issue. As the Supreme Court noted in Sorrell, the State’s arguments about the broad protections engendered by a challenged law are weakened by the law’s application to a narrow set of speakers. See Sorrell, 564 U.S. at 580 (“Privacy is a concept too integral to the person and a right too essential to freedom to allow its manipulation to support just those ideas the government prefers”).

Of course, once you establish that protected speech is being regulated, that’s not the end of the discussion. There are situations in which the government is allowed to regulate speech, but only if certain levels of scrutiny are met. During the oral arguments, a decent portion of the time was spent debating whether the AADC should have to pass strict scrutiny or just intermediate scrutiny. Strict scrutiny requires there to be both a compelling state interest in the law and that the law is narrowly tailored to achieve that result. Intermediate scrutiny requires just an “important government objective” (slightly less than compelling) and, rather than being “narrowly tailored,” the law has to be substantially related to achieving that important government objective.

While I think it seemed clear that strict scrutiny should apply, here the court went with a form of intermediate scrutiny (“commercial speech scrutiny”), not necessarily because the judge thinks it’s the right level, but because if the law is unconstitutional even at intermediate scrutiny, then it wouldn’t survive strict scrutiny anyway. And thankfully, the AADC doesn’t even survive the lower level of scrutiny.

The court finds (as expected) that the state has a substantial interest in protecting children, but is not at all persuaded that the AADC does anything to further that interest, basically because the law was terribly drafted. (They leave out that it had to be terribly drafted, because the intent of the bill was to pressure websites to moderate the way the state wanted, but they couldn’t come out and say that, so they had to pretend that it was just about “data management.”):

Accepting the State’s statement of the harm it seeks to cure, the Court concludes that the State has not met its burden to demonstrate that the DPIA provisions in fact address the identified harm. For example, the Act does not require covered businesses to assess the potential harm of product designs—which Dr. Radesky asserts cause the harm at issue—but rather of “the risks of material detriment to children that arise from the data management practices of the business.” CAADCA § 31(a)(1)(B) (emphasis added). And more importantly, although the CAADCA requires businesses to “create a timed plan to mitigate or eliminate the risk before the online service, product, or feature is accessed by children,” id. § 31(a)(2), there is no actual requirement to adhere to such a plan. See generally id. § 31(a)(1)-(4); see also Tr. 26:9–10 (“As long as you write the plan, there is no way to be in violation.”),

Basically, California tried to tap dance around the issues, knowing it couldn’t come out and say that it was trying to regulate content moderation on websites, so it claims that it’s simply regulating “data management practices,” but the harms that the state’s own expert detailed (which drive the state’s substantial interest in passing the law) are all about the content on websites. So, then, by admitting that the law doesn’t directly require moderation (which would be clearly unconstitutional, but would address the harms described), the state effectively admitted that the AADC does not actually address the stated issue.

Because the DPIA report provisions do not require businesses to assess the potential harm of the design of digital products, services, and features, and also do not require actual mitigation of any identified risks, the State has not shown that these provisions will “in fact alleviate [the identified harms] to a material degree.” Id. The Court accordingly finds that NetChoice is likely to succeed in showing that the DPIA report provisions provide “only ineffective or remote support for the government’s purpose” and do not “directly advance” the government’s substantial interest in promoting a proactive approach to the design of digital products, services, and feature. Id. (citations omitted). NetChoice is therefore likely to succeed in showing that the DPIA report requirement does not satisfy commercial speech scrutiny.

So California got way too clever in writing the AADC and trying to wink wink nod nod its way around the 1st Amendment. By not coming out and saying the law requires moderation, it’s admitting that the law doesn’t actually address the problems it claims it’s addressing.

Ditto for the “age estimation” requirement. The issue here was that California tried to tap dance around the age estimation requirement by saying it wasn’t a requirement. It’s just that if you didn’t do age estimation, then you have to treat ALL users as if they’re children. Again, this attempt at being clever backfires by making it clear that the law would restrict access to content for adults:

Putting aside for the moment the issue of whether the government may shield children from such content—and the Court does not question that the content is in fact harmful—the Court here focuses on the logical conclusion that data and privacy protections intended to shield children from harmful content, if applied to adults, will also shield adults from that same content. That is, if a business chooses not to estimate age but instead to apply broad privacy and data protections to all consumers, it appears that the inevitable effect will be to impermissibly “reduce the adult population … to reading only what is fit for children.” Butler v. Michigan, 352 U.S. 380, 381, 383 (1957). And because such an effect would likely be, at the very least, a “substantially excessive” means of achieving greater data and privacy protections for children, see Hunt, 638 F.3d at 717 (citation omitted), NetChoice is likely to succeed in showing that the provision’s clause applying the same process to all users fails commercial speech scrutiny.

Similarly, regarding the requirement for higher levels of privacy protection, the court cites the NY Times’ amicus brief, basically saying that this law will make many sites restrict content only to those over 18:

NetChoice has provided evidence that uncertainties as to the nature of the compliance required by the CAADCA is likely to cause at least some covered businesses to prohibit children from accessing their services and products altogether. See, e.g., NYT Am. Br. 5–6 (asserting CAADCA requirements that covered businesses consider various potential harms to children would make it “almost certain that news organizations and others will take steps to prevent those under the age of 18 from accessing online news content, features, or services”). Although the State need not show that the Act “employs . . . the least restrictive means” of advancing the substantial interest, the Court finds it likely, based on the evidence provided by NetChoice and the lack of clarity in the provision, that the provision here would serve to chill a “substantially excessive” amount of protected speech to the extent that content providers wish to reach children but choose not to in order to avoid running afoul of the CAADCA

Again and again, for each provision in the AADC, the court finds that the law can’t survive this intermediate level of scrutiny, as each part of the law seems designed to pretend to do one thing while really intending to do another, and therefore it is clearly not well targeted (nor can it be, since accurately targeting it would only make the 1st Amendment concerns more direct).

For example, take the provision that bars a website from using the personal info of a child in a way that is “materially detrimental to the physical health, mental health, or well-being of a child.” As we pointed out while the bill was being debated, this is ridiculously broad, and could conceivably cover information that a teenager finds upsetting. But that can’t be the law. And the court notes the lack of specificity here, especially given that children at different ages will react to content very differently:

The CAADCA does not define what uses of information may be considered “materially detrimental” to a child’s well-being, and it defines a “child” as a consumer under 18 years of age. See CAADCA § 30. Although there may be some uses of personal information that are objectively detrimental to children of any age, the CAADCA appears generally to contemplate a sliding scale of potential harms to children as they age. See, e.g., Def.’s Suppl. Br. 3, 4 (describing Act’s requirements for “age-appropriate” protections). But as the Third Circuit explained, requiring covered businesses to determine what is materially harmful to an “infant, a five-year old, or a person just shy of age seventeen” is not narrowly tailored.

So, again, by trying to be clever and not detailing the levels by which something can be deemed “age appropriate,” the “age appropriate design code” fails the 1st Amendment test.

There is also an important discussion about some of the AADC requirements that would likely pressure sites to remove content that would be beneficial to “vulnerable” children:

NetChoice has provided evidence indicating that profiling and subsequent targeted content can be beneficial to minors, particularly those in vulnerable populations. For example, LGBTQ+ youth—especially those in more hostile environments who turn to the internet for community and information—may have a more difficult time finding resources regarding their personal health, gender identity, and sexual orientation. See Amicus Curiae Br. of Chamber of Progress, IP Justice, & LGBT Tech Inst. (“LGBT Tech Am. Br.”), ECF 42-1, at 12–13. Pregnant teenagers are another group of children who may benefit greatly from access to reproductive health information. Id. at 14–15. Even aside from these more vulnerable groups, the internet may provide children— like any other consumer—with information that may lead to fulfilling new interests that the consumer may not have otherwise thought to search out. The provision at issue appears likely to discard these beneficial aspects of targeted information along with harmful content such as smoking, gambling, alcohol, or extreme weight loss.

The court points out the sheer inanity of California’s defense on this point, which suggests that there’s some magical way to know how to leave available just the beneficial stuff:

The State argues that the provision is narrowly tailored to “prohibit[] profiling by default when done solely for the benefit of businesses, but allows it . . . when in the best interest of children.” Def.’s Suppl. Br. 6. But as amici point out, what is “in the best interest of children” is not an objective standard but rather a contentious topic of political debate. See LGBT Tech Am. Br. 11–14. The State further argues that children can still access any content online, such as by “actively telling a business what they want to see in a recommendations profile – e.g., nature, dance videos, LGBTQ+ supportive content, body positivity content, racial justice content, etc.” Radesky Decl. ¶ 89(b). By making this assertion, the State acknowledges that there are wanted or beneficial profile interests, but that the Act, rather than prohibiting only certain targeted information deemed harmful (which would also face First Amendment concerns), seeks to prohibit likely beneficial profiling as well. NetChoice’s evidence, which indicates that the provision would likely prevent the dissemination of a broad array of content beyond that which is targeted by the statute, defeats the State’s showing on tailoring, and the Court accordingly finds that State has not met its burden of establishing that the profiling provision directly advances the State’s interest in protecting children’s well-being. NetChoice is therefore likely to succeed in showing that the provision does not satisfy commercial speech scrutiny

This same issue comes up in the prohibition on “dark patterns,” which are not explained clearly and again run into the issue of how a site is supposed to magically know what is “materially detrimental.”

The last of the three prohibitions of CAADCA § 31(b)(7) concerns the use of dark patterns to “take any action that the business knows, or has reason to know, is materially detrimental” to a child’s well-being. The State here argues that dark patterns cause harm to children’s well-being, such as when a child recovering from an eating disorder “must both contend with dark patterns that make it difficult to unsubscribe from such content and attempt to reconfigure their data settings in the hope of preventing unsolicited content of the same nature.” Def.’s Suppl. Br. 7; see also Amicus Curiae Br. of Fairplay & Public Health Advocacy Inst. (“Fairplay Am. Br.”) 4 (noting that CAADCA “seeks to shift the paradigm for protecting children online,” including by “ensuring that children are protected from manipulative design (dark patterns), adult content, or other potentially harmful design features.”) (citation omitted), ECF 53-1. The Court is troubled by the “has reason to know” language in the Act, given the lack of objective standard regarding what content is materially detrimental to a child’s well-being. See supra, at Part III(A)(1)(a)(iv)(7). And some content that might be considered harmful to one child may be neutral at worst to another. NetChoice has provided evidence that in the face of such uncertainties about the statute’s requirements, the statute may cause covered businesses to deny children access to their platforms or content. See NYT Am. Br. 5–6. Given the other infirmities of the provision, the Court declines to wordsmith it and excise various clauses, and accordingly finds that NetChoice is likely to succeed in showing that the provision as a whole fails commercial speech scrutiny.

Given the 1st Amendment problems with the law, the court doesn’t even bother with the argument about the Dormant Commerce Clause being violated by the AADC, saying it doesn’t need to go there, and also highlighting that it’s a “thorny constitutional issue” that is in flux due to a very recent Supreme Court decision. While the judge doesn’t go into much detail on the argument that existing federal laws, COPPA and Section 230, preempt California’s law, she does say she doesn’t think that argument alone would be strong enough to get a preliminary injunction, since the question of preemption would depend on which policies were impacted (basically, the law might be preempted, but we can’t tell until someone tries to enforce it).

I fully expect the state to appeal and the issue will go up to the 9th Circuit. Hopefully they see the problems as clearly as the judge here did.

Filed Under: 1st amendment, aadc, ab 2273, age appropriate design code, age estimation, age verification, beeban kidron, beth labson freeman, california, for the children
Companies: netchoice

California’s SB 680: Social Media ‘Addiction’ Bill Heading For A First Amendment Collision

from the the-1st-amendment-still-applies-in-california dept

Similar to the “Age Appropriate Design Code” (AADC) legislation that became law last year, California’s latest effort to regulate online speech comes in the form of SB 680, a bill by Sen. Nancy Skinner targeting the designs, algorithms, and features of online services that host user-created content, with a specific focus on preventing harm or addiction risks to children.

SB 680 prohibits social media platforms from using a design, algorithm, or feature that causes a child user, 16 years or younger, to inflict harm on themselves or others, develop an eating disorder, or experience addiction to the social media platform. Proponents of SB 680 claim that the bill does not seek to restrict speech but rather addresses the conduct of the Internet services within its scope.

However, as Federal Judge Beth Labson Freeman pointed out during a recent hearing on the challenge to last year’s age-appropriate design law, if content analysis is required to determine the applicability of certain restrictions, it becomes content-based regulation. SB 680 faces a similar problem.

Designs, Algorithms, and Features are Protected Expression

To address the formidable obstacle presented by the First Amendment, policymakers often resort to “content neutrality” arguments to support their policing of expression. California’s stance in favor of AADC hinges on the very premise that AADC regulates conduct over content. Sen. Skinner asserted the same about SB 680, emphasizing that the bill is solely focused on conduct and not content.

“We used our best legal minds available […] to craft this in a way that did not run afoul of those other either constitutional or other legal jurisdictional areas. [T]hat is why [SB 680] is around the design features and the algorithms and such.”

However, courts have consistently held otherwise, and precedent reveals that these bills are inextricably intertwined with content despite such claims.

The Supreme Court has long held that private entities such as bookstores (Bantam Books, Inc. v. Sullivan (1963)), cable companies (Manhattan Community Access Corporation v. Halleck (2019)), newspapers (Miami Herald Publishing Co. v. Tornillo (1974)), video game distributors (Brown v. Entertainment Merchants Association (2011)), parade organizers (Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston (1995)), pharmaceutical companies (Sorrell v. IMS Health, Inc. (2011)), and even gas & electric companies (Pacific Gas and Electric Co. v. Public Utilities Commission (1986)) have a First Amendment right to choose how they curate, display, and deliver preferred messages. This principle extends to online publishers as well, as the Court affirmed in Reno v. ACLU in 1997, emphasizing the First Amendment protection for online expression.

Moreover, courts have explicitly recognized that algorithms themselves constitute speech and thus deserve full protection under the First Amendment. In cases like Search King, Inc. v. Google Technology, Inc. and Sorrell, the courts held that search engine results and data processing are expressive activities, and algorithms used to generate them are entitled to constitutional safeguards.

In a more recent case, NetChoice v. Moody (2022), the U.S. Court of Appeals for the Eleventh Circuit declared certain provisions of Florida’s social media anti-bias law unconstitutional, affirming that social media services’ editorial decisions — even via algorithm — constitute expressive activity.

Further, the Supreme Court’s stance in Twitter, Inc. v. Taamneh (2023) supports the idea that algorithms are merely one aspect of an overall publication infrastructure, warranting protection under the First Amendment.

This precedent underscores the courts’ general reluctance to differentiate between the methods of publication and the underlying messages conveyed. In essence, the courts have consistently acknowledged that the medium of publication is intricately linked to its content, and laws like SB 680 and the AADC are unlikely to persuade them to start drawing such lines now.

SB 680’s Not-So-Safe Harbor Provision is Prior Restraint

Sen. Skinner also suggested at a legislative hearing that SB 680 is not overly burdensome for tech companies due to the inclusion of a “safe harbor” provision. This provision offers protection to companies conducting quarterly audits of their designs, algorithms, and features that may potentially harm users under 16. Companies that “correct” any problematic practices within 60 days of the audit are granted the safe harbor.

However, the safe harbor provision is yet another violation of the First Amendment. In practice, this provision acts as a prior restraint, compelling tech companies to avoid publication decisions that could be seen as violations for users under 16. The requirement to “correct” practices before publication restricts their freedom to operate.

Recall that the AADC also includes a similar requirement for mandatory data privacy impact assessments (DPIAs). Although the State of California defended this provision by arguing that it doesn’t mandate companies to alter the content they host, Judge Freeman disagreed, noting that the DPIA provision in the AADC forces social media services to create a “timed-plan” to “mitigate” their editorial practices.

In reality, both the “safe harbor” provisions of the AADC and SB 680 lead to services refraining from implementing certain designs, algorithms, or features that could potentially pose risks to individuals under 16. This cautious approach even extends to features that may enhance the online environment for parents and children, such as kid-friendly alternatives to products and services offered to the general public.

The online world, like the offline world, carries inherent risks, and services continually strive to assume and mitigate those risks. However, laws like the AADC and SB 680 make it too risky for services to make meaningful efforts in creating a safer online environment, ultimately hindering progress towards a safer web.

SB 680 is a Solution in Search of a Lawsuit

In a manner akin to newspapers making decisions about the content they display above the fold, letters to the editor they choose to publish, or the stories and speakers they feature, social media services also make choices regarding the dissemination of user-created content. While newspapers rely on human editors to diligently apply their editorial guidelines, social media companies use algorithms to achieve a similar objective.

However, it is puzzling that newspapers rarely face the kind of political scrutiny experienced by their online counterparts today. The idea of the government telling the New York Times how to arrange their stories in print editions seems inconceivable. But for some reason, we don’t react with similar concern when the government attempts to dictate how websites should display user content.

Despite an abundance of legal precedents upholding First Amendment protections for the publication tools that enable the delivery of protected expression, California lawmakers persist with SB 680. The federal courts’ skepticism toward the AADC law should be a warning light: If SB 680 becomes law this Fall, California will once again find itself embroiled in an expensive legal battle over online expression.

Jess Miers is Legal Advocacy Counsel at Chamber of Progress. This article was originally published on Medium and republished here with permission.

Filed Under: 1st amendment, aadc, ab 2273, addiction, nancy skinner, prior restraint, sb 680, social media

Judge Seems Skeptical That California’s Age Appropriate Design Code Is Compatible With The 1st Amendment

from the fingers-crossed dept

We’ve talked a few times about California’s “Age Appropriate Design Code.” This is a bill in California that was “sponsored” and pushed for by a UK Baroness (who is also a Hollywood filmmaker and has fallen for moral panic myths about evil technology). As we explained, there is no way for a site like Techdirt to comply with the law. The law is vague and has impossible standards.

While the law says it does not require age verification, it does in effect. It says you have to reasonably “estimate” the age of visitors to a website (something we have zero ability to do, and I have no desire to collect such info), and then do an analysis of every feature on our website to see how it might cause harm to children, as well as put together a plan to “mitigate” such harm. If a site refuses to do “age estimation” (i.e., verification), then it must apply policies to every visitor that attempt to mitigate harm to minors.

As professor Eric Goldman noted, this bill is a radical experiment on children, from a legislature that claims it’s trying to stop radical experiments on children. As I discussed earlier this year, I submitted a declaration in the lawsuit to invalidate the law, filed by the trade group NetChoice, explaining how the law is a direct attack on Techdirt’s expression.

This past Thursday afternoon, I went to the courthouse in San Jose to watch the oral arguments regarding NetChoice’s motion for a preliminary injunction. I was pretty nervous as to how it would go, because even well-meaning people sometimes put up blinders when people start talking about “protecting the children,” never stopping to look closely at the details.

I came out of the hearing very cautiously optimistic. Now, I always say that you should never read too much into the types of questions a judge asks during oral arguments, but Judge Beth Labson Freeman (as I’ve seen her do in other cases as well) kicked off the hearing by being quite upfront with everyone, telling them where her mind was after reading all the filings in the case. In effect, she said the key issue in her mind was whether or not the AADC actually regulates speech. If it does, then it’s probably an unconstitutional infringement of the 1st Amendment. If it does not, then it’s probably allowed. She specifically said that if she determines the law regulates speech, then the law is clearly not content neutral, which it would need to be to avoid strict scrutiny under the 1st Amendment.

So she asked the attorneys to focus on that aspect, though she said there would be time to cover some of the other arguments as well.

She also noted that, of course, keeping children safe online was a laudable goal, and she was sure that everyone supported that goal. And she noted that the fact that the law was passed unanimously “weighed heavily” on her thinking. However, at the end of the day, her role is not to determine if the law is a good law, but just if it’s constitutional.

While California argued that the law doesn’t impact speech, and only “data management,” the judge seemed skeptical. She pushed back multiple times on California Deputy Attorney General Elizabeth Watson, who handled the arguments for the state. For NetChoice, Ambika Kumar pointed out how nearly every part of the law focused on content, and even that the declarations the state offered up from “experts,” as well as statements made by state officials about the law, all focused on the problems of “harmful content.”

The state kept trying to insist that the law only applied to the “design” of a website, not the content, but the judge seemed skeptical that you could draw that line. At one point, she noted that the “design” of the NY Times includes the content of the articles.

California tried to argue that the requirement to do a Data Protection Impact Assessment (DPIA) for every feature was simple, and that since there was no real enforcement mechanism, a company couldn’t get punished for having every DPIA just say “there’s no impact.” The state also claimed that while the law does require a “timed plan” to “mitigate or eliminate” any risk, it was again up to the sites to determine what that plan is.

This left Judge Freeman somewhat incredulous, saying that basically the state of California was telling every company to fill out every DPIA saying that there was no risk to anything they did, and if they did see any risk to create a plan that says “we’ll solve this in 50 years” since that is a “timed plan.” She questioned why California would say such a thing. She highlighted that this seemed to suggest the law was too vague, which would be a 1st Amendment issue.

The judge also clearly called out that the law suggests kids should be prevented from accessing harmful content, and questioned how this wasn’t a speech regulation. At one point she asked: as a parent, what if you want your kids to read stories in the NY Times that might upset them? Shouldn’t that be up to the parents, not the state?

The state also kept trying to argue that websites “have no right” to collect data, and the judge pointed out that they cited no authority for that. The discussion turned, repeatedly, to the Supreme Court’s ruling in Sorrell v. IMS Health, regarding the 1st Amendment rights of companies to sell private prescription data for pharmaceutical marketing. The judge repeatedly seemed to suggest that Sorrell strongly supported NetChoice’s argument, while California struggled to argue that the case was different.

At one point, in trying to distinguish Sorrell from this law, California argued that Sorrell was about data about adults, and this bill is about kids (a “won’t you just think of the children” kind of argument) and the judge wasn’t buying it. She pointed out that we already have a federal law in COPPA that gives parents tools to help protect their children. The state started to talk about how hard it was for parents to do so, and the judge snapped back, asking if there was a 1st Amendment exception for when things are difficult for parents.

Later, California tried again to say that NetChoice has to show why companies have a right to data, and the judge literally pointed out that’s not how the 1st Amendment works, saying that we don’t “start with prohibition” and then make entities prove they have a right to speak.

Another strong moment was when the judge quizzed the state regarding the age verification stuff. California tried to argue that companies already collect age data (note: we don’t!) and all the law required them to do was to use that data they already collected to treat users they think are kids differently. But, the judge pushed back and noted that the law effectively says you have to limit access to younger users. California said that businesses could decide for themselves, and the judge jumped in to say that the statute says that companies must set defaults to the level most protective of children, saying: “So, we can only watch Bugs Bunny? Because that’s how I see it,” suggesting that the law would require the Disneyfication of the web.

There was also a fair bit of discussion about a provision in the law requiring companies to accurately enforce their terms of service. NetChoice pointed out that this was also a 1st Amendment issue: if a site puts in its terms that it does not allow speech that is “in poor taste,” and the Attorney General enforces the law by claiming the site did not enforce that part of its terms, then the state is determining what is, and what is not, in poor taste, which is itself a 1st Amendment problem. California retorted that there needs to be some way to deal with a site that says it won’t collect data but then does. The judge pointed out that, in that case, there might be a breach of contract claim, or that the AG already has the power to go after such companies under California’s Unfair Competition Law, which bars deceptive advertising (raising the question of why the state needs this broad and vague law).

There were some other parts of the discussion, regarding whether the judge could sever the law, leaving some of it in place and dumping other parts. There was a fair amount of discussion about what level of scrutiny to apply if the judge finds that the law regulates speech, and how the law would fare under such scrutiny (though, again, the judge suggested a few times that the law was unlikely to survive either strict or intermediate scrutiny).

There was also some talk about the Dormant Commerce Clause, which the Supreme Court just somewhat limited. There, NetChoice brought up that the law could create real problems, since it applies to “California residents,” and that’s true even if they’re out of state. That means the law could conflict with the laws of another state that a California resident happened to be visiting, or create situations where companies would need to know that a user was a California resident even when that user is out of state.

The state tried to brush this aside, saying it was such an edge case, and suggested it was silly to think that the Attorney General would try to enforce such a case. This did not impress the judge, who noted she can’t consider the likelihood of enforcement in reviewing a challenge to the constitutionality of the law. She has to assume that the reason the state passed the law is that it will enforce every violation of it.

Section 230 was mostly not mentioned, as the judge noted that it seemed too early for such a discussion, especially if the 1st Amendment handled the issue. She did note that 230 issues might come up if she allowed the law to go into effect and the state then brought actions against companies; those companies might be able to use 230 to get such actions dismissed.

Also, there was a point where, when exploring the “speech v. data” question, NetChoice (correctly) pointed out that social media companies publish a ton of user content, and the judge (incorrectly) said “but under 230 it says they’re not publishers,” leading Kumar to politely correct the judge: the law says you can’t treat the company as a publisher, but that doesn’t mean it isn’t one.

At another point, the judge also questioned how effective such a law could be (as part of the strict scrutiny analysis), noting that there was no way kids were going to stop using social media, even if the state tried to ban it entirely.

As I said up top, I came out of the hearing cautiously optimistic. The judge seemed focused on the 1st Amendment issues, and spent the (fairly long!) hearing digging in on each individual point that would impact that 1st Amendment analysis (many of which I didn’t cover here, as this is already long enough…).

The judge did note she doesn’t expect her ruling to come out that quickly, and seemed relieved that the law doesn’t go into effect until next summer, but NetChoice (rightly!) pointed out that the effects of the law are already being felt, as companies need to prepare for the law to go into effect, which seemed to take the judge by surprise.

There’s also a bit more supplemental briefing that the judge requested, which will take place next month. So… guessing it’ll be a while until we hear how the judge decides (at which point it will be appealed to the 9th Circuit anyway).

Filed Under: 1st amendment, aadc, ab 2273, age estimation, age verification, california, commerce clause, dpia, free speech, prior restraint, protect the children, section 230
Companies: netchoice

Governor Newsom Desperately Begs NetChoice To Drop Its Lawsuit Over Unconstitutional AADC Bill

from the this-is-embarrassing dept

We’ve written a lot about AB 2273, California’s Age Appropriate Design Code (AADC) that requires websites with users in California to try to determine the ages of all their visitors, write up dozens of reports on potential harms, and then seek to mitigate those harms. I’ve written about why it’s literally impossible to comply with the law. We’ve had posts on how it conflicts with privacy laws and how it’s a radical experimentation on children (ironically, the drafters of the bill insist that they’re trying to stop experimentation on children).

We’ve also written about how NetChoice, an internet company trade group, has sued to block the law as unconstitutional, and how I filed a declaration explaining how the law would violate the rights of both us at Techdirt and our users.

That lawsuit has continued to move forward, with California filing a pretty laughable reply saying that it doesn’t regulate speech at all. NetChoice has filed its own reply as well, highlighting how ridiculous that is:

The State claims that AB 2273 regulates data management—“nonexpressive conduct,” Opp. 11—not speech. Nonsense. AB 2273’s text expressly requires services to “mitigate or eliminate” risks that a child “could” encounter “potentially harmful … content” online. Content was the through-line in the legislative process: Defendant Attorney General Bonta praised the Act precisely because it would “protect children from … harmful material” and “dangerous online content”—in other words, speech—and Governor Newsom lauded the law for “protect[ing] kids” from harmful “content.” The State’s own expert, who mentions “content” in her declaration 71 times, derides preexisting laws specifically because they “only” cover data management, not content. Radesky Decl. ¶ 98. The State cannot evade the Constitution by pretending the Act regulates only “business practices … related to the collection and use of children’s personal information,” Opp. 11, when the law’s text, purpose, and effect are to regulate and shape online content. Like California’s last attempt to “restrict the ideas to which children may be exposed,” Brown v. Ent. Merchs. Ass’n, 564 U.S. 786, 792, 794 (2011), AB 2273 violates the First Amendment

It appears that Governor Newsom may have realized how badly this case is going to go for him. Days after NetChoice filed that reply, Newsom sent NetChoice an angry letter demanding that it drop the case.

The text is quite remarkable… and bizarre. Newsom sounds… angry. Perhaps because he realizes (per the above) that his own words in support of the bill and how it should be used to block “content” are going to make him lose this case.

Enough is enough. In light of new action and findings released by the U.S. Surgeon General, I urge you to drop your lawsuit challenging California’s children’s online safety law.

Except, as we just detailed, the Surgeon General’s report does not find that the internet harms kids, and actually makes it clear that most kids benefit from social media. Straight from the report that it appears Newsom did not read:

A majority of adolescents report that social media helps them feel more accepted (58%), like they have people who can support them through tough times (67%), like they have a place to show their creative side (71%), and more connected to what’s going on in their friends’ lives (80%). In addition, research suggests that social media-based and other digitally-based mental health interventions may also be helpful for some children and adolescents by promoting help-seeking behaviors and serving as a gateway to initiating mental health care.

But, Newsom appears to have only read the headlines that misconstrue what’s in the actual report. His letter then goes into full on moral panic mode:

Every day as our children browse the internet to connect with one another, build community, and learn, they are also pushed to horrific content and exposed to data mining and location tracking. This reality is dangerous to their safety, mental health, and well-being. That’s why, last September, I was proud to sign the California Age-Appropriate Design Code Act — a bipartisan, first-in-the-nation law that protects the health and privacy of children using online platforms and prohibits online services from encouraging children to provide personal information.

Except, nearly everything in that paragraph is wrong. Embarrassingly so. There is no evidence that children are “pushed to horrific content.” It is true that there may be horrific content online, but the idea that companies are pushing kids to that content is not supported by the evidence. Furthermore, it’s rich that he’s complaining about “data mining and location tracking” while saying that this bill prohibits companies from seeking “personal information” from kids when the law’s “age assurance” requirements suggest the exact opposite. To comply with the law, websites will be effectively required to demand information from users to determine a likely age.

As I explained in my own declaration in the lawsuit, at Techdirt we have bent over backwards to learn as little about the folks who read our site as possible. But under the law, we will likely be compelled to institute a program in which we are required to determine the age of everyone who visits. In other words, the law requires more data mining, not less, and explicitly requires it for children.

Newsom continues the nonsense:

Rather than join California in protecting our children, your association, which represents major tech companies including Google, Meta, TikTok, and Twitter, chose to sue over this commonsense law. In your lawsuit, you have gone so far as to make light of the real harms our children face on the internet, trivializing this law as just being about teenagers who “say unkind things, insufficiently ‘like’ another’s posts,” or are unhappy about “the omission of a ‘trigger warning.’”

Again, nothing in this law actually protects children. Instead, it puts them at much greater risk of having information exposed, as we’ve noted. It will also make it next to impossible for children to research important information regarding mental health, or to find out the information they need to help them deal with things like eating disorders, since it will drive basically all of that content offline (at least where kids can reach it).

As for the claim that NetChoice is “trivializing this law,” that’s obviously bullshit to anyone who has read the filings in context (which apparently does not include this angry Governor Newsom). The references in that paragraph are in NetChoice’s motion for a preliminary injunction, but taken completely out of context. They’re not trivializing the issues children face: they’re pointing out that the way the law is drafted (i.e., very, very badly), it also applies to those more “trivial” situations. From the preliminary injunction filing:

AB 2273 also adopts a boundless conception of what speech must be restricted, including speech that cannot constitutionally be restricted even for minors. The requirement that services enforce their own policies, id. § 1798.99.31(a)(9), will lead them to suppress swaths of protected speech that the State could not restrict directly. See supra § IV.A.1.b. The bar on using algorithms and user information to recommend or promote content will restrict a provider’s ability to transmit protected speech based on the user’s expressed interests. And the law’s restrictions on content that might be “detrimental” or “harmful” to a child’s “well-being,” id. § 1798.99.31(a)(1)(b), (b)(1), (3)-(4), (7), could restrict expression on any topic that happens to distress any child or teen. This would include a range of important information children are constitutionally entitled to receive, such as commentary or news about the war in Ukraine, the January 6, 2021 insurrection at the United States Capitol, the 2017 “Unite the Right” rally in Charlottesville, school shootings, and countless other controversial, significant events.

More fundamentally, the “harm” the law seeks to address—that content might damage someone’s “well-being”—is a function of human communication itself. AB 2273 applies to, among other things, communications by teenagers on social media, who may say unkind things, insufficiently “like” another’s posts, or complain harshly about events at school; the use of language acceptable to some but not others; the omission of a “trigger warning”; and any other manner of discourse online. See, e.g., Mahanoy Area Sch. Dist. v. B. L., 141 S. Ct. 2038 (2021) (Snapchat post “fuck cheer” made high school students “visibly upset”)

So, no. The lawsuit is not trivializing the harms children face by saying that they’re nothing more than kids saying unkind things; NetChoice is (accurately) pointing out that the broad language of the law means it could be applied to those situations, rather than only to ones dealing with actual harm.

It’s pathetic and embarrassing that Newsom would imply that this paragraph was trivializing harms. His complete and total misread of what’s in the lawsuit is trivializing the seriousness of his state’s own law that is violating 1st Amendment rights.

Anyway, Newsom goes on:

Yet at the same time you are in court callously mocking this law, experts are confirming the known dangers of online platforms for kids and teens: Just days ago, the U.S. Surgeon General issued an advisory on the profound toll that social media takes on kids’ and teens’ mental health without adequate safety and privacy standards. Your association and its members may be interested to learn of the Surgeon General’s urgent findings about the sexual extortion of our children, and the alarming links between youth social media and cyberbullying, depression, suicide, and unhealthy and dangerous outcomes and behaviors.

Honestly, this is making me wonder if Newsom ever reads anything. Because, as we discussed, that is not what the Surgeon General’s report says at all. It literally says that there are widespread benefits to social media and then says “we do not have enough evidence” regarding whether or not it’s harmful. It notes there are concerns, and some “correlational” studies, but nothing proving a causal link. It notes that we need more research on that point.

So how the hell can Newsom claim that the report finds a “profound toll” from social media? It simply does not say that.

As for the “Surgeon General’s urgent findings about the sexual extortion of our children,” again Newsom is blatantly misstating what the report says. It notes that the internet has been used for sexual extortion, which is a fact, but nothing in the AADC will stop bad people from being terrible. The report does not say anything about this fact being “urgent” or requiring social media companies to magically make people stop being bad. It just mentions such things as the kind of problematic content that exists online.

As for the “alarming links between youth social media and cyberbullying, depression, suicide, and unhealthy and dangerous outcomes and behaviors,” that’s AGAIN a misreading of the Surgeon General’s report. The report does mention those things, but it does not describe “alarming links.” It highlights correlational concerns, suggests further research and caution, and does not claim any sort of causal link, alarming or not.

In fact, with regard to cyberbullying, the Surgeon General’s recommendations talk about better educating teachers, parents, and children on how to deal with such things. And its one policy recommendation around cyberbullying is not to force websites to censor content, as the AADC does, but rather to “support the development, implementation, and evaluation of digital and media literacy curricula in schools and within academic standards.”

In other words, what the Surgeon General is kinda saying is that our policy makers are the ones who have failed our kids by not teaching them how to be good digital citizens.

Governor Newsom, that one’s on you.

So, so far we have Newsom lying about the law, lying about the filings from NetChoice, and now lying about the Surgeon General’s report. I know it’s a post-truth political world we live in, but I expect better from California’s governor.

But he’s not done yet:

The harms of unregulated social media are established and clear.

Both the Surgeon General’s report and the even more thorough report from the American Psychological Association literally say the opposite. They say it is not clear, and that much more research needs to be done.

Governor Newsom, you should stop lying.

It is time for the tech industry to stop standing in the way of important protections for our kids and teens, and to start working with us to keep our kids safe.

Stomping on 1st Amendment rights and lying about everything is not “keeping our kids safe,” Governor.

Filed Under: aadc, ab 2273, age appropriate, data, gavin newsom, moral panic, social media, studies, surgeon general, teenagers
Companies: netchoice

Recent Case Highlights How Age Verification Laws May Directly Conflict With Biometric Privacy Laws

from the privacy-nightmare dept

California passed the California Age-Appropriate Design Code (AADC) nominally to protect children’s privacy, but at the same time, the AADC requires businesses to do an age “assurance” of all their users, children and adults alike. (Age “assurance” requires the business to distinguish children from adults, but the methodology to implement it has many of the same characteristics as age verification–it just needs to be less precise for anyone who isn’t around the age of majority. I’ll treat the two as equivalent).

Doing age assurance/age verification raises substantial privacy risks. There are several ways of doing it, but the two primary options for quick results are (1) requiring consumers to submit government-issued documents, or (2) requiring consumers to submit to face scans that allow the algorithms to estimate the consumer’s age.

[Note: the differences between the two techniques may be legally inconsequential, because a service may want a confirmation that the person presenting the government documents is the person requesting access, which may essentially require a review of their face as well.]

But, are face scans really an option for age verification, or will they conflict with other privacy laws? In particular, face scanning seemingly conflicts directly with biometric privacy laws, such as Illinois’ BIPA, which provide substantial restrictions on the collection, use, and retention of biometric information. (California’s Privacy Rights Act, CPRA, which the AADC supplements, also provides substantial protections for biometric information, which is classified as “sensitive” information). If a business purports to comply with the CA AADC by using face scans for age assurance, will that business simultaneously violate BIPA and other biometric privacy laws?

Today’s case doesn’t answer the question, but boy, it’s a red flag.

The court summarizes BIPA Sec. 15(b):

Section 15(b) of the Act deals with informed consent and prohibits private entities from collecting, capturing, or otherwise obtaining a person’s biometric identifiers or information without the person’s informed written consent. In other words, the collection of biometric identifiers or information is barred unless the collector first informs the person “in writing of the specific purpose and length of term for which the data is being collected, stored, and used” and “receives a written release” from the person or his legally authorized representative

Right away, you probably spotted three potential issues:

[Another possible tension is whether the business can retain face scans, even with BIPA consent, in order to show that each user was authenticated if challenged in the future, or if the face scans need to be deleted immediately, regardless of consent, to comply with privacy concerns in the age verification law.]

The primary defendant at issue, Binance, is a cryptocurrency exchange. (There are two Binance entities at issue here, BCM and BAM, but BCM drops out of the case for lack of jurisdiction). Users creating an account had to go through an identity verification process run by Jumio. The court describes the process:

Jumio’s software…required taking images of a user’s driver’s license or other photo identification, along with a “selfie” of the user to capture, analyze and compare biometric data of the user’s facial features….

During the account creation process, Kuklinski entered his personal information, including his name, birthdate and home address. He was also prompted to review and accept a “Self-Directed Custodial Account Agreement” for an entity known as Prime Trust, LLC that had no reference to collection of any biometric data. Kuklinski was then prompted to take a photograph of his driver’s license or other state identification card. After submitting his driver’s license photo, Kuklinski was prompted to take a photograph of his face with the language popping up “Capture your Face” and “Center your face in the frame and follow the on-screen instructions.” When his face was close enough and positioned correctly within the provided oval, the screen flashed “Scanning completed.” The next screen stated, “Analyzing biometric data,” “Uploading your documents”, and “This should only take a couple of seconds, depending on your network connectivity.”

Allegedly, none of the Binance or Jumio legal documents make the BIPA-required disclosures.
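
To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of consent gate BIPA § 15(b) effectively demands before any face scan is captured. The names and structure are hypothetical illustrations based only on the court’s summary above; this is not Binance’s or Jumio’s actual code or API.

```python
# Illustrative sketch only -- hypothetical names, not Binance's or Jumio's actual code.
# Per the court's summary of BIPA Sec. 15(b), collecting biometric identifiers is barred
# unless the collector first (1) discloses in writing the specific purpose and retention
# term, and (2) receives a written release from the person.

from dataclasses import dataclass
from typing import Optional


@dataclass
class BiometricConsent:
    purpose: str           # specific purpose for which the face scan is collected
    retention_term: str    # how long the data will be stored and used
    written_release: bool  # the user affirmatively accepted a written release


def may_capture_face_scan(consent: Optional[BiometricConsent]) -> bool:
    """Return True only if the informed-consent steps happened before capture."""
    return (
        consent is not None
        and bool(consent.purpose)
        and bool(consent.retention_term)
        and consent.written_release
    )
```

In the onboarding flow the court describes, the “Capture your Face” step would have to be gated on a check like this; the complaint alleges that no such disclosure or release was ever presented to the user.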

The court rejects Binance’s (BAM) motion to dismiss:

Jumio’s motion to dismiss also goes nowhere:

[The Sosa v. Onfido case also involved face-scanning identity verification for the service OfferUp. I wonder if the court would conduct the constitutional analysis differently if the defendant argued it had to engage with biometric information in order to comply with a different law, like the AADC?]

The court properly notes that this was only a motion to dismiss; defendants could still win later. Yet, this ruling highlights a few key issues:

1. If California requires age assurance and Illinois bans the primary methods of age assurance, there may be an inter-state conflict of laws that ought to support a Dormant Commerce Clause challenge. Plus, other states beyond Illinois have adopted their own unique biometric privacy laws, so interstate businesses are going to run into a state patchwork problem where it may be difficult or impossible to comply with all of the different laws.

2. More states are imposing age assurance/age verification requirements, including Utah and likely Arkansas. Often, like the CA AADC, those laws don’t specify how the assurance/verification should be done, leaving it to businesses to figure it out. But the legislatures’ silence on the process truly reflects their ignorance–the legislatures have no idea what technology will work to satisfy their requirements. It seems obvious that legislatures shouldn’t adopt requirements when they don’t know if and how they can be satisfied–or if satisfying the law will cause a different legal violation. Adopting a requirement that may be unfulfillable is legislative malpractice and ought to be evidence that the legislature lacked a rational basis for the law because they didn’t do even minimal diligence.

3. The clear tension between the CA AADC and biometric privacy is another indicator that the CA legislature lied to the public when it claimed the law would enhance children’s privacy.

4. I remain shocked by how many privacy policy experts and lawyers remain publicly quiet about age verification laws, or even tacitly support them, despite the OBVIOUS and SIGNIFICANT privacy problems they create. If you care about privacy, you should be extremely worried about the tsunami of age verification requirements being embraced around the country/globe. The invasiveness of those requirements could overwhelm and functionally moot most other efforts to protect consumer privacy.

5. Mandatory online age verification laws were universally struck down as unconstitutional in the 1990s and early 2000s. Legislatures are adopting them anyway, essentially ignoring the significant adverse caselaw. We are about to have a high-stakes society-wide reconciliation about this tension. Are online age verification requirements still unconstitutional 25 years later, or has something changed in the interim that makes them newly constitutional? The answer to that question will have an enormous impact on the future of the Internet. If the age verification requirements are now constitutional despite the legacy caselaw, legislatures will ensure that we are exposed to major privacy invasions everywhere we go on the Internet–and the countermoves of consumers and businesses will radically reshape the Internet, almost certainly for the worse.

Reposted with permission from Eric Goldman’s Technology & Marketing Law Blog.

Filed Under: aadc, ab 2273, age assurance, age verification, biometric, biometric privacy, bipa, california, illinois, privacy
Companies: binance, jumio

I Explained To A Court How California’s ‘Kid’s Code’ Is Both Impossible To Comply With & An Attack On Our Expression

from the the-wrong-approach dept

Last year, Techdirt was one of only a very few sites where you could find out information on California’s AB 2273, officially the “California Age Appropriate Design Code” or “Kid’s code.” As with so many bills that talk about “protecting the children,” everyone we talked to said they were afraid to speak up, because they worried that they’d be branded as being against child safety. Indeed, I even had some people within some larger tech companies reach out to me suggesting it was dangerous to speak out against the bill.

But the law is ridiculous. Last August, I explained how it was literally impossible to comply with the bill, questioned why California lawmakers were willing to pass a law written by a British Baroness (who is also a Hollywood filmmaker) with little to no understanding of how any of this actually works, and highlighted how the age verification requirements would be a privacy nightmare putting more kids at risk, rather than protecting them. Eric Goldman also pointed out the dark irony, that while the Kid’s Code claims that it was put in place to prevent internet companies from conducting radical experiments on children, the bill itself is an incredibly radical experiment in trying to reshape the internet. Of course, the bill was signed into law last fall.

In December, NetChoice, which brought the challenges to Texas and Florida’s bad internet laws, sued to block the law. Last week, they filed for a preliminary injunction to block the law from going into effect. Even though the law doesn’t officially take effect until the summer of 2024, any website would need to start doing a ton of work to get ready. With the filing, there were a series of declarations filed from various website owners to highlight the many, many problems this law will create for sites (especially smaller sites). Among those declarations was the one I filed highlighting how this law is impossible to comply with, would invade the privacy of the Techdirt community, and act as an unconstitutional restriction on speech. But we’ll get to that.

First up, the motion for the injunction. It’s worth reading the whole thing as it details the myriad ways in which this law is unconstitutional. It violates the 1st Amendment by creating prior restraint in multiple ways. The law is both extremely vague and overly broad. It regulates speech based on its content (again violating the 1st Amendment). It also violates the Commerce Clause, as a California law that would impact those well outside of the state. Finally, existing federal law (both COPPA and Section 230) preempts it. I won’t go through it all, but all of those arguments are clearly laid out in the motion.

But what I appreciate most is that it opens up with a hypothetical that should illustrate just how obviously unconstitutional the law is:

Imagine a law that required bookstores, before offering books and services to the public, to assess whether those books and services could “potentially harm” their youngest patrons; develop plans to “mitigate or eliminate” any such risks; and provide those assessments to the state on demand. Under this law, bookstores could only carry books the state deemed “appropriate” for young children unless they verified the age of each patron at the door. Absent such age verification, employees could not ask customers about the types of books they preferred or whether they had enjoyed specific titles—let alone recommend a book based on customers’ expressed interests—without a “compelling” reason that doing so was in the “best interests” of children. And the law would require bookstores to enforce their store rules and content standards to the state’s satisfaction, eliminating the bookstores’ discretion as to how those rules should be applied. Penalties for violations could easily bankrupt even large bookstores. Such a scheme would plainly violate fundamental constitutional protections.

California has enacted just such a measure: The California Age Appropriate Design Code Act (AB 2273). Although billed as a “data protection” regulation to protect minors, AB 2273 is the most extensive attempt by any state to censor speech since the birth of the internet. It does this even though the State has conceded that an open, vibrant internet is indispensable to American life. AB 2273 enacts a system of prior restraint over protected speech using undefined, vague terms, and creates a regime of proxy censorship, forcing online services to restrict speech in ways the State could never do directly. The law violates the First Amendment and the Commerce Clause, and is preempted by the Children’s Online Privacy Protection Act (COPPA), 15 U.S.C. §§ 6501 et seq., and Section 230 of the Communications Decency Act, 47 U.S.C. § 230. Because AB 2273 forces online providers to act now to redesign services, irrespective of its formal effective date, it will cause imminent irreparable harm. The Court should enjoin the statute.

As for my own filing, it was important for me to make clear that a law like AB 2273 is a direct attack on Techdirt and its users’ expression.

Techdirt understands that AB 2273 will require covered businesses to evaluate and mitigate the risk that “potentially harmful content” will reach children, with children defined to equally cover every age from 0 to 18 despite the substantial differences in developmental readiness and ability to engage in the world around them throughout that nearly two-decade age range. This entire endeavor results in the State directly interfering with my company’s and my expressive rights by limiting to whom and how we can communicate to others. I publish Techdirt with the deliberate intention to share my views (and those of other authors) with the public. This law will inhibit my ability to do so in concrete and measurable ways.

In addition to its overreaching impact, the law’s prohibitions also create chilling ambiguity, such as in its use of the word “harm.” In the context of the issues that Techdirt covers on a daily basis, there is no feasible way that Techdirt can determine whether any number of its articles could, in one way or another, expose a child to “potentially harmful” content, however the State defines that phrase according to the political climate of the moment. For example, Techdirt covers a broad array of hot-button topics, including reporting on combating police brutality (sometimes with accompanying images and videos), online child sexual abuse, bullying, digital sexual harassment, and law enforcement interrogations of minors—all of which could theoretically be deemed by the State to be “potentially harmful” to children. Moreover, Techdirt’s articles are known for their irreverent and snarky tone, and frequently use curse words in their content and taglines. It would be impossible to know whether this choice of language constitutes “potentially harmful content” given the absence of any clear definition of the term in AB 2273. Screening Techdirt’s forum for “potentially harmful” content—and requiring Techdirt to self-report the ways its content and operations could hypothetically “harm” children—will thus cause Techdirt to avoid publishing or hosting content that could even remotely invite controversy, undermining Techdirt’s ability to foster lively and uninhibited debate on a wide range of topics of its choosing. Moreover, not only would Techdirt’s prospective expression be chilled, but the retroactive application of AB 2273 would result in Techdirt needing to censor its previous expression, and to an enormous degree. The sheer number of posts and comments published on Techdirt makes the self-assessment needed to comply with the law’s ill-defined rules functionally impossible, requiring an enormous allocation of resources that Techdirt is unable to dedicate.

Also, the age verification requirements would fundamentally put the privacy of all of our readers at risk by forcing us to collect data about our users that we do not want and have gone to great lengths to avoid collecting.

Redesigning our publication to verify the ages of our readers would also compromise our deliberate practice to minimize how much data we collect and retain about our readers to both limit our obligations that would arise from the handling of such data as well as preserve trust with our readers and undermine our relationship with our readers of any age, including teenagers, by subjecting them to technologies that are at best, unreliable, and at worst, highly privacy-intrusive (such as facial recognition). Moreover, because a sizeable portion of Techdirt’s readership consists of casual readers who access the site for information and news, any requirement that forces users to submit extensive personal information simply to access Techdirt’s content risks driving away these readers and shrinking Techdirt’s audience.

I have no idea how the courts are going to treat this law. Again, it does feel like many in the industry have decided to embrace and support this kind of regulation. I’ve heard from too many people inside the industry who have said not to speak up about it. But it’s such a fundamentally dangerous bill, with an approach that we’re starting to see show up in other states, that it was too important not to speak up.

Filed Under: 1st amendment, aadc, ab 2273, age appropriate design code, age verification, facial scanning, free expression, kids code, privacy
Companies: netchoice