CA Governor Newsom And AG Bonta Pretend Court Agreed With Them On Kids Code

from the you-don't-have-to-do-this dept

Dear California Governor Newsom and Attorney General Bonta: you really don’t have to be the opposite end of the extremists in Florida and Texas. You don’t have to lie to your constituents and pretend losses are wins. Really. Trust me.

You may recall that the Attorneys General of Texas and Florida have taken to lying to the public when they lose big court cases. There was the time when Texas AG Ken Paxton claimed “a win” when the Supreme Court did exactly the opposite of what he had asked it to do.

Or the time when Florida AG Ashley Moody declared victory after the Supreme Court made it quite clear that Florida’s social media law was unconstitutional, but sent it back to the lower court to review on procedural grounds.

And now it looks like Newsom and Bonta are doing the same sort of thing, claiming victory out of an obvious loss, just on the basis of some procedural clean-up (ironically, the identical procedural clean-up that Moody declared victory over).

As you’ll recall, we just wrote about the Ninth Circuit rejecting California’s Age Appropriate Design Code (AADC) as an obvious First Amendment violation (just as we had warned both Bonta and Newsom, only to be ignored). However, because of the results in the Supreme Court decision in Moody, the Ninth Circuit sent some parts of the law back to the lower court.

The details here are kind of important. In the Moody decision, the Supreme Court said for there to be a “facial challenge” against an entire law (i.e., a lawsuit saying “this whole law is unconstitutional, throw it out”), the lower courts have to consider every part of the law and whether or not every aspect and every possible application is unconstitutional. In the Texas and Florida cases, the Supreme Court noted that the lower courts really only reviewed parts of those laws and how they might impact a few companies, rather than really evaluating whether or not some of the laws were salvageable.

However, the ruling also made quite clear that any law that seeks to tell social media companies how to moderate is almost certainly a violation of the First Amendment.

In the challenge to the AADC, most of the case (partly at the request of the district court judge!) focused on the “Data Protection Impact Assessment” (DPIA) requirements of the law. This was the main part of the law, and the part that would require websites to justify every single feature they offer and explain how they will “mitigate” any potential risks to kids. The terrible way that this was drafted would almost certainly require websites to come up with plans to remove content the California government disapproved of, as both courts recognized.

But the AADC had a broader scope than the DPIA section.

So, the Ninth Circuit panel sent part of the law back to the lower court following the requirements in the Moody ruling. They said the lower court had to do the full facial challenge, exploring the entirety of the law and how it might be applied, rather than throwing out the whole law immediately.

However (and this is the important part), the Ninth Circuit said that on the DPIA point specifically, which is the crux of the law, there was enough briefing and analysis to show that it was obviously a violation of the First Amendment. It upheld the injunction barring that part of the law from going into effect.

That doesn’t mean the rest of the law is good or constitutional. It just means that now the lower court will need to examine the rest of the law and how it might be applied before potentially issuing another injunction.

In no way and in no world is this a “win” for California.

But you wouldn’t know that to hear Newsom or Bonta respond to the news. They put out a statement that suggests they either don’t know what they’re talking about or they’re hoping the public is too stupid to realize this. It’s very likely the latter, but it’s a terrible look for both Newsom and Bonta. It suggests they’re so deep in their own bullshit that they can’t be honest with the American public. They supported an unconstitutional bill that has now been found to be unconstitutional by both the district and the appeals court.

First up, Newsom:

“California enacted this nation-leading law to shield kids from predatory practices. Instead of adopting these commonsense protections, NetChoice chose to sue — yet today, the Court largely sided with us. It’s time for NetChoice to drop this reckless lawsuit and support safeguards that protect our kids’ safety and privacy.”

Except, dude, they did not “largely side” with you. They largely sided with NetChoice and said there’s not enough briefing on the rest. I mean, read the fucking ruling, Governor:

We agree with NetChoice that it is likely to succeed in showing that the CAADCA’s requirement that covered businesses opine on and mitigate the risk that children may be exposed to harmful or potentially harmful materials online, Cal. Civ. Code §§ 1798.99.31(a)(1)–(2), facially violates the First Amendment. We therefore affirm the district court’s decision to enjoin the enforcement of that requirement, id., and the other provisions that are not grammatically severable from it…

And, no, the law does not shield kids from predatory practices. That’s the whole point that both courts have explained to you: the law pressures websites to remove content, not change conduct.

So, why would NetChoice drop this lawsuit that it is winning? Especially when letting this law go into effect will not protect kids’ safety and privacy, and would actually likely harm both, by encouraging privacy-destroying age verification?

As for Bonta:

“We’re pleased that the Ninth Circuit reversed the majority of the district court’s injunction, which blocked California’s Age-Appropriate Design Code Act from going into effect. The California Department of Justice remains committed to protecting our kids’ privacy and safety from companies that seek to exploit their online experiences for profit.”

Yeah, again, it did not “reverse the majority.” It upheld the key part, and the only part that was really debated in the lower court. It sent the rest back to be briefed on, and it could still be thrown out once the judges see what nonsense you’ve been pushing.

It wasn’t entirely surprising when Paxton and Moody pulled this kind of shit. After all, the GOP has made it clear that they’re the party of “alternative facts.” But the Democrats don’t need to do the same at the other end of the spectrum. We’ve already seen that Newsom’s instincts are to copy the worst of the GOP, but in favor of policies he likes. This is unfortunate. We don’t need insufferable hacks running both major political parties.

Look, is it so crazy to just ask for our politicians to not fucking lie to their constituents? If they can’t be honest about basic shit like this, what else are they lying about? You lost a case because you supported a bad law. Suck it up and admit it.

Filed Under: 9th circuit, aadc, ab 2273, california, dpia, facial challenge, gavin newsom, lies, rob bonta
Companies: netchoice

Court Sees Through California’s ‘Protect The Children’ Ruse, Strikes Down Kids Code

from the gee,-who-could-have-predicted dept

Friday morning gave us a nice victory for free speech in the 9th Circuit, where the appeals court panel affirmed most of the district court’s ruling finding California’s “Age Appropriate Design Code” unconstitutional as it regulated speech.

There’s a fair bit of background here that’s worth going over, so bear with me. California’s Age Appropriate Design Code advanced through the California legislature somewhat quietly, with little opposition. Many of the bigger companies, like Meta and Google, were said to support it, mainly because they knew they could easily comply with their buildings full of lawyers, whereas smaller competitors would be screwed.

Indeed, for a period of time it felt like only Professor Eric Goldman and I were screaming about the problems of the law. The law was drafted in part by a British Baroness and Hollywood movie director who fell deep for the moral panic that the internet and mobile phones are obviously evil for kids. Despite the lack of actual evidence supporting this, she has been pushing for laws in the UK and America to suppress speech she finds harmful to kids.

In the US, some of us pointed out how this violates the First Amendment. I also pointed out that the law is literally impossible to comply with for smaller sites like Techdirt.

The Baroness and the California legislators (who seem oddly deferential to her) tried to get around the obvious First Amendment issues by insisting that the bill was about conduct and design and not about speech. But as we pointed out, that was obviously a smokescreen. The only way to truly comply with the law was to suppress speech that politicians might later deem harmful to children.

California Governor Gavin Newsom eagerly signed the bill into law, wanting to get some headlines about how he was “protecting the children.” When NetChoice challenged the law, Newsom sent them a very threatening letter, demanding they drop the lawsuit. Thankfully, they did not, and the court saw through the ruse and found the entire bill unconstitutional for the exact reasons we had warned the California government about.

The judge recognized that the bill required the removal of speech, despite California’s claim that it was about conduct and privacy. California (of course) appealed, and now we have the 9th Circuit which has mostly (though not entirely) agreed with the district court.

The real wildcard in all of this was the Supreme Court’s decision last month in what is now called the Moody case, which also involved NetChoice challenging Florida’s and Texas’ social media laws. The Supreme Court said that the cases should be litigated differently as a “facial challenge” rather than an “as-applied challenge” to the law. And it seems that decision is shaking up a bunch of these cases.

But here, the 9th Circuit interpreted it to mean that it could send part of the case back down to the lower court to do a more thorough analysis on some parts of the AADC that weren’t as clearly discussed or considered. In a “facial challenge,” the courts are supposed to consider all aspects of the law, and whether or not they all violate the Constitution, or if some of them are salvageable.

On the key point, though, the 9th Circuit panel rightly found that the AADC violates the First Amendment. Because no matter how much California claims that it’s about conduct, design, or privacy, everyone knows it’s really about regulating speech.

Specifically, they call out the DPIA requirement. This is a major portion of the law, which requires certain online businesses to create and file a “Data Protection Impact Assessment” with the California Attorney General. Part of that DPIA is that you have to explain how you plan to “mitigate the risk” that “potentially harmful content” will reach children (defined as anyone from age 0 to 18).

And we’d have to do that for every “feature” on the website. Do I think that a high school student might read Techdirt’s comments and come across something the AG finds harmful? I need to first explain our plans to “mitigate” that risk. That sure sounds like a push for censorship.

And the Court agrees this is a problem. First, it’s a problem because of the compelled speech part of it:

We agree with NetChoice that the DPIA report requirement, codified at §§ 1798.99.31(a)(1)–(2) of the California Civil Code, triggers review under the First Amendment. First, the DPIA report requirement clearly compels speech by requiring covered businesses to opine on potential harm to children. It is well-established that the First Amendment protects “the right to refrain from speaking at all.”

California argued that because the DPIA reports are not public, it’s not compelled speech, but the Court (rightly) says that’s… not a thing:

The State makes much of the fact that the DPIA reports are not public documents and retain their confidential and privileged status even after being disclosed to the State, but the State provides no authority to explain why that fact would render the First Amendment wholly inapplicable to the requirement that businesses create them in the first place. On the contrary, the Supreme Court has recognized the First Amendment may apply even when the compelled speech need only be disclosed to the government. See Ams. for Prosperity Found. v. Bonta, 594 U.S. 595, 616 (2021). Accordingly, the district court did not err in concluding that the DPIA report requirement triggers First Amendment scrutiny because it compels protected speech.

More importantly, though, the Court recognizes that the entire underlying purpose of the DPIA system is to encourage websites to remove First Amendment-protected content:

Second, the DPIA report requirement invites First Amendment scrutiny because it deputizes covered businesses into serving as censors for the State. The Supreme Court has previously applied First Amendment scrutiny to laws that deputize private actors into determining whether material is suitable for kids. See Interstate Cir., Inc. v. City of Dallas, 390 U.S. 676, 678, 684 (1968) (recognizing that a film exhibitor’s First Amendment rights were implicated by a law requiring it to inform the government whether films were “suitable” for children). Moreover, the Supreme Court recently affirmed “that laws curtailing [] editorial choices [by online platforms] must meet the First Amendment’s requirements.” Moody, 144 S. Ct. at 2393.

The state’s argument that this analysis is unrelated to the underlying content is easily dismissed:

At oral argument, the State suggested companies could analyze the risk that children would be exposed to harmful or potentially harmful material without opining on what material is potentially harmful to children. However, a business cannot assess the likelihood that a child will be exposed to harmful or potentially harmful materials on its platform without first determining what constitutes harmful or potentially harmful material. To take the State’s own example, data profiling may cause a student who conducts research for a school project about eating disorders to see additional content about eating disorders. Unless the business assesses whether that additional content is “harmful or potentially harmful” to children (and thus opines on what sort of eating disorder content is harmful), it cannot determine whether that additional content poses a “risk of material detriment to children” under the CAADCA. Nor can a business take steps to “mitigate” the risk that children will view harmful or potentially harmful content if it has not identified what content should be blocked.

Accordingly, the district court was correct to conclude that the CAADCA’s DPIA report requirement regulates the speech of covered businesses and thus triggers review under the First Amendment.

I’ll note that this is an issue that is coming up in lots of other laws as well. For example, KOSA has defenders who insist that it is only focused on design, and not content. But at the same time, it talks about preventing harms around eating disorders, which is fundamentally a content issue, not a design issue.

The Court says that the DPIA requirement triggers strict scrutiny. The district court ruling had looked at it under intermediate scrutiny (a lower bar), found that it didn't pass that bar, and said that even if strict scrutiny were appropriate, the law wouldn't pass, since it couldn't even meet the lower bar. The appeals court basically says it can jump straight to strict scrutiny:

Accordingly, the court assumed for the purposes of the preliminary injunction “that only the lesser standard of intermediate scrutiny for commercial speech applies” because the outcome of the analysis would be the same under both intermediate commercial speech scrutiny and strict scrutiny. Id. at 947–48. While we understand the district court’s caution against prejudicing the merits of the case at the preliminary injunction stage, there is no question that strict scrutiny, as opposed to mere commercial speech scrutiny, governs our review of the DPIA report requirement.

And, of course, the DPIA requirement fails strict scrutiny in part because it’s obviously not the least speech restrictive means of accomplishing its goals:

The State could have easily employed less restrictive means to accomplish its protective goals, such as by (1) incentivizing companies to offer voluntary content filters or application blockers, (2) educating children and parents on the importance of using such tools, and (3) relying on existing criminal laws that prohibit related unlawful conduct.

In this section, the court also responds to the overhyped fears that finding the DPIAs unconstitutional here would mean that they are similarly unconstitutional in other laws, such as California’s privacy law. But the court says “um, guys, one of these is about speech, and one is not.”

Tellingly, iLit compares the CAADCA’s DPIA report requirement with a supposedly “similar DPIA requirement” found in the CCPA, and proceeds to argue that the district court’s striking down of the DPIA report requirement in the CAADCA necessarily threatens the same requirement in the CCPA. But a plain reading of the relevant provisions of both laws reveals that they are not the same; indeed, they are vastly different in kind.

Under the CCPA, businesses that buy, receive, sell, or share the personal information of 10,000,000 or more consumers in a calendar year are required to disclose various metrics, including but not limited to the number of requests to delete, to correct, and to know consumers’ personal information, as well as the number of requests from consumers to opt out of the sale and sharing of their information. 11 Cal. Code Regs. tit. 11, § 7102(a); see Cal Civ. Code § 1798.185(a)(15)(B) (requiring businesses to conduct regular risk assessments regarding how they process “sensitive personal information”). That obligation to collect, retain, and disclose purely factual information about the number of privacy-related requests is a far cry from the CAADCA’s vague and onerous requirement that covered businesses opine on whether their services risk “material detriment to children” with a particular focus on whether they may result in children witnessing harmful or potentially harmful content online. A DPIA report requirement that compels businesses to measure and disclose to the government certain types of risks potentially created by their services might not create a problem. The problem here is that the risk that businesses must measure and disclose to the government is the risk that children will be exposed to disfavored speech online.

Then, the 9th Circuit basically gives up on the other parts of the AADC. The court effectively says that since the briefing was so focused on the DPIA part of the law, and now (thanks to the Moody ruling) a facial challenge requires a full exploration of all aspects of the law, the rest should be sent back to the lower court:

As in Moody, the record needs further development to allow the district court to determine “the full range of activities the law[] cover[s].” Moody, 144 S. Ct. at 2397. But even for the remaining provision that is likely to trigger First Amendment scrutiny in every application because the plain language of the provision compels speech by covered businesses, see Cal. Civ. Code §§ 1798.99.31(a)(7), we cannot say, on this record, that a substantial majority of its applications are likely to fail First Amendment scrutiny.

For example, the Court notes that there's a part of the law dealing with "dark patterns," but there's not enough information to know whether that could impact speech (spoiler alert: it absolutely can and will).

Still, the main news here is this: the law is still not going into effect. The Court recognizes that the DPIA part of the law is pretty clearly an unconstitutional violation of the First Amendment (just as some of us warned Newsom and the California legislature).

Maybe California should pay attention next time (he says sarcastically as a bunch of new bad bills are about to make their way to Newsom’s desk).

Filed Under: 9th circuit, aadc, ab 2273, age appropriate design code, california, dpia, gavin newsom, protect the children, rob bonta
Companies: netchoice

Judge Seems Skeptical That California’s Age Appropriate Design Code Is Compatible With The 1st Amendment

from the fingers-crossed dept

We’ve talked a few times about California’s “Age Appropriate Design Code.” This is a bill in California that was “sponsored” and pushed for by a UK Baroness (who is also a Hollywood filmmaker and has fallen for moral panic myths about evil technology). As we explained, there is no way for a site like Techdirt to comply with the law. The law is vague and has impossible standards.

While the law says it does not require age verification, it does in effect. It says you have to reasonably “estimate” the age of visitors to a website (something we have zero ability to do, and I have no desire to collect such info), and then do an analysis of every feature on our website to see how it might cause harm to children, as well as put together a plan to “mitigate” such harm. If a site refuses to do “age estimation” (i.e., verification), then it must apply policies designed to mitigate harm to minors to every visitor.

As professor Eric Goldman noted, this bill is a radical experiment on children, from a legislature that claims it’s trying to stop radical experiments on children. As I discussed earlier this year, I submitted a declaration in the lawsuit to invalidate the law, filed by the trade group NetChoice, explaining how the law is a direct attack on Techdirt’s expression.

This past Thursday afternoon, I went to the courthouse in San Jose to watch the oral arguments regarding NetChoice’s motion for a preliminary injunction. I was pretty nervous as to how it would go, because even well-meaning people sometimes put up blinders when people start talking about “protecting the children,” never stopping to look closely at the details.

I came out of the hearing very cautiously optimistic. Now, I always say that you should never read too much into the types of questions a judge asks during oral arguments, but Judge Beth Labson Freeman (as I’ve seen her do in other cases as well) kicked off the hearing by being quite upfront with everyone, telling them where her mind was after reading all the filings in the case. In effect, she said the key issue in her mind was whether or not the AADC actually regulates speech. If it does, then it’s probably an unconstitutional infringement of the 1st Amendment. If it does not, then it’s probably allowed. She specifically said that if she determines the law regulates speech, then the law is clearly not content neutral, meaning it would have to survive strict scrutiny under the 1st Amendment.

So she asked the attorneys to focus on that aspect, though she said there would be time to cover some of the other arguments as well.

She also noted that, of course, keeping children safe online was a laudable goal, and she was sure that everyone supported that goal. And she noted that the fact that the law was passed unanimously “weighed heavily” on her thinking. However, at the end of the day, her role is not to determine if the law is a good law, but just if it’s constitutional.

While California argued that the law doesn’t impact speech, and only “data management,” the judge seemed skeptical. She pushed back multiple times on California Deputy Attorney General Elizabeth Watson, who handled the arguments for the state. For NetChoice, Ambika Kumar pointed out how nearly every part of the law focused on content, and even that the declarations the state offered up from “experts,” as well as statements made by state officials about the law, all focused on the problems of “harmful content.”

The state kept trying to insist that the law only applied to the “design” of a website, not the content, but the judge seemed skeptical that you could draw that line. At one point, she noted that the “design” of the NY Times includes the content of the articles.

California tried to argue that the requirement to do a Data Protection Impact Assessment (DPIA) for every feature was simple, and that since there was no real enforcement mechanism, you couldn’t get punished for having every DPIA just say “there’s no impact.” They also claimed that while the law does require a “timed plan” to “mitigate or eliminate” any risk, it was, again, up to the sites to determine what that plan is.

This left Judge Freeman somewhat incredulous, saying that basically the state of California was telling every company to fill out every DPIA saying that there was no risk to anything they did, and if they did see any risk to create a plan that says “we’ll solve this in 50 years” since that is a “timed plan.” She questioned why California would say such a thing. She highlighted that this seemed to suggest the law was too vague, which would be a 1st Amendment issue.

The judge also clearly called out that the law suggests kids should be prevented from accessing harmful content, and questioned how this wasn’t a speech regulation. At one point she asked: as a parent, if you want your kids to read stories in the NY Times that might upset a child, shouldn’t that be up to the parents, not the state?

The state also kept trying to argue that websites “have no right” to collect data, and the judge pointed out that they cite no authority for that. The discussion turned, repeatedly, to the Supreme Court’s ruling in Sorrell v. IMS Health regarding 1st Amendment rights of companies to sell private data regarding pharmaceuticals for marketing. The judge repeatedly seemed to suggest that Sorrell strongly supported NetChoice’s argument, while California struggled to argue that case was different.

At one point, in trying to distinguish Sorrell from this law, California argued that Sorrell was about data about adults, and this bill is about kids (a “won’t you just think of the children” kind of argument) and the judge wasn’t buying it. She pointed out that we already have a federal law in COPPA that gives parents tools to help protect their children. The state started to talk about how hard it was for parents to do so, and the judge snapped back, asking if there was a 1st Amendment exception for when things are difficult for parents.

Later, California tried again to say that NetChoice has to show why companies have a right to data, and the judge literally pointed out that’s not how the 1st Amendment works, saying that we don’t “start with prohibition” and then make entities prove they have a right to speak.

Another strong moment was when the judge quizzed the state regarding the age verification stuff. California tried to argue that companies already collect age data (note: we don’t!) and all the law required them to do was to use that data they already collected to treat users they think are kids differently. But, the judge pushed back and noted that the law effectively says you have to limit access to younger users. California said that businesses could decide for themselves, and the judge jumped in to say that the statute says that companies must set defaults to the level most protective of children, saying: “So, we can only watch Bugs Bunny? Because that’s how I see it,” suggesting that the law would require the Disneyfication of the web.

There was also a fair bit of discussion about a provision in the law requiring companies to accurately enforce their terms of service. NetChoice pointed out that this was also a 1st Amendment issue, because if a site put in its terms that it does not allow speech that is “in poor taste,” and the Attorney General enforces the law, saying that the site did not enforce that part of its terms, then that means the state is determining what is, and what is not, in poor taste, which is a 1st Amendment issue. California retorted that there needs to be some way to deal with a site saying that it won’t collect data, but then doing so anyway. And the judge pointed out that, in that case, there might be a breach of contract claim, or that the AG already has the power to go after companies using California’s Unfair Competition Law that bars deceptive advertising (raising the question of why they need this broad and vague law).

There were some other parts of the discussion, regarding whether the judge could break apart the law, leaving some of it in place and dumping other parts. There was a fair amount of discussion about the scrutiny to apply if the judge finds that the law regulates speech, and how the law would play out under such scrutiny (though, again, the judge suggested a few times that the law was unlikely to survive even intermediate scrutiny).

There was also some talk about the Dormant Commerce Clause, which the Supreme Court just somewhat limited. There, NetChoice brought up that the law could create real problems, since it applies to “California residents,” even when they’re out of state. That means the law could conflict with the laws of another state a California resident was visiting, or create situations where companies would need to know that a user was a California resident even while out of state.

The state tried to brush this aside, saying it was an edge case, and suggested it was silly to think that the Attorney General would try to enforce such a case. This did not impress the judge, who noted she can’t consider the likelihood of enforcement in reviewing a challenge to the constitutionality of a law. She has to assume that the state passed the law intending to enforce every violation of it.

Section 230 was mostly not mentioned, as the judge noted that it seemed too early for such a discussion, especially if the 1st Amendment handled the issue. She did note that 230 issues might come up if she allowed the law to go into effect and the state then brought actions against companies, which might be able to use 230 to get those actions dismissed.

Also, there was a point where, when exploring the “speech v. data” question, NetChoice (correctly) pointed out that social media companies publish a ton of user content, and the judge (incorrectly) said “but under 230 it says they’re not publishers,” leading Kumar to politely correct the judge, that it says you can’t treat the company as a publisher, but that doesn’t mean it’s not a publisher.

At another point, the judge also questioned how effective such a law could be (as part of the strict scrutiny analysis), noting that there was no way kids were going to stop using social media, even if the state tried to ban it entirely.

As I said up top, I came out of the hearing cautiously optimistic. The judge seemed focused on the 1st Amendment issues, and spent the (fairly long!) hearing digging in on each individual point that would impact that 1st Amendment analysis (many of which I didn’t cover here, as this is already long enough…).

The judge did note she doesn’t expect her ruling to come out that quickly, and seemed relieved that the law doesn’t go into effect until next summer, but NetChoice (rightly!) pointed out that the effects of the law are already being felt, as companies need to prepare for the law to go into effect, which seemed to take the judge by surprise.

There’s also a bit more supplemental briefing that the judge requested, which will take place next month. So… guessing it’ll be a while until we hear how the judge decides (at which point it will be appealed to the 9th Circuit anyway).

Filed Under: 1st amendment, aadc, ab 2273, age estimation, age verification, california, commerce clause, dpia, free speech, prior restraint, protect the children, section 230
Companies: netchoice