age assurance – Techdirt

Utah’s ‘Protect The Kids Online!’ Law Rejected By Court

from the utah-does-it-again dept

Over the last few years, politicians in Utah have been itching to pass terrible internet legislation. Some of you may forget that in the earlier part of the century, Utah became somewhat famous for passing absolutely terrible internet laws that the courts then had to clean up. In the last few years, it’s felt like other states have surpassed Utah, and maybe its lawmakers were getting a bit jealous about losing their “we pass batshit crazy unconstitutional internet laws” crown.

So, two years ago, they started pushing a new round of such laws. Even as they were slamming social media as dangerous and evil, Utah Governor Spencer Cox proudly signed the new law, streaming the signing on all the social media sites he insisted were dangerous. When NetChoice sued, the state realized the original law was going to get laughed out of court and asked for a do-over, promising to repeal and replace it with something better. The new law changed basically nothing, though, and NetChoice filed an updated lawsuit.

The law required social media companies to engage in “age assurance” (which is just a friendlier name for age verification, but still a privacy nightmare) and then restrict access to certain types of content and features for “minor accounts.”

Cox also somewhat famously got into a fight on ExTwitter with First Amendment lawyer Ari Cohn. When Cohn pointed out that the law clearly violates the First Amendment, Cox insisted: “Can’t wait to fight this lawsuit. You are wrong and I’m excited to prove it.” When Cohn continued to point out the law’s flaws, Cox responded “See you in court.”

The Twitter exchange between Cohn and Cox as described above with Cox concluding "see you in court."

In case you’re wondering how the lawsuit is going, last night Ari got to post an update:

Ari Cohn quote tweeting Cox's "See you in court" tweet, and saying "ope" with a screenshot of the conclusion from the court enjoining the law as unconstitutional.

The law is enjoined. The court found it to likely be unconstitutional, just as Ari and plenty of other First Amendment experts expected. This case has been a bit of a roller coaster, though. A month and a half ago, the court said that Section 230 preemption did not apply to the case. The analysis on that made no sense. As we just saw, a court in Texas threw out a very similar law and said that since it tried to limit how sites could moderate content, it was preempted by Section 230. But, for a bunch of dumb reasons, the judge here, Robert Shelby, argued that the law wasn’t actually trying to impact content moderation (even though it clearly was).

But, that was only part of the case. The latest ruling found that the law almost certainly violates the First Amendment anyway:

NetChoice’s argument is persuasive. As a preliminary matter, there is no dispute the Act implicates social media companies’ First Amendment rights. The speech at issue in this case— the speech social media companies engage in when they make decisions about how to construct and operate their platforms—is protected speech. The Supreme Court has long held that “[a]n entity ‘exercis[ing] editorial discretion in the selection and presentation’ of content is ‘engage[d] in speech activity’” protected by the First Amendment. And this July, in Moody v. NetChoice, LLC, the Court affirmed these First Amendment principles “do not go on leave when social media are involved.” Indeed, the Court reasoned that in “making millions of . . . decisions each day” about “what third-party speech to display and how to display it,” social media companies “produce their own distinctive compilations of expression.”

Furthermore, following the Supreme Court’s ruling earlier this year in Moody on whether an entire law can be struck down via a “facial” challenge, the court says “yes” (an issue that has recently limited similar rulings in Texas and California):

NetChoice has shown it is substantially likely to succeed on its claim the Act has “no constitutionally permissible application” because it imposes content-based restrictions on social media companies’ speech, such restrictions require Defendants to show the Act satisfies strict scrutiny, and Defendants have failed to do so.

Utah tries to argue that this law is not about speech and content, but rather about conduct and “structure,” as California did in challenges to its “kids code” law. The court is not buying it:

Defendants respond that the Definition contemplates a social media service’s “structure, not subject matter.” However, Defendants’ argument emphasizes the elements of the Central Coverage Definition that relate to “registering accounts, connecting accounts, [and] displaying user-generated content” while ignoring the “interact socially” requirement. And unlike the premises-based distinction at issue in City of Austin, the social interaction-based distinction does not appear designed to inform the application of otherwise content-neutral restrictions. It is a distinction that singles out social media companies based on the “social” subject matter “of the material [they] disseminate[].” Or as Defendants put it, companies offering services “where interactive, immersive, social interaction is the whole point.”

The court notes that Utah seems to misunderstand the issue, and finds the idea that this law is content neutral to be laughable:

Defendants also respond that the Central Coverage Definition is content neutral because it does not prevent “minor account holders and other users they connect with [from] discuss[ing] any topic they wish.” But in this respect, Defendants appear to misunderstand the essential nature of NetChoice’s position. The foundation of NetChoice’s First Amendment challenge is not that the Central Coverage Definition restricts minor social media users’ ability to, for example, share political opinions. Rather, the focus of NetChoice’s challenge is that the Central Coverage Definition restricts social media companies’ abilities to collage user-generated speech into their “own distinctive compilation[s] of expression.”

Moreover, because NetChoice has shown the Central Coverage Definition facially distinguishes between “social” speech and other forms of speech, it is substantially likely the Definition is content based and the court need not consider whether NetChoice has “point[ed] to any message with which the State has expressed disagreement through enactment of the Act.”

Given all that, strict scrutiny applies, and there’s no way this law passes strict scrutiny. The first prong of the test is whether there’s a compelling state interest in passing such a law. And even though the law is wrapped up in the moral panic over kids on the internet, the court says the bar is higher than that. Because we’ve done this before, with California trying to regulate video games, an effort the Supreme Court struck down thirteen years ago:

To satisfy this exacting standard, Defendants must “specifically identify an ‘actual problem’ in need of solving.” In Brown v. Entertainment Merchants Association, for example, the Supreme Court held California failed to demonstrate a compelling government interest in protecting minors from violent video games because it lacked evidence showing a causal “connection between exposure to violent video games and harmful effects on children.” Reviewing psychological studies California cited in defense of its position, the Court reasoned research “show[ed] at best some correlation between exposure to violent entertainment” and “real-world effects.” This “ambiguous proof” did not establish violent videogames were such a problem that it was appropriate for California to infringe on its citizens’ First Amendment rights. Likewise, the Court rejected the notion that California had a compelling interest in “aiding parental authority.” The Court reasoned the state’s assertion ran contrary to the “rule that ‘only in relatively narrow and well-defined circumstances may government bar public dissemination of protected materials to [minors].’”

While there’s lots of screaming and yelling about how social media is bad for kids’ mental health, as we directly told Governor Cox, the evidence just doesn’t support the claim. The court seems to recognize that those claims are mostly hot air as well. Indeed, Utah submitted the Surgeon General’s report as “proof,” a report the state apparently didn’t even read. As we noted, contrary to the media reporting on it, that report contains a very nuanced analysis that does not show any causal harm to kids from social media.

The judge absolutely noticed that.

First, though the court is sensitive to the mental health challenges many young people face, Defendants have not provided evidence establishing a clear, causal relationship between minors’ social media use and negative mental health impacts. It may very well be the case, as Defendants allege, that social media use is associated with serious mental health concerns including depression, anxiety, eating disorders, poor sleep, online harassment, low self-esteem, feelings of exclusion, and attention issues. But the record before the court contains only one report to that effect, and that report—a 2023 United States Surgeon General Advisory titled Social Media and Youth Mental Health—offers a much more nuanced view of the link between social media use and negative mental health impacts than that advanced by Defendants. For example, the Advisory affirms there are “ample indicators that social media can . . . have a profound risk of harm to the mental health and well-being of children and adolescents,” while emphasizing “robust independent safety analyses of the impact of social media on youth have not yet been conducted.” Likewise, the Advisory observes there is “broad agreement among the scientific community that social media has the potential to both benefit and harm children and adolescents,” depending on “their individual strengths and vulnerabilities, and . . . cultural, historical, and socio-economic factors.” The Advisory suggests social media can benefit minors by “providing positive community and connection with others who share identities, abilities, and interest,” “provid[ing] access to important information and creat[ing] a space for self-expression,” “promoting help-seeking behaviors[,] and serving as a gateway to initiating mental health care.”

The court is also not at all impressed by the declaration Utah submitted from Jean Twenge, Jonathan Haidt’s partner-in-crime in pushing the baseless moral panic narrative about kids and social media.

Moreover, a review of Dr. Twenge’s Declaration suggests the majority of the reports she cites show only a correlative relationship between social media use and negative mental health impacts. Insofar as those reports support a causal relationship, Dr. Twenge’s Declaration suggests the nature of that relationship is limited to certain populations, such as teen girls, or certain mental health concerns, such as body image.

Then the court points out (thank you!) that kids have First Amendment rights too:

Second, Defendants’ position that the Act serves to protect uninformed minors from the “risks involved in providing personal information to social media companies and other users” ignores the basic First Amendment principle that “minors are entitled to a significant measure of First Amendment Protection.” The personal information a minor might choose to share on a social media service—the content they generate—is fundamentally their speech. And the Defendants may not justify an intrusion on the First Amendment rights of NetChoice’s members with, what amounts to, an intrusion on the constitutional rights of its members’ users…

Furthermore, Utah fails to meet the second prong of strict scrutiny, that the law be “narrowly tailored.” Because it’s not:

To begin, Defendants have not shown the Act is the least restrictive option for the State to accomplish its goals because they have not shown existing parental controls are an inadequate alternative to the Act. While Defendants present evidence suggesting parental controls are not in widespread use, their evidence does not establish parental tools are deficient. It only demonstrates parents are unaware of parental controls, do not know how to use parental controls, or simply do not care to use parental controls. Moreover, Defendants do not indicate the State has tried, or even considered, promoting “the diverse supervisory technologies that are widely available” as an alternative to the Act. The court is not unaware of young people’s technological prowess and potential to circumvent parental controls. But parents “control[] whether their minor children have access to Internet-connected devices in the first place,” and Defendants have not shown minors are so capable of evading parental controls that they are an insufficient alternative to the State infringing on protected speech.

Also, this:

Defendants do not offer any evidence that requiring social media companies to compel minors to push “play,” hit “next,” and log in for updates will meaningfully reduce the amount of time they spend on social media platforms. Nor do Defendants offer any evidence that these specific measures will alter the status quo to such an extent that mental health outcomes will improve and personal privacy risks will decrease

The court also points out that the law targets only social media, not streaming or sports apps; if the features at issue were truly harmful, the law would have to target all of those other apps as well. Utah tried to claim that social media is somehow special and different from those other apps, but the judge notes that the state provides no actual evidence to support this claim.

But Defendants simply do not offer any evidence to support this distinction, and they only compare social media services to “entertainment services.” They do not account for the wider universe of platforms that utilize the features they take issue with, such as news sites and search engines. Accordingly, the Act’s regulatory scope “raises seriously doubts” about whether the Act actually advances the State’s purported interests.

The court also calls out that NetChoice member Dreamwidth, run by the trust & safety expert best known online as @rahaeli, proves how stupid and mistargeted this law is:

Finally, Defendants have not shown the Act is not seriously overinclusive, restricting more constitutionally protected speech than necessary to achieve the State’s goals. Specifically, Defendants have not identified why the Act’s scope is not constrained to social media platforms with significant populations of minor users, or social media platforms that use the addictive features fundamental to Defendants’ well-being and privacy concerns. NetChoice member Dreamwidth, “an open source social networking, content management, and personal publishing website,” provides a useful illustration of this disconnect. Although Dreamwidth fits the Central Coverage Definition’s concept of a “social media service,” Dreamwidth is distinguishable in form and purpose from the likes of traditional social media platforms—say, Facebook and X. Additionally, Dreamwidth does not actively promote its service to minors and does not use features such as seamless pagination and push notification.

The court then also notes that if the law went into effect, companies would face irreparable injury, given the potential fines in the law.

This harm is particularly concerning given the high cost of violating the Act—$2,500 per offense—and the State’s failure to promulgate administrative rules enabling social media companies to avail themselves of the Act’s safe harbor provision before it takes effect on October 1, 2024.

Some users also sued to block the law, but the court rejected that request: those plaintiffs have no clear redressable injury yet, and thus no standing to sue at this point. That could change once the law started being enforced, but thanks to the injunction in the NetChoice portion of the case, the law is not going into effect.

Utah will undoubtedly waste more taxpayer money and appeal the case. But, so far, these laws keep failing in court across the country. And that’s great to see. Kids have First Amendment rights too, and one day, our lawmakers should start to recognize that fact.

Filed Under: 1st amendment, age assurance, age verification, content moderation, kids, protect the children, robert shelby, social media, utah
Companies: netchoice

Google Decides To Pull Up The Ladder On The Open Internet, Pushes For Unconstitutional Regulatory Proposals

from the not-cool dept

It’s pretty much the way of the world: beyond the basic enshittification story that has been so well told over the past year or so about how companies get worse and worse as they get more and more powerful, there’s also the well-known concept of successful innovative companies “pulling up the ladder” behind them, using the regulatory process to make it impossible for other companies to follow their own path to success. We’ve talked about this in the sense of political entrepreneurship, which is when the main entrepreneurial effort is not to innovate with newer and better products for customers, but rather to use the political system for personal gain and to prevent competitors from having the same opportunities.

It happens all too frequently. And it’s been happening lately with the big internet companies, which relied on the open internet to become successful, but which, under massive pressure from regulators (and the media), keep shooting the open internet in the back whenever they can present themselves as “supportive” of some dumb regulatory regime. Facebook did it six years ago by supporting FOSTA wholeheartedly, which was the key shift in the tide that made the law viable in Congress.

And, now, it appears that Google is going down that same path. There have been hints here and there, such as when it mostly gave up the fight on net neutrality six years ago. However, Google had still appeared to be active in various fights to protect an open internet.

But, last week, Google took a big step towards pulling up the open internet ladder behind it, which got almost no coverage (and what coverage it got was misleading). And, for the life of me, I don’t understand why it chose to do this now. It’s one of the dumbest policy moves I’ve seen Google make in ages, and seems like a complete unforced error.

Last Monday, Google announced “a policy framework to protect children and teens online,” which its subsidiary YouTube echoed with basically the same post, touting its “principled approach for children and teenagers.” Both of these pushed not just a “principled approach” for companies to take, but a legislative model (and I hear they’re out pushing “model bills” across legislatures as well).

The “legislative” model is, effectively, California’s Age Appropriate Design Code. Yes, the very law that was declared unconstitutional just a few weeks before Google threw its weight behind the approach. What’s funny is that many, many people have (incorrectly) believed that Google was some sort of legal mastermind behind the NetChoice lawsuits challenging California’s law and other similar laws, when the reality appears to be that Google knows full well that it can handle the requirements of the law, but smaller competitors cannot. Google likes the law. It apparently wants more like it.

The model includes “age assurance” (which is effectively age verification, though everyone pretends it’s not), greater parental surveillance, and the compliance nightmare of “impact assessments” (we talked about this nonsense in relation to the California law). Again, for many companies this is a good idea. But just because something is a good idea for companies to do does not mean that it should be mandated by law.

But that’s exactly what Google is pushing for here, even as a law that more or less mimics its framework was just found to be unconstitutional. While cynical people will say that maybe Google is supporting these policies hoping that they will continue to be found unconstitutional, I see little evidence to support that. Instead, it really sounds like Google is fully onboard with these kinds of duty of care regulations that will harm smaller competitors, but which Google can handle just fine.

It’s pulling up the ladder behind it.

And yet, the press coverage of this focused on the fact that it was being presented as an “alternative” to a full-on ban on anyone under 18 using social media. The Verge framed it as “Google asks Congress not to ban teens from social media,” leaving out that Google was asking Congress to make it basically impossible for any site other than the largest, richest companies to allow teens on social media at all. Same with TechCrunch, which framed it as Google lobbying against age verification.

But… it’s not? It’s basically lobbying for age verification, just in the guise of “age assurance,” which is effectively “age verification, but if you’re a smaller company you can get it wrong some undefined amount of time, until someone sues you.” I mean, what’s here is not “lobbying against age verification,” it’s basically saying “here’s how to require age verification.”

A good understanding of user age can help online services offer age-appropriate experiences. That said, any method to determine the age of users across services comes with tradeoffs, such as intruding on privacy interests, requiring more data collection and use, or restricting adult users’ access to important information and services. Where required, age assurance – which can range from declaration to inference and verification – should be risk-based, preserving users’ access to information and services, and respecting their privacy. Where legislation mandates age assurance, it should do so through a workable, interoperable standard that preserves the potential for anonymous or pseudonymous experiences. It should avoid requiring collection or processing of additional personal information, treating all users like children, or impinging on the ability of adults to access information. More data-intrusive methods (such as verification with “hard identifiers” like government IDs) should be limited to high-risk services (e.g., alcohol, gambling, or pornography) or age correction. Moreover, age assurance requirements should permit online services to explore and adapt to improved technological approaches. In particular, requirements should enable new, privacy-protective ways to ensure users are at least the required age before engaging in certain activities. Finally, because age assurance technologies are novel, imperfect, and evolving, requirements should provide reasonable protection from liability for good-faith efforts to develop and implement improved solutions in this space.

Much like Facebook caving on FOSTA, this is Google caving on age verification and other “duty of care” approaches to regulating how kids access the internet. It’s pulling up the ladder behind itself, knowing that it was able to grow without having to take these steps, and making sure that none of the up-and-coming challengers to Google’s position will have the same freedom to do so.

And, for what? So that Google can go to regulators and say “look, we’re not against regulations, here’s our framework”? But Google has smart policy people. They have to know how this plays out in reality. Just as FOSTA completely backfired on Facebook (and the open internet), this approach will do the same.

Not only will these laws inevitably be used against the companies themselves, they’ll also be weaponized and modified by policymakers who will make them even worse and even more dangerous, all while pointing to Google’s “blessing” of this approach as an endorsement.

For years, Google had been somewhat unique in continuing to fight for the open internet long after many other companies had switched over to ladder pulling. There were hints in the past that Google was going down this path, but with this policy framework, the company has made it clear that it has no intention of being a friend to the open internet anymore.

Filed Under: aadc, age appropriate design code, age assurance, age estimation, age verification, duty of care, for the children
Companies: google

Recent Case Highlights How Age Verification Laws May Directly Conflict With Biometric Privacy Laws

from the privacy-nightmare dept

California passed the California Age-Appropriate Design Code (AADC) nominally to protect children’s privacy, but at the same time, the AADC requires businesses to do an age “assurance” of all their users, children and adults alike. (Age “assurance” requires the business to distinguish children from adults, but the methodology to implement it has many of the same characteristics as age verification–it just needs to be less precise for anyone who isn’t around the age of majority. I’ll treat the two as equivalent.)

Doing age assurance/age verification raises substantial privacy risks. There are several ways of doing it, but the two primary options for quick results are (1) requiring consumers to submit government-issued documents, or (2) requiring consumers to submit to face scans that allow the algorithms to estimate the consumer’s age.

[Note: the differences between the two techniques may be legally inconsequential, because a service may want a confirmation that the person presenting the government documents is the person requesting access, which may essentially require a review of their face as well.]
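To make the tradeoff concrete, here is a minimal, hypothetical sketch of how a tiered age-assurance check might be wired up in code. It is not any particular vendor’s API or a compliance-ready implementation; the names, thresholds, and the idea of padding a face-scan estimate with a margin near the age of majority are all illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgeSignals:
    """Hypothetical container for whatever age evidence a service has collected."""
    declared_age: Optional[int] = None      # self-declaration (lowest assurance)
    estimated_age: Optional[float] = None   # face-scan estimate from an ML model
    id_verified_age: Optional[int] = None   # age parsed from a government-issued ID

def is_adult(signals: AgeSignals, threshold: int = 18, estimate_margin: float = 3.0) -> bool:
    """Tiered check: use the strongest signal available.

    The margin reflects that face-scan estimators are imprecise near the age
    of majority, so an estimate of, say, 19 is not treated as proof of adulthood.
    """
    if signals.id_verified_age is not None:    # document-based verification
        return signals.id_verified_age >= threshold
    if signals.estimated_age is not None:      # biometric age estimation
        return signals.estimated_age >= threshold + estimate_margin
    if signals.declared_age is not None:       # bare self-declaration
        return signals.declared_age >= threshold
    return False                               # no usable signal: treat as a minor
```

Note that both of the higher-assurance branches require collecting exactly the kind of data (ID documents, face geometry) that biometric and data-protection laws are designed to restrict, which is the conflict discussed below.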

But are face scans really an option for age verification, or will they conflict with other privacy laws? In particular, face scanning seemingly conflicts directly with biometric privacy laws, such as Illinois’ BIPA, which impose substantial restrictions on the collection, use, and retention of biometric information. (California’s Privacy Rights Act, CPRA, which the AADC supplements, also provides substantial protections for biometric information, which is classified as “sensitive” information.) If a business purports to comply with the CA AADC by using face scans for age assurance, will that business simultaneously violate BIPA and other biometric privacy laws?

Today’s case doesn’t answer the question, but boy, it’s a red flag.

The court summarizes BIPA Sec. 15(b):

Section 15(b) of the Act deals with informed consent and prohibits private entities from collecting, capturing, or otherwise obtaining a person’s biometric identifiers or information without the person’s informed written consent. In other words, the collection of biometric identifiers or information is barred unless the collector first informs the person “in writing of the specific purpose and length of term for which the data is being collected, stored, and used” and “receives a written release” from the person or his legally authorized representative

Right away, you probably spotted three potential issues:

[Another possible tension is whether the business can retain face scans, even with BIPA consent, in order to show that each user was authenticated if challenged in the future, or if the face scans need to be deleted immediately, regardless of consent, to comply with privacy concerns in the age verification law.]

The primary defendant at issue, Binance, is a cryptocurrency exchange. (There are two Binance entities at issue here, BCM and BAM, but BCM drops out of the case for lack of jurisdiction). Users creating an account had to go through an identity verification process run by Jumio. The court describes the process:

Jumio’s software…required taking images of a user’s driver’s license or other photo identification, along with a “selfie” of the user to capture, analyze and compare biometric data of the user’s facial features….

During the account creation process, Kuklinski entered his personal information, including his name, birthdate and home address. He was also prompted to review and accept a “Self-Directed Custodial Account Agreement” for an entity known as Prime Trust, LLC that had no reference to collection of any biometric data. Kuklinski was then prompted to take a photograph of his driver’s license or other state identification card. After submitting his driver’s license photo, Kuklinski was prompted to take a photograph of his face with the language popping up “Capture your Face” and “Center your face in the frame and follow the on-screen instructions.” When his face was close enough and positioned correctly within the provided oval, the screen flashed “Scanning completed.” The next screen stated, “Analyzing biometric data,” “Uploading your documents”, and “This should only take a couple of seconds, depending on your network connectivity.”

Allegedly, none of the Binance or Jumio legal documents make the BIPA-required disclosures.

The court rejects Binance’s (BAM) motion to dismiss:

Jumio’s motion to dismiss also goes nowhere:

[The Sosa v. Onfido case also involved face-scanning identity verification for the service OfferUp. I wonder if the court would conduct the constitutional analysis differently if the defendant argued it had to engage with biometric information in order to comply with a different law, like the AADC?]

The court properly notes that this was only a motion to dismiss; defendants could still win later. Yet, this ruling highlights a few key issues:

1. If California requires age assurance and Illinois bans the primary methods of age assurance, there may be an inter-state conflict of laws that ought to support a Dormant Commerce Clause challenge. Plus, other states beyond Illinois have adopted their own unique biometric privacy laws, so interstate businesses are going to run into a state patchwork problem where it may be difficult or impossible to comply with all of the different laws.

2. More states are imposing age assurance/age verification requirements, including Utah and likely Arkansas. Often, like the CA AADC, those laws don’t specify how the assurance/verification should be done, leaving it to businesses to figure it out. But the legislatures’ silence on the process truly reflects their ignorance–the legislatures have no idea what technology will work to satisfy their requirements. It seems obvious that legislatures shouldn’t adopt requirements when they don’t know if and how they can be satisfied–or if satisfying the law will cause a different legal violation. Adopting a requirement that may be unfulfillable is legislative malpractice and ought to be evidence that the legislature lacked a rational basis for the law because they didn’t do even minimal diligence.

3. The clear tension between the CA AADC and biometric privacy is another indicator that the CA legislature lied to the public when it claimed the law would enhance children’s privacy.

4. I remain shocked by how many privacy policy experts and lawyers remain publicly quiet about age verification laws, or even tacitly support them, despite the OBVIOUS and SIGNIFICANT privacy problems they create. If you care about privacy, you should be extremely worried about the tsunami of age verification requirements being embraced around the country/globe. The invasiveness of those requirements could overwhelm and functionally moot most other efforts to protect consumer privacy.

5. Mandatory online age verification laws were universally struck down as unconstitutional in the 1990s and early 2000s. Legislatures are adopting them anyway, essentially ignoring the significant adverse caselaw. We are about to have a high-stakes society-wide reconciliation about this tension. Are online age verification requirements still unconstitutional 25 years later, or has something changed in the interim that makes them newly constitutional? The answer to that question will have an enormous impact on the future of the Internet. If the age verification requirements are now constitutional despite the legacy caselaw, legislatures will ensure that we are exposed to major privacy invasions everywhere we go on the Internet–and the countermoves of consumers and businesses will radically reshape the Internet, almost certainly for the worse.

Reposted with permission from Eric Goldman’s Technology & Marketing Law Blog.

Filed Under: aadc, ab 2273, age assurance, age verification, biometric, biometric privacy, bipa, california, illinois, privacy
Companies: binance, jumio