age verification – Techdirt

Porn Is Protected Speech. Trump’s New Presidency Will Test That Sentiment. The Courts Can Uphold It.

from the we-are-in-for-a-rough-one dept

The die is cast. Donald Trump is heading back to the White House – a remarkable victory. But a lot of people who work in the adult entertainment industry are understandably scared. From the concerns for LGBTQ+ rights under the new Trump presidency to access to reproductive care at a state and national level, the next four years will be a significant challenge.

While those are all valid concerns that I share, it is the specter of the Heritage Foundation’s Project 2025 agenda that frightens me most. Previously, I’ve written across various outlets, including Techdirt, to address the “masculine policy” that Trump, his new vice president, Sen. J.D. Vance of Ohio, and their allies envision to “make America great again.” Kevin Roberts, the president of the Heritage Foundation and the de facto head of Project 2025, a so-called “presidential transition project,” laid out the incoming administration’s position on key culture war issues, such as access to online porn.

Roberts wrote in Mandate for Leadership: The Conservative Promise, Project 2025’s incendiary policy treatise of nearly 1,000 pages, that his camp believes pornography should be outlawed and stripped of First Amendment protection, and that “pornographers” should be imprisoned. Some folks have characterized Roberts’ words as simply rhetoric, but the past twelve months have revealed a coordinated effort to significantly claw back the rights of all sex workers and adult industry firms.

This time around, Donald Trump has surrounded himself with outspoken Christian nationalists who want to demonize and then criminalize sexual expression that is otherwise protected by the First Amendment.

Russell Vought, one of the central architects of Project 2025, was caught on hidden camera a few months ago confirming that efforts to ban porn will go through a so-called “back door”: a patchwork of state-level age verification laws, combined with efforts in a newly GOP-controlled Senate. In addition, Vance has supported a porn ban. And it was during Donald Trump’s last term that FOSTA-SESTA, the monstrosity that decimated legal sex work on the internet, came to fruition. Imagine what will advance under a second Trump term.

Expect to see a renewed effort to advance the Kids Online Safety Act, or a beefier version of the bill. The current form of the bill, though supposedly reformed with the input of key LGBTQ+ groups, would make design codes the law of the land in an affront to years of case law. As we’ve seen in California, age-appropriate design mandates rarely hold up under strict scrutiny. But, if the history of FOSTA-SESTA is any guide, the Kids Online Safety Act in any form will be a legal flashpoint.

For example, when the Woodhull Freedom Foundation and other civil society organizations sued to have FOSTA declared unconstitutional, the appeals court in that case, though upholding the law, acknowledged its breadth and read it narrowly so that it addresses actual cases of online trafficking while respecting free speech rights. And it will be up to the courts to hold a Trump presidency accountable for any unilateral action taken against legally operating pornography platforms.

The conservative-leaning U.S. Supreme Court is, truly, the only check and balance on key freedom of speech issues over the next four years. And it begins in January. Oral arguments in Free Speech Coalition et al v. Paxton are scheduled for January 15, 2025. The American Civil Liberties Union took up the case because of the expansive First Amendment implications of age verification laws like Texas House Bill (HB) 1181, which specifically targets online adult website operators with requirements to verify the age of users visiting from Texas IP addresses. Existing case law suggests that a law like HB 1181 is unconstitutional because it clashes with prior rulings.

If the Free Speech Coalition is successful, other age verification laws that specifically target porn websites and require users to submit ID cards or other forms of identity verification would likewise be rendered unconstitutional.

A win at SCOTUS for online speech could set the tone for a successful series of legal victories during Trump’s imperial presidency. That is all we can hope for, right?

Michael McGrady covers the tech and legal sides of the online porn business.

Filed Under: 1st amendment, age verification, donald trump, free speech, jd vance, kosa, porn, project 2025

Appeals Court: Utah’s Age Verification Is Currently Unchallengeable Because It’s A ‘Bounty’ Law

from the bounty-laws-suck dept

A year ago, we wrote about how a challenge brought by the Free Speech Coalition against Utah’s (obviously unconstitutional) age verification law couldn’t go forward because the district court noted (regrettably) that the structure of the law prevented FSC from challenging it before it went into effect.

The issue is that it’s a “bounty” law, in which the enforcement is not done by the state’s Attorney General or some other official, but through a private right of action. That means that any random person can sue someone who they believe is violating the law, and (if they’re right) receive a reward (bounty) down the line. I’m sure that won’t encourage all sorts of frivolous lawsuits from people looking to get free money and/or punish others, will it? These kinds of laws are increasingly popular across the political spectrum. Texas used it in its big anti-abortion law, while California did the same for its gun law.

These approaches are cynical and, well, bad. They’re basically vigilante laws, allowing citizens to sue just about anyone they think is violating the law, even if the violation has no impact whatsoever on the person suing. It’s just a recipe for creating a flood of often frivolous lawsuits. All to “own the libs/MAGA” depending on your state of choice.

One other “feature” of these kinds of bounty laws, though, is that they’re harder to challenge, and possibly impossible to challenge before they go into effect. There’s some wonkiness in how challenges to laws work, which was highlighted in the Supreme Court’s Moody decision earlier this year. That discussion focused on the difference between a “facial” challenge and an “as-applied” challenge. Without going too deep into the weeds, a facial challenge says “we’re challenging this whole law as completely unconstitutional with no redeeming value,” while an as-applied challenge targets the law as it is applied to a particular party.

Obviously, for the latter, you have to wait for the law to go into effect. Some might argue that this should be fine, but for many of these laws, there are very serious costs and decisions involved in trying to get into compliance. So if you can’t challenge them until they’re applied, many people and companies may have to waste a shit ton of money complying with a law that is probably unconstitutional, but which they can’t challenge until they’ve already paid all those costs.

But, the way that you bring a facial challenge is that you sue the government official who would be enforcing the law. In lots of cases, that’s the Attorney General.

You may have already caught on to where this is heading: if the law is “enforced” by a private right of action where random citizens get to bring a lawsuit, there’s no government official to challenge. Neat! Or, no, not neat. The other thing. Crazy. Bad.

So, yes, the Free Speech Coalition challenged Utah’s age verification law (just as it’s challenged other age verification laws elsewhere). But Utah’s age verification is a bounty law. There’s no enforcement by the state, just by random dipshits who want to “own the libs / try to get rich” or something. Thus, the lower court rejected the lawsuit, saying that FSC has no one to sue. At least the district court judge realized the situation sucked and explained that his hands were basically tied.

Free Speech Coalition appealed and, well, the 10th Circuit said, “yeah, hey, sorry, nothing we can do here.”

In sum, the Attorney General does not enforce or give effect to the Act and thus cannot be named as a defendant in this case under the Ex parte Young exception to Eleventh Amendment immunity. And because both defendants are immune from suit, we affirm the district court’s dismissal order without reaching the issues of ripeness and constitutional standing.

Judge Phillips has a dissent (in part) saying that, yes, Utah’s Attorney General can’t be sued to challenge this law, but he believes FSC should be able to sue the Commissioner of the Utah Department of Public Safety, who has some control over the age verification law:

As I see it, the Commissioner has a sufficient connection with SB 287’s enforcement to be sued under Ex parte Young’s exception to sovereign immunity. In Ex parte Young, the Court clarified that a state “officer must have some connection with the enforcement of the [challenged] act” to be exempt from sovereign immunity….

Here, the Commissioner gives effect to SB 287 through his oversight of the mDL program, which, pursuant to Utah’s Driver Licensing Act, directs the Driver License Division to “establish a process and system for an individual to obtain an electronic license certificate or identification card.”

To me, the dissent is compelling, but… it’s also the dissent.

Of course, as we speak, FSC has another case before the Supreme Court regarding the age verification law in Texas. If the Supreme Court follows its own historical precedent (a big fucking if with this crew of Justices, I know), then we could have a ruling next year making it clear that age verification laws are unconstitutional. That would be useful, but it would still be unclear who FSC could sue in Utah to get its law off the books.

Filed Under: 10th circuit, age verification, bounty law, facial challenge, immunity, standing, utah
Companies: free speech coalition

Heritage Foundation Admits KOSA Will Be Useful For Removing Pro-Abortion Content… If Trump Wins

from the saying-the-quiet-part-outloud dept

It’s no secret that the Trump-administration-in-waiting at the Heritage Foundation supports KOSA because it thinks the bill will be useful in achieving some of the most extreme goals of Project 2025, a project Heritage created. Last year they came out and said that they supported KOSA because “keeping trans content away from children is protecting kids.”


With KOSA stalled out in the House, as many Republicans have rightly realized that it makes no sense and can be used to censor content they might support as well as content they don’t, the Heritage Foundation has kicked off a new push to flip House Republicans. This comes the same week that supporters of KOSA brought a bunch of misguided parents to the Hill to push for the bill under the false premise that it would protect children. It won’t.

One of the things Heritage is passing around is a “myth vs. fact” sheet so batshit crazy that four different people in DC sent me copies on Friday just to point that out. I don’t have the time or patience to go through all of the nonsense in the document, but I want to call out a few things.

Heritage knows that KOSA can be used to suppress abortion info

Last year we wrote about the potential for KOSA to be used to suppress abortion info, and received some angry emails from Democrats who insisted that the bill was carefully written to avoid that. Heritage, though, makes it clear in this document that, if President Trump wins, they fully expect to be able to twist KOSA to silence pro-abortion content.

In a section pushing back on a claim that Democrats could use the “Kids Online Safety Council” created in the bill to push for pro-choice messaging, they say that this is “the status quo,” but that as long as Trump wins, they’ll get to use the same mechanism to put anti-abortion people in control of the council:

A Republican administration could fill the council with representatives who share pro-life values.

In other words, they know that whichever party is in the White House gets to control the council that will determine what content is considered safe for kids and which is not. That should automatically raise concerns for everyone, as it means whichever party they dislike, if in power, will have tremendous sway over what content will be allowed online.

Heritage knows that “online bullying and harassment” are too broadly defined, but wants you to ignore that

Responding to the very real concern that KOSA doesn’t clearly define “online bullying” and “harassment of a minor,” meaning it would lead to “subjective interpretation and dubious claims,” Heritage jumps into a word salad that never actually responds to that concern beyond saying “but bullying is, like, really bad.”

The inclusion of “online bullying and harassment of a minor” is deliberate phrasing due to the indisputable impact of these behaviors on children and teens. According to a 2022 Pew survey, nearly half of American teens (46%) experienced bullying online. Multiple academic studies cited by the Cyberbullying Research Center and other research aggregators indicate that online bullying and harassment is related to a number of adverse psychological and physiological outcomes in young people, such as poor self-esteem, suicidal ideation, anger, substance use, and delinquency. Seventy-four percent of teens said in the same 2022 Pew survey that social media platforms aren’t doing enough to prevent digital bullying on their services.

The claim that the inclusion of “online bullying” and “harassment” could be weaponized by the FTC or attorneys general elides the legitimate, deleterious impact of online bullying and harassment on young people. These are genuine problems that impact millions of American minors. Overly broad interpretations of digital “bullying” and “harassment” can be prevented by clearly defining both terms. These definitions, plainly articulated and scoped, could limit the legislation’s reach to actions that legitimately threaten the physical safety and mental health of American youth.

Except, um, they’re not clearly defined, not clearly scoped, and are wide open to abuse.

Furthermore, if you actually look at the Pew Survey from 2022, it’s not quite as horrifying as Heritage makes it out to be. That 46% of kids who experienced “bullying” is actually… mostly kids who experienced “name calling.” To be honest, I’m kind of surprised the number is not higher. Kids in school get called names all the time and did so prior to the internet as well. We don’t need a law to deal with that.

Indeed, more serious forms of bullying, such as anything involving explicit images, are way further down the list.


Indeed, this study kinda makes the point that Heritage is trying to deny: that “cyberbullying” is vague and not well-defined, and people use it to cover everything from name-calling to sharing of explicit imagery or threats. Name-calling can be an issue, but it’s not one that the federal government needs to be involved in. It’s the type of issue for parents and schools to deal with locally.

Later in the document, Heritage offers up its own definitions of “online bullying” and “harassment” that it hopes the House will add. Notably, those definitions, which require that the activity happen “consistently and pervasively,” do not at all match up with what the Pew study found and reported. Heritage presents no evidence about how widespread activity meeting its own definitions actually is.

Heritage knows that KOSA will lead to protected speech being removed, but claims it’s okay because platforms already moderate

This part is Heritage (1) not understanding the First Amendment, and (2) telling on themselves. Responding to the claim that KOSA will have a chilling effect on free speech, encouraging platforms to remove certain disfavored content, they admit:

Big Tech platforms already censor and suppress conservative views on a massive scale. KOSA wouldn’t cause that but it would oblige platforms to take kids’ safety into account.

So, first of all, no, “Big Tech platforms” do not “already censor and suppress conservative views on a massive scale.” That’s a myth. It has been debunked so many times. Indeed, over and over, what has been found is that the platforms bend over backwards to let conservatives break the rules without punishment, precisely to avoid the false claim that they censor conservative viewpoints.

But more to the point, this talking point does not actually respond to the claim. The fact that social media sites already have their own moderation rules and policies and enforcement is an entirely different thing than having to craft policies to comply with a law to “protect the children.”

This is, in effect, Heritage admitting that KOSA violates the First Amendment, but saying “it’s fine because social media already moderates.” That’s a very confused understanding of the First Amendment. The First Amendment allows social media companies to moderate however they see fit. If they are forced to moderate in order to comply with a law, that law violates the First Amendment. So here you have Heritage saying “it’s okay to violate the First Amendment, because of this other thing which isn’t even happening, and wouldn’t violate the First Amendment if it did happen.”

Either the people at Heritage who wrote this don’t understand anything about the First Amendment or they’re just hoping the people they send this to are too dumb to understand the First Amendment.

Heritage thinks age verification for social media is fine, because of the MPAA rating system

Just to make it doubly clear that Heritage has no fucking clue about the First Amendment, in a section defending age verification (hilariously right after they claim KOSA doesn’t require age verification), they say that age verification is totally constitutional. That’s wrong. The Supreme Court ruled on this 20 years ago in Ashcroft v. ACLU.

Then they claim that the MPAA rating system proves it’s legal:

Age verification reliably mitigates such harms in analog contexts like bars, adult venues, R-rated films, and online gambling.

Of course, only one of those is really about speech: movies. And, notably, the MPAA rating system is totally voluntary and not backed up by law. That’s because everyone knows if it was backed up by law, it would be unconstitutional.

There’s another big Supreme Court case on that question. California passed a law mandating similar age ratings for video games, and the Supreme Court threw it out as unconstitutional in Brown v. Entertainment Merchants Association back in 2011.

You’d think Heritage would be familiar with that case, given that it was written by their hero Justice Scalia. Justice Alito wrote a concurrence in that case, in which he went on one of his preferred “history trips” insisting that you can’t regulate access to violence because there is no long history of the US censoring violent content.

The same would be true of basically every category of content KOSA looks to restrict.

In the end, Heritage is trying to walk a very fine line here. They’re trying to signal to the GOP that this bill is still useful for the kinds of culture war nonsense they want to propagate, silencing LGBTQ+ and pro-abortion content. But they can’t say that part out loud. So instead the document makes it clear that “wink-wink, nudge-nudge, if Trump wins, we get our people in there to define this stuff.”

All this document really shows is why no Democrat should ever support KOSA. And any Republican who can read between the lines should see why they should be equally worried about this bill in the hands of a Democratic administration.

No matter who is in power, KOSA is a dangerous, likely unconstitutional attack on free expression.

Filed Under: 1st amendment, abortion, age verification, harassment, kosa, online bullying
Companies: heritage foundation

Utah’s ‘Protect The Kids Online!’ Law Rejected By Court

from the utah-does-it-again dept

Over the last few years, politicians in Utah have been itching to pass terrible internet legislation. Some of you may forget that in the earlier part of the century, Utah became somewhat famous for passing absolutely terrible internet laws that the courts then had to clean up. In the last few years, it’s felt like other states have passed Utah by, and maybe its lawmakers were getting a bit jealous about losing their “we pass batshit crazy unconstitutional internet laws” crown.

So, two years ago, they started pushing a new round of such laws. Even as he was slamming social media as dangerous and evil, Utah Governor Spencer Cox proudly signed the new law, streaming the signing on all the social media sites he insisted were dangerous. When Utah was sued by NetChoice, the state realized that the original law was going to get laughed out of court and asked for a do-over, promising to repeal and replace the law with something better. The new law changed basically nothing, though, and an updated lawsuit (again by NetChoice) was filed.

The law required social media companies to engage in “age assurance” (which is just a friendlier name for age verification, but still a privacy nightmare) and then restrict access to certain types of content and features for “minor accounts.”

Cox also somewhat famously got into a fight on ExTwitter with First Amendment lawyer Ari Cohn. When Cohn pointed out that the law clearly violates the First Amendment, Cox insisted: “Can’t wait to fight this lawsuit. You are wrong and I’m excited to prove it.” When Cohn continued to point out the law’s flaws, Cox responded “See you in court.”

[Image: the Twitter exchange between Cohn and Cox described above, with Cox concluding “see you in court.”]

In case you’re wondering how the lawsuit is going, last night Ari got to post an update:

[Image: Ari Cohn quote-tweeting Cox’s “See you in court” tweet, saying “ope,” with a screenshot of the conclusion from the court enjoining the law as unconstitutional.]

The law is enjoined. The court found it to likely be unconstitutional, just as Ari and plenty of other First Amendment experts expected. This case has been a bit of a roller coaster, though. A month and a half ago, the court said that Section 230 preemption did not apply to the case. The analysis on that made no sense. As we just saw, a court in Texas threw out a very similar law and said that since it tried to limit how sites could moderate content, it was preempted by Section 230. But, for a bunch of dumb reasons, the judge here, Robert Shelby, argued that the law wasn’t actually trying to impact content moderation (even though it clearly was).

But, that was only part of the case. The latest ruling found that the law almost certainly violates the First Amendment anyway:

NetChoice’s argument is persuasive. As a preliminary matter, there is no dispute the Act implicates social media companies’ First Amendment rights. The speech at issue in this case— the speech social media companies engage in when they make decisions about how to construct and operate their platforms—is protected speech. The Supreme Court has long held that “[a]n entity ‘exercis[ing] editorial discretion in the selection and presentation’ of content is ‘engage[d] in speech activity’” protected by the First Amendment. And this July, in Moody v. NetChoice, LLC, the Court affirmed these First Amendment principles “do not go on leave when social media are involved.” Indeed, the Court reasoned that in “making millions of . . . decisions each day” about “what third-party speech to display and how to display it,” social media companies “produce their own distinctive compilations of expression.”

Furthermore, following on the Supreme Court’s ruling earlier this year in Moody about whether or not the entire law can be struck down on a “facial” challenge, the court says “yes” (this issue has recently limited similar rulings in Texas and California):

NetChoice has shown it is substantially likely to succeed on its claim the Act has “no constitutionally permissible application” because it imposes content-based restrictions on social media companies’ speech, such restrictions require Defendants to show the Act satisfies strict scrutiny, and Defendants have failed to do so.

Utah tries to argue that this law is not about speech and content, but rather about conduct and “structure,” as California did in challenges to its “kids code” law. The court is not buying it:

Defendants respond that the Definition contemplates a social media service’s “structure, not subject matter.” However, Defendants’ argument emphasizes the elements of the Central Coverage Definition that relate to “registering accounts, connecting accounts, [and] displaying user-generated content” while ignoring the “interact socially” requirement. And unlike the premises-based distinction at issue in City of Austin, the social interaction-based distinction does not appear designed to inform the application of otherwise content-neutral restrictions. It is a distinction that singles out social media companies based on the “social” subject matter “of the material [they] disseminate[].” Or as Defendants put it, companies offering services “where interactive, immersive, social interaction is the whole point.”

The court notes that Utah seems to misunderstand the issue, and finds the idea that this law is content neutral to be laughable:

Defendants also respond that the Central Coverage Definition is content neutral because it does not prevent “minor account holders and other users they connect with [from] discuss[ing] any topic they wish.” But in this respect, Defendants appear to misunderstand the essential nature of NetChoice’s position. The foundation of NetChoice’s First Amendment challenge is not that the Central Coverage Definition restricts minor social media users’ ability to, for example, share political opinions. Rather, the focus of NetChoice’s challenge is that the Central Coverage Definition restricts social media companies’ abilities to collage user-generated speech into their “own distinctive compilation[s] of expression.”

Moreover, because NetChoice has shown the Central Coverage Definition facially distinguishes between “social” speech and other forms of speech, it is substantially likely the Definition is content based and the court need not consider whether NetChoice has “point[ed] to any message with which the State has expressed disagreement through enactment of the Act.”

Given all that, strict scrutiny applies, and there’s no way this law passes strict scrutiny. The first prong of the test is whether there’s a compelling state interest in passing such a law. And even though it’s about the moral panic over kids on the internet, the court says there’s a higher bar here. Because we’ve done this before, with California trying to regulate video games, a law the Supreme Court struck down back in 2011:

To satisfy this exacting standard, Defendants must “specifically identify an ‘actual problem’ in need of solving.” In Brown v. Entertainment Merchants Association, for example, the Supreme Court held California failed to demonstrate a compelling government interest in protecting minors from violent video games because it lacked evidence showing a causal “connection between exposure to violent video games and harmful effects on children.” Reviewing psychological studies California cited in defense of its position, the Court reasoned research “show[ed] at best some correlation between exposure to violent entertainment” and “real-world effects.” This “ambiguous proof” did not establish violent videogames were such a problem that it was appropriate for California to infringe on its citizens’ First Amendment rights. Likewise, the Court rejected the notion that California had a compelling interest in “aiding parental authority.” The Court reasoned the state’s assertion ran contrary to the “rule that ‘only in relatively narrow and well-defined circumstances may government bar public dissemination of protected materials to [minors].’”

While there’s lots of screaming and yelling about how social media is bad for kids’ mental health, as we directly told Governor Cox, the evidence just doesn’t support the claim. The court seems to recognize that the claims are a lot of hot air as well. Indeed, Utah submitted the Surgeon General’s report as “proof,” which apparently they didn’t even read. As we noted, contrary to the media reporting on that report, it contained a very nuanced analysis that does not show any causal harms to kids from social media.

The judge absolutely noticed that.

First, though the court is sensitive to the mental health challenges many young people face, Defendants have not provided evidence establishing a clear, causal relationship between minors’ social media use and negative mental health impacts. It may very well be the case, as Defendants allege, that social media use is associated with serious mental health concerns including depression, anxiety, eating disorders, poor sleep, online harassment, low self-esteem, feelings of exclusion, and attention issues. But the record before the court contains only one report to that effect, and that report—a 2023 United States Surgeon General Advisory titled Social Media and Youth Mental Health—offers a much more nuanced view of the link between social media use and negative mental health impacts than that advanced by Defendants. For example, the Advisory affirms there are “ample indicators that social media can . . . have a profound risk of harm to the mental health and well-being of children and adolescents,” while emphasizing “robust independent safety analyses of the impact of social media on youth have not yet been conducted.” Likewise, the Advisory observes there is “broad agreement among the scientific community that social media has the potential to both benefit and harm children and adolescents,” depending on “their individual strengths and vulnerabilities, and . . . cultural, historical, and socio-economic factors.” The Advisory suggests social media can benefit minors by “providing positive community and connection with others who share identities, abilities, and interest,” “provid[ing] access to important information and creat[ing] a space for self-expression,” “promoting help-seeking behaviors[,] and serving as a gateway to initiating mental health care.”

The court is also not at all impressed by a declaration Utah provided by Jean Twenge, who is Jonathan Haidt’s partner-in-crime in pushing the baseless moral panic narrative about kids and social media.

Moreover, a review of Dr. Twenge’s Declaration suggests the majority of the reports she cites show only a correlative relationship between social media use and negative mental health impacts. Insofar as those reports support a causal relationship, Dr. Twenge’s Declaration suggests the nature of that relationship is limited to certain populations, such as teen girls, or certain mental health concerns, such as body image.

Then the court points out (thank you!) that kids have First Amendment rights too:

Second, Defendants’ position that the Act serves to protect uninformed minors from the “risks involved in providing personal information to social media companies and other users” ignores the basic First Amendment principle that “minors are entitled to a significant measure of First Amendment Protection.” The personal information a minor might choose to share on a social media service—the content they generate—is fundamentally their speech. And the Defendants may not justify an intrusion on the First Amendment rights of NetChoice’s members with, what amounts to, an intrusion on the constitutional rights of its members’ users…

Furthermore, Utah fails to meet the second prong of strict scrutiny, that the law be “narrowly tailored.” Because it’s not:

To begin, Defendants have not shown the Act is the least restrictive option for the State to accomplish its goals because they have not shown existing parental controls are an inadequate alternative to the Act. While Defendants present evidence suggesting parental controls are not in widespread use, their evidence does not establish parental tools are deficient. It only demonstrates parents are unaware of parental controls, do not know how to use parental controls, or simply do not care to use parental controls. Moreover, Defendants do not indicate the State has tried, or even considered, promoting “the diverse supervisory technologies that are widely available” as an alternative to the Act. The court is not unaware of young people’s technological prowess and potential to circumvent parental controls. But parents “control[] whether their minor children have access to Internet-connected devices in the first place,” and Defendants have not shown minors are so capable of evading parental controls that they are an insufficient alternative to the State infringing on protected speech.

Also, this:

Defendants do not offer any evidence that requiring social media companies to compel minors to push “play,” hit “next,” and log in for updates will meaningfully reduce the amount of time they spend on social media platforms. Nor do Defendants offer any evidence that these specific measures will alter the status quo to such an extent that mental health outcomes will improve and personal privacy risks will decrease

The court also points out that the law targets only social media, not streaming or sports apps, but if the features at issue were truly harmful, then the law would have to target all of those other apps as well. Utah tried to claim that social media is somehow special and different from those other apps, but the judge notes that the state provides no actual evidence in support of this claim.

But Defendants simply do not offer any evidence to support this distinction, and they only compare social media services to “entertainment services.” They do not account for the wider universe of platforms that utilize the features they take issue with, such as news sites and search engines. Accordingly, the Act’s regulatory scope “raises seriously doubts” about whether the Act actually advances the State’s purported interests.

The court also calls out that NetChoice member Dreamwidth, run by the trust & safety expert known best online as @rahaeli, proves how stupid and mistargeted this law is:

Finally, Defendants have not shown the Act is not seriously overinclusive, restricting more constitutionally protected speech than necessary to achieve the State’s goals. Specifically, Defendants have not identified why the Act’s scope is not constrained to social media platforms with significant populations of minor users, or social media platforms that use the addictive features fundamental to Defendants’ well-being and privacy concerns. NetChoice member Dreamwidth, “an open source social networking, content management, and personal publishing website,” provides a useful illustration of this disconnect. Although Dreamwidth fits the Central Coverage Definition’s concept of a “social media service,” Dreamwidth is distinguishable in form and purpose from the likes of traditional social media platforms—say, Facebook and X. Additionally, Dreamwidth does not actively promote its service to minors and does not use features such as seamless pagination and push notification.

The court then also notes that if the law went into effect, companies would face irreparable injury, given the potential fines in the law.

This harm is particularly concerning given the high cost of violating the Act—$2,500 per offense—and the State’s failure to promulgate administrative rules enabling social media companies to avail themselves of the Act’s safe harbor provision before it takes effect on October 1, 2024.

Some users also sued to block the law, but the court rejected that request because those plaintiffs have no clear redressable injury yet, and thus no standing to sue at this point. That could have changed once the law started to be enforced, but thanks to the injunction in the NetChoice portion of the case, the law is not going into effect.

Utah will undoubtedly waste more taxpayer money and appeal the case. But, so far, these laws keep failing in court across the country. And that’s great to see. Kids have First Amendment rights too, and one day, our lawmakers should start to recognize that fact.

Filed Under: 1st amendment, age assurance, age verification, content moderation, kids, protect the children, robert shelby, social media, utah
Companies: netchoice

Aussie Gov’t: Age Verification Went From ‘Privacy Nightmare’ To Mandatory In A Year

from the topsy-turvy-down-under dept

Over the last few years, it’s felt like the age verification debate has gotten progressively stupider. People keep insisting that it must be necessary, and when others point out that there are serious privacy and security concerns that will likely make things worse, not better, we’re told that we have to do it anyway.

Let’s head down under for just one example. Almost exactly a year ago, the Australian government released a report on age verification, noting that the technology was simply a privacy and security nightmare. At the time, the government felt that mandating such a technology was too dangerous:

“It is clear from the roadmap at present, each type of age verification or age assurance technology comes with its own privacy, security, effectiveness or implementation issues,” the government’s response to the roadmap said.

The technology must work effectively without circumvention, must be able to be applied to pornography hosted outside Australia, and not introduce the risk to personal information for adults who choose to access legal pornography, the government stated.

“The roadmap makes clear that a decision to mandate age assurance is not yet ready to be taken.”

That’s why we were a bit surprised earlier this year when the government announced a plan to run a pilot program for age verification. However, as we pointed out at the time, just hours after the announcement of that pilot program, it was revealed that a mandated verification database used for bars and clubs in Australia was breached, revealing sensitive data on over 1 million people.

You would think that might make the government pause and think more deeply about this. But apparently that’s not the way they work down under. The government is now exploring plans to officially age-gate social media.

The federal government could soon have the power to ban children from social media platforms, promising legislation to impose an age limit before the next election.

But the government will not reveal any age limit for social media until a trial of age-verification technology is complete.

The article is full of extremely silly quotes:

Prime Minister Anthony Albanese said social media was taking children away from real-life experiences with friends and family.

“Parents are worried sick about this,” he said.

“We know they’re working without a map. No generation has faced this challenge before.

“The safety and mental and physical health of our young people is paramount.

“Parents want their kids off their phones and on the footy field. So do I.”

This is ridiculous on all sorts of levels. Many families stay in touch via social media, so taking kids away from it may actually cut off their ability to connect with “friends and family.”

Yes, there are cases where some kids cannot put down phones and where obvious issues must be dealt with, as we’ve discussed before. But the idea that this is a universal, across-the-board problem is nonsense.

Hell, a recent study found that more people appeared to be heading into the great outdoors because they saw it glorified on social media, to the point that some are now worried the outdoors is being overly glorified there.

Again, there’s a lot of nuance in the research suggesting this is not a simple issue of “if we cut kids off of social media, they’ll spend more time outside.” Some kids use social media to build up their social life, which can lead to more outdoor activity, while some don’t. It’s not nearly as simple as saying that they’ll magically go outdoors and play sports if they don’t have social media.

Then you combine that with the fact that the Australian government knows that age verification is inherently unsafe, and this whole plan seems especially dangerous.

But, of course, politicians love to play into the latest moral panic.

South Australian Premier Peter Malinauskas said getting kids off social media required urgent leadership.

“The evidence shows early access to addictive social media is causing our kids harm,” he said.

“This is no different to cigarettes or alcohol. When a product or service hurts children, governments must act.”

Except it’s extremely different from cigarettes and alcohol, both of which are actually consumed by the body and introduce literal toxins into the bloodstream. Social media is speech. Speech can influence, but you can’t call it inherently a toxin, or inherently good or bad.

The statement that “addictive social media is causing our kids harm” is literally false. The evidence is way more nuanced, and there remain no studies showing an actual causal relationship here. As we’ve discussed at length (backed up by multiple studies), if anything the relationship may go the other way, with kids who are already dealing with mental health problems resorting to spending more time on social media because of failures by the government to provide resources to help.

In other words, this rush to ban social media for kids is, in effect, an attempt by government officials to cover up their own failures.

The government could be doing all sorts of things to actually help kids. It could invest in better digital literacy, training kids how to use the technology more appropriately. It could provide better mental health resources for people of all ages. It could provide more space and opportunities for kids to freely spend time outdoors. These are all good uses of the government’s powers that tackle the issues they claim matter here.

Surveilling kids and collecting private data on them which everyone knows will eventually leak, and then banning them from spaces that many, many kids have said make their lives and mental health better, seems unlikely to help.

Of course, it’s only at the very end of the article linked above that the reporters include a few quotes from academics pointing out that age verification could create privacy and security problems, and that such laws could backfire. But the article never even mentions that the claims made by politicians are also full of shit.

Filed Under: age verification, anthony albanese, australia, kids, mental health, moral panic, peter malinauskas, privacy, security, social media

Seventh Circuit Allows Indiana’s Controversial Age Verification Law, For Now

from the age-verification-is-still-unconstitutional dept

The U.S. Seventh Circuit Court of Appeals has allowed Indiana’s age verification law to go into effect — even as the Supreme Court has suggested a similar law in Texas might be unconstitutional. The panel handed down its ruling just weeks after the U.S. Supreme Court decided to take up a case challenging Texas’s nearly identical age verification law.

The high court just granted cert in that case, Free Speech Coalition et al v. Paxton. Free Speech Coalition (FSC), the trade group representing the adult content industry, sued Texas Attorney General Ken Paxton in a bid to block Texas’ HB 1181 law, which mandated age verification for adult content sites.

That law is quite similar to the one Indiana passed. In the Texas case, a split panel at the Fifth Circuit found HB 1181 to be constitutional, despite the Texas federal district court ruling that existing precedent made it clear that age verification mandates were unconstitutional. The Supreme Court agreed to review the Fifth Circuit’s decision allowing the law to go into effect, but it declined to block HB 1181 while the litigation plays out.

The ruling allowing Indiana’s law to go into effect is quite peculiar. FSC sued the state of Indiana to block enforcement of Senate Bill (SB) 17, its age verification law. Judge Richard L. Young of the Southern District of Indiana ruled SB 17 “facially” unconstitutional and issued a preliminary injunction for the plaintiffs, blocking the law from taking effect. That ruling followed many other rulings around the country rejecting age verification mandates as unconstitutional.

Indiana Attorney General Todd Rokita appealed the injunction to the Seventh Circuit. There, the majority pointed to the Supreme Court’s decision to let Texas HB 1181 remain in effect for the duration of FSC v. Paxton as justification for letting the Indiana law be enforced.

In other words, Indiana should be able to enforce its own law as well because SCOTUS is allowing Texas to enforce its law for now. The judges did this as a means to maintain “judicial efficiency.” They also put the case regarding the Indiana law on hold until the Supreme Court rules on Texas’ law.

While the judges agreed to stay the injunction against SB 17, Seventh Circuit Judge Ilana D. Rovner dissented in part. Judge Rovner wasn’t convinced by Indiana’s argument that the state had an interest in enforcing the law under the horrid precedent set by the Fifth Circuit when it found age verification rules specifically targeting porn websites to be constitutional. Judge Rovner characterized these types of laws as potentially “burdensome.”

Consider this portion of Judge Rovner’s dissenting opinion:

“[We] impose a cost on the businesses and individuals that have to comply with the Act, and curtail their First Amendment rights, based solely on an unreasoned stay denial even though the only court decision as to this Indiana statute held that the burden is unconstitutional. And such a precedent could have drastic consequences in a future case where the economic burden of a statute was even greater by subjecting the parties to that burden while awaiting the Supreme Court’s decision without ever considering the relative harms to the parties.”

All three of the Seventh Circuit judges – Judges Frank H. Easterbrook, Amy J. St. Eve, and Rovner – determined SB 17 to be “functionally identical” to HB 1181. And since HB 1181 is already being enforced and the Supreme Court allowed it to stay in force during the ongoing litigation, it was deemed fair to follow this ruling and allow SB 17 to go into force as well. Rovner does note it is troubling they granted the motion to allow SB 17 to be enforced without ever considering the harm an age verification mandate would have on the suing platforms and users.

“Here…the district court held that the statute was unconstitutional, and granted a preliminary injunction, enjoining it on First Amendment grounds and denying the motion to stay that injunction. The result, of course, is that the Indiana statute has never been in force, unlike the Texas statute. We have not yet had the opportunity to consider the appeal on the merits, and therefore, the current state in our case is that the plaintiffs have not been required to comply with the burdensome requirements of the Act.”

The Seventh Circuit declined to rule on the constitutionality of SB 17, unlike the Fifth Circuit in the case of Texas HB 1181. It only looked at whether or not the law could go into effect now or should be stayed.

Rovner rightly points out that the Supreme Court’s decision to grant cert in the Paxton case should cause some more careful thinking by the Seventh Circuit. It at least indicates that some at the Supreme Court feel the case in the Fifth was decided incorrectly.

One could as easily argue that the Court’s grant of certiorari signals a concern with the Fifth Circuit’s determination of constitutionality, and favors leaving the district court’s determination in place.

When reviewing these laws, it’s reasonable to think SCOTUS might believe the Fifth Circuit erred in using rational basis review (or, similarly, that it erred in how it applied that level of scrutiny). That would explain why it took the case. And thus, Rovner is correct that it’s a bit odd for the Seventh Circuit to effectively bless the Fifth Circuit’s approach at the very moment the Supreme Court has indicated it may have problems with it.

Rovner also points out that the majority’s decision in the Seventh Circuit claims to be in favor of keeping the “status quo,” but that makes no sense, given that Indiana’s law has never been in force, and this move puts it into force:

Here, in contrast, the district court held that the statute was unconstitutional, and granted a preliminary injunction, enjoining it on First Amendment grounds and denying the motion to stay that injunction. The result, of course, is that the Indiana statute has never been in force, unlike the Texas statute. We have not yet had the opportunity to consider the appeal on the merits, and therefore, the current state in our case is that the plaintiffs have not been required to comply with the burdensome requirements of the Act. If we were to alter that status quo, we should do so only by considering the stay on the merits and determining that a stay is appropriate under that analysis

Either way, for now the law is in effect, and Rokita can go after adult content sites for not making use of age verification while we wait for the Supreme Court to determine if the Fifth Circuit was correct in the first place.

Michael McGrady covers the legal and tech side of the online porn business, among other topics.

Filed Under: 5th circuit, 7th circuit, age verification, indiana, ken paxton, supreme court, texas, todd rokita
Companies: free speech coalition

Age Verification Laws Are Just A Path Towards A Full Ban On Porn, Proponent Admits

from the outing-themselves-as-the-censors-they-want-to-be dept

It’s never about the children. Supporters of age verification laws, book bans, drag show bans, and abortion bans always claim they’re doing these things to protect children. But it’s always just about themselves. They want to impose their morality on other adults. That’s all there is to it.

Abortion bans are just a way to strip women of bodily autonomy. If it was really about cherishing children and new lives, these same legislators wouldn’t be routinely stripping school lunch programs of funding, introducing onerous means testing to government aid programs, and generally treating children as a presumptive drain on society.

The same goes for book bans. They claim they want to prevent children from accessing inappropriate material. But you can only prevent children from accessing it by removing it entirely from public libraries, which means even adults will no longer be able to read these books.

The laws targeting drag shows aren’t about children. They’re about punishing certain people for being the way they are — people whose mere existence seems to be considered wholly unacceptable by bigots with far too much power.

The slew of age verification laws introduced in recent years are being shot down by courts almost as swiftly as they’re enacted. And for good reason. Age verification laws are unconstitutional. And they’re certainly not being enacted to prevent children from accessing porn.

Of course, none of the people pushing this kind of legislation will ever openly admit their reasons for doing so. But they will admit it to people they think are like-minded. All it takes is a tiny bit of subterfuge to tease these admissions out of activist groups that want to control what content adults have access to — something that’s barely hidden by their “for the children” facade.

As Shawn Musgrave reports for The Intercept, a couple of people managed to coax this admission out of a former Trump official simply by pretending they were there to give his pet project a bunch of cash.

“I actually never talk about our porn agenda,” said Russell Vought, a former top Trump administration official, in late July. Vought was chatting with two men he thought were potential donors to his right-wing think tank, the Center for Renewing America.

For the last three years, Vought and the CRA have been pushing laws that require porn websites to verify their visitors are not minors, on the argument that children need to be protected from smut. Dozens of states have enacted or considered these “age verification laws,” many of them modeled on the CRA’s proposals.

[…]

But in a wide-ranging, covertly recorded conversation with two undercover operatives — a paid actor and a reporter for the British journalism nonprofit Centre for Climate Reporting — Vought let them in on a thinly veiled secret: These age verification laws are a pretext for restricting access to porn more broadly.

“Thinly veiled” is right. While it’s somewhat amusing Vought was taken in so easily and was immediately willing to say the quiet part loud when he thought cash was on the line, he’s made his antipathy towards porn exceedingly clear. As Musgrave notes in his article, Vought’s contribution to Project 2025 — a right-wing masturbatory fantasy masquerading as policy proposals should Trump take office again — almost immediately veers into the sort of territory normally only explored by dictators and autocrats who relied heavily on domestic surveillance, forced labor camps, and torture to rein in those who disagreed with their moral stances.

Pornography, manifested today in the omnipresent propagation of transgender ideology and sexualization of children, for instance, is not a political Gordian knot inextricably binding up disparate claims about free speech, property rights, sexual liberation, and child welfare. It has no claim to First Amendment protection. Its purveyors are child predators and misogynistic exploiters of women. Their product is as addictive as any illicit drug and as psychologically destructive as any crime. Pornography should be outlawed. The people who produce and distribute it should be imprisoned. Educators and public librarians who purvey it should be classed as registered sex offenders. And telecommunications and technology firms that facilitate its spread should be shuttered.

Perhaps the most surprising part of this paragraph (and, indeed, a lot of Vought’s contribution to Project 2025) is that it isn’t written in all caps with a “follow me on xTwitter” link attached. These are not the words of a hinged person. They are the opposite — the ravings of a man in desperate need of a competent re-hinging service.

And he’s wrong about everything in this paragraph, especially his assertion that pornography is not a First Amendment issue. It is. That’s why so many of these laws are getting rejected by federal courts. The rest is hyperbole that pretends to be bold, common sense assertion. I would like to hear more about the epidemic of porn overdoses that’s leaving children parentless and overloading our health system. And who can forget the recent killing sprees of the Sinaloa Porn Cartel, which led to federal intervention from the Mexican government?

But the most horrifying part is Vought’s desire to imprison people for producing porn and to convert librarians into registered sex offenders just because their libraries carry some content that personally offends his sensibilities.

These are the words and actions of people who strongly support fascism so long as they’re part of the ruling party. They don’t care about kids, America, democracy, or the Constitution. They want a nation of followers and the power to punish anyone who steps out of line. The Center for Renewing America is only one of several groups with the same ideology and the same censorial urges. These are dangerous people, but their ideas and policy proposals are now so common it’s almost impossible to classify them as “extremist.” There are a lot of Americans who would rather see the nation destroyed than have to, at minimum, tolerate people and ideas they don’t personally like. Their ugliness needs to be dragged out into the open as often as possible, if only to force them to confront the things they’ve actually said and done.

Filed Under: 1st amendment, age verification, censorship, for the children, free speech, porn ban, project 2025, russell vought
Companies: center for renewing america

Age-Gating Access To Online Porn Is Unconstitutional

from the we've-done-this-already dept

Texas is one of eight states that have enacted laws that force adults to prove their age before accessing porn sites. Soon it will try to persuade the Supreme Court that its law doesn’t violate the First Amendment.

Good luck with that.

These laws are unconstitutional: They deny adults the well-established right to access constitutionally protected speech.

Texas’ H.B. 1181 forces any website made up of one-third or more adult content to verify every visitor’s age. Some adult sites have responded to the law by shutting down their services in Texas. The Free Speech Coalition challenged the law on First Amendment grounds, arguing that mandatory age verification does more than keep minors away from porn — the law nannies adults as well, barring them from constitutionally protected speech.

The district court agreed with the challengers. Laws regulating speech because of its content (i.e., because it is sexually explicit) are presumed invalid. Under strict scrutiny, the state must show that its regulation is narrowly tailored to serve a compelling government interest. In other words, the government needs an exceptionally good reason to regulate, and it can’t regulate more speech than necessary.

The case will turn on what level of scrutiny applies. Protecting minors from obscene speech is a permissible state interest, as the Fifth Circuit stressed when it applied the lowest form of scrutiny — rational basis review — to uphold the law. But not all speech that is obscene to minors is obscene to adults. Judge Higginbotham, dissenting from the Fifth Circuit’s decision, pointed out that kids might have no right to watch certain scenes from Game of Thrones — but adults do.

In previous cases regulating minors’ access to explicit content, the Supreme Court applied strict scrutiny specifically because the laws restricted adult access to protected speech. Texas hopes to get around decades of precedent by arguing that there is no way that age verification “could reasonably chill adults’ willingness” to visit porn sites. If adults don’t care about age verification, Texas reasons, nothing in the law stops them from viewing sexually explicit material: No protected speech is regulated.

There’s just one problem: Adults do care about age verification.

H.B. 1181 bars age verification providers from retaining “identifying” information. But nothing in the law stops providers from sharing that same info, and people are rightly concerned about whether their private sexual desires will stay private. A leak revealing merely that you visited an adult site would be bad enough. Getting your personal Pornhub search history leaked along with your government ID is enough to make even the most shameless person consider changing their name and becoming a hermit.

Texas swears up and down that age verification tech is secure, but that doesn’t inspire confidence in anyone following cybersecurity news. Malware is out there. Data leaks happen.

A bored employee glancing at your driver’s license as you walk into the sex shop is not the same thing as submitting to a biometric face scan and algorithmic ID verification, by order of the government, before you can press play on a dirty video. Just thinking about it kills the mood, which may be part of the point.

Texas pretends there’s no difference between the bored bouncer and biometric scans, but if you knew the bouncer had loose lips and an encyclopedic, inhuman ability to remember every name and face that came through the door, well, you wouldn’t go there either.

Hand-waving away these differences is the kind of thing you only do if you’re highly ideologically motivated. But normal people are very reasonably concerned about whether their personal sexual preferences will be leaked to their boss, mother-in-law, or fellow citizens. Mandatory age verification turns people off of viewing porn entirely, and it chills their free expression.

Sexual preferences are private and sensitive; they’re exactly the type of thing you don’t want leaking. So, of course, sexual content is a particularly juicy target for would-be hackers and extortionists. People pay handsomely to keep “sextortion” quiet. If you’re worried about your privacy and you don’t trust the age verification software (you shouldn’t), you’re likely to avoid the risk up front. One adult site says only 6% of visitors go through age verification and that even fewer succeed. Thus the chilling effect: even though adult access to porn is technically legal, people are so afraid of having their ID and last watched video plastered across the internet that they stop watching in the first place.

If the Supreme Court recognizes this and applies strict scrutiny, it will ask whether less restrictive means could protect minors. Back in 2004, the Court tossed out COPA, a law requiring credit card verification to access sexually explicit materials, reasoning that blocking and filtering software would protect minors without burdening adult speech. Today’s filtering software is far more effective than what was available twenty years ago — as the district court found — and, notably, filtering software doesn’t scan adults’ faces.

Sex — a “subject of absorbing interest to mankind,” as one justice once put it — matters. Adults have the right to sexually explicit speech, free of the fear that their identifying information will be leaked or sent to the state. Texas can and should seek to protect kids without stoking that fear.

Santana Boulton is a legal fellow at TechFreedom and a Young Voices contributor. Her commentary has appeared in TechDirt. Follow her on X: @santanaboulton.

Filed Under: 1st amendment, age gating, age verification, free speech, ken paxton, porn, strict scrutiny, supreme court, texas
Companies: free speech coalition

Didn’t We Already Do This? Twenty Years After Supreme Court Rejected Age Verification Law, It Takes Up New Case

from the 5th-circuit-itching-for-another-smackdown dept

Just when you thought the internet was safe from the meddling minds of the Supreme Court, the Justices have decided to take another crack at reviewing whether or not a new set of state regulations of the internet violates the First Amendment. And this time, it has a “but won’t you think of the children online” element to it as well.

Just a day after wrapping up its last term and (thankfully) not destroying the internet with its NetChoice decisions, the Supreme Court released a new order list regarding petitions for cert and announced that it would be taking Free Speech Coalition’s challenge to Texas’ internet age verification law, giving it yet another chance to potentially screw up the internet (or, hopefully, to reinforce free speech rights).


If you haven’t been following this case, it’s an important one for the future of privacy and speech online, so let’s bring everyone up to speed.

Two decades ago, there was an early moral panic about kids on the internet, and Congress went nuts passing a variety of laws aiming to “protect the children online.” Two of the bigger attempts — the Communications Decency Act and the Child Online Protection Act — were dumped as unconstitutional in Reno v. ACLU and Ashcroft v. ACLU.

Among other things, the Reno case established that the First Amendment still applies in online scenarios (meaning governments can’t pass laws that suppress free speech online), and the Ashcroft case established that age-restricting access to content online was unconstitutional because it failed “strict scrutiny” (the standard a law must meet when it burdens speech). In large part, it failed strict scrutiny because it was not the “least restrictive means” of protecting children: it would likely block kids from content they had a First Amendment right to access while also blocking adults from content they had a right to access.

However, we’re deep in the midst of a very similar moral panic about “the kids online” these days, despite little actual evidence to support the fearmongering. Nonetheless, a ton of states have been passing all kinds of “protect the kids online” laws. This is across both Republican and Democrat-controlled states, so it’s hardly a partisan type of moral panic.

Multiple courts have (rightly) tossed these laws out as unconstitutional one after another, with many pointing to the decision in Ashcroft and noting that the Supreme Court has already decided this issue.

Many of the age verification laws (especially those in Republican-controlled states) have been focused specifically on adult content websites, saying those sites in particular are required to age gate. And while it makes sense that children should not have easy access to pornographic content, there are ways to limit such access without using problematic age verification technology, which puts privacy at risk and is not particularly effective. Indeed, just a couple weeks ago, an age verification vendor used by many internet companies was found to have leaked personal data on millions of people.

Allowing age verification laws online would do tremendous damage to the internet, to kids, and to everyone. It would create a regime where anonymity online would be effectively revoked, and people’s private data would be at risk any time they’re online. People keep pitching ideas for “privacy-protective age verification,” which is one of those concepts, like “safe backdoors to encryption,” that politicians seem to think is doable but in reality is impossible.

One of the many states that passed such a law was Texas, and, as in most other states (the only exceptions to date have been on procedural grounds, in states where a suit can’t be filed until someone takes action against a site for failing to age-gate), the district court quickly tossed out the law as obviously unconstitutional under the Ashcroft ruling.

But, just months later, the Fifth Circuit (as it has been known to do the past few years) decided that it could ignore Supreme Court precedent, overturn the lower court, and put the law back into effect. I wrote a big long post explaining the nutty thinking behind all this, but in effect, the Fifth Circuit decided that it didn’t have to follow Ashcroft because that case only dealt with “strict scrutiny,” and the judges on the Fifth Circuit believed a law like this need only survive rational basis review, and on that basis the law was fine.

Again, this bucked every possible precedent. And just last week, as yet another trial court, this time in Indiana, threw out a similar law, the judge there walked through all the many reasons the Fifth Circuit got things wrong (the Indiana court was not bound by the Fifth Circuit, but the state of Indiana had pointed to the Fifth’s ruling in support of its law).

Back in April, we explained why it was important for the Supreme Court to review the Fifth Circuit’s bizarre ruling, and with cert now granted, that review is exactly what we’re getting.

Of course, it’s anyone’s guess as to how the Supreme Court will rule, though there are a few signs that suggest it may use this to smack down the Fifth Circuit and remind everyone that Ashcroft was decided correctly. First, especially this past term, the Supreme Court has been aggressively smacking down the Fifth Circuit and its series of crazy rogue rulings. So it’s already somewhat primed to look skeptically at rulings coming out of the nation’s most ridiculous appeals court.

Second, if the Fifth’s reasoning wasn’t nutty, then there would be little to no reason to take the case. Again, the Court already handled nearly this very issue twenty years ago, and the Fifth Circuit is the first to say it can just ignore that ruling.

That said, any time the Supreme Court takes up an internet issue, you never quite know how it’s going to end up, especially given Justice Kagan’s own comment on herself and her colleagues that “these are not, like, the nine greatest experts on the internet.”

On top of that, any time you get into “for the children” moral panics, people who might otherwise be sensible seem to lose their minds. Hopefully, the Supreme Court takes a more sober approach to this case, but I recognize that “sober analysis” and this particular Supreme Court are not always things that go together.

Filed Under: 1st amendment, 5th circuit, age verification, free speech, supreme court
Companies: free speech coalition

Court To Indiana: Age Verification Laws Don’t Override The First Amendment

from the strict-scrutiny dept

We keep pointing out that, contrary to the uninformed opinion of lawmakers across both major parties, laws that require age verification are clearly unconstitutional*.

* Offer not valid in the 5th Circuit.

Such laws have been tossed out everywhere as unconstitutional, except in Texas (and even then, the district court got it right, and only the 5th Circuit is confused). And yet, we hear about another state passing an age verification law basically every week. And this isn’t a partisan/culture war thing, either. Red states, blue states, purple states: doesn’t matter. All seem to be exploring unconstitutional age verification laws.

Indiana came up with one last year, which targeted adult content sites specifically. And, yes, there are perfectly good arguments that kids should not have access to pornographic content. However, the Constitution does not allow for any such restriction to be done in a sloppy manner that is both ineffective at stopping kids and likely to block protected speech. And yet, that’s what every age-gating law does. The key point is that there are other ways to restrict kids’ access to porn, rather than age-gating everything. But they often involve this thing called parenting.

Thus, it’s little surprise that, following a legal challenge by the Free Speech Coalition, Indiana’s law has been put on hold by a court that recognizes the law is very likely unconstitutional.

The court starts out by highlighting that geolocating is an extraordinarily inexact science, which is a problem, given that the law requires adult content sites to determine when visitors are from Indiana and to age verify them.

But there is a problem: a computer’s IP address is not like a return address on an envelope because an IP address is not inherently tied to any location in the real world but consists of a unique string of numbers written by the Internet Service Provider for a large geographic area. (See id. ¶¶ 12–13). This means that when a user connects to a website, the website will only know the user is in a circle with a radius of 60 miles. (Id. ¶ 14). Thus, if a user near Springfield, Massachusetts, were to connect to a website, the user might be appearing to connect from neighboring New York, Connecticut, Rhode Island, New Hampshire, or Vermont. (Id.). And a user from Evansville, Indiana, may appear to be connecting from Illinois or Kentucky. The ability to determine where a user is connecting from is even weaker when using a phone with a large phone carrier such as Verizon with error margins up to 1,420 miles. (Id. ¶¶ 16, 19). Companies specializing in IP address geolocation explain the accuracy of determining someone’s state from their IP address is between 55% and 80%. (Id. ¶ 17). Internet Service Providers also continually change a user’s IP address over the course of the day, which can make a user appear from different states at random.
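To make that concrete, here is a minimal sketch (mine, not the court’s) of what a website operator actually gets back from a commercial IP geolocation lookup. It assumes MaxMind’s geoip2 Python library and a locally downloaded GeoLite2-City database; the example address is just an illustrative public IP. The accuracy radius the library reports is precisely the fuzzy circle the court is describing.

```python
# Minimal sketch of an IP-to-location lookup, assuming MaxMind's geoip2
# library and a locally downloaded GeoLite2-City database (both assumptions,
# not anything referenced in the ruling).
import geoip2.database

def guess_state(ip_address: str, db_path: str = "GeoLite2-City.mmdb") -> None:
    with geoip2.database.Reader(db_path) as reader:
        resp = reader.city(ip_address)
        # The database returns a best guess plus an accuracy radius (in km);
        # the user could be anywhere inside that circle, which is why a site
        # cannot reliably tell Evansville, Indiana from Kentucky or Illinois.
        state = resp.subdivisions.most_specific.name   # e.g. "Indiana", or None
        radius_km = resp.location.accuracy_radius      # e.g. 100
        print(f"Best guess: {state}, somewhere within ~{radius_km} km")

guess_state("8.8.8.8")  # illustrative public IP, not a real visitor
```

Even with a well-maintained database, that radius, plus the 55% to 80% state-level accuracy the court cites, means a binary “Indiana or not Indiana” call made from the IP address alone will be wrong a meaningful share of the time.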

Also, users can hide their real IP address in various ways:

Even when the tracking of an IP address is accurate, however, internet users have myriad ways to disguise their IP address to appear as if they are located in another state. (Id. ¶ B (“Website users can appear to be anywhere in the world they would like to be.”)). For example, when a user connects to a proxy server, they can use the proxy server’s IP address instead of their own (somewhat like having a PO box in another state). (Id. ¶ 22). ProxyScrape, a free service, allows users to pretend to be in 129 different countries for no charge. (Id.). Virtual Private Network (“VPN”) technology allows something similar by hiding the user’s IP address to replace it with a fake one from somewhere else.

All these methods are free or cheap and easy to use. (Id. ¶¶ 21–28). Some even allow users to access the dark web with just a download. (Id. ¶ 21). One program, TOR, is specifically designed to be as easy to use as possible to ensure as many people can be as anonymous as possible. (Id.). It is so powerful that it can circumvent Chinese censors.

The reference to “Chinese censors” is a bit weird, but okay, point made: if people don’t want to appear as if they’re from Indiana, they can do so.
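And the mechanics really are as mundane as the court implies: a website only ever sees the apparent origin of a request. Here is a quick illustration (mine, not the court’s) using the standard proxies option of Python’s requests library and the api.ipify.org echo service; the proxy address is a placeholder, not a real server.

```python
# A website only sees the IP the request arrives from. Routing the same
# request through a proxy (the address below is a placeholder) makes the
# apparent origin the proxy's location, not the user's.
import requests

def apparent_ip(proxies: dict | None = None) -> str:
    # api.ipify.org simply echoes back the IP address it saw.
    return requests.get("https://api.ipify.org", proxies=proxies, timeout=10).text

print("Direct:    ", apparent_ip())
print("Via proxy: ", apparent_ip({"https": "http://203.0.113.10:8080"}))  # placeholder proxy
```

The point isn’t that circumvention is clever; it’s that the law’s enforcement model assumes the IP-derived location is both accurate and honest, and neither assumption holds.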

The court also realizes that just blocking adult content websites won’t block access to other sources of porn. The ruling probably violates a bunch of proposed laws against content that is “harmful to minors” by telling kids how to find porn:

Other workarounds include torrents, where someone can connect directly to another computer—rather than interacting with a website—to download pornography. (Id. ¶ 29). As before, this is free. (Id.). Minors could also just search terms like “hot sex” on search engines like Bing or Google without verifying their age. (Id. ¶ 32–33). While these engines automatically blur content to start, (Glogoza Decl. ¶¶ 5–6), users can simply click a button turning off “safe search” to reveal pornographic images, (Sonnier Decl. ¶ 32). Or a minor could make use of mixed content websites below the 1/3 mark like Reddit and Facebook

And thus, problem number one with age verification: it’s not going to be even remotely effective for achieving the policy goals being sought here.

With this background, it is easy to see why age verification requirements are ineffective at preventing minors from viewing obscene content. (See id. ¶¶ 14–34 (discussing all the ways minors could bypass age verification requirements)). The Attorney General submits no evidence suggesting that age verification is effective at preventing minors from accessing obscene content; one source submitted by the Attorney General suggests there must be an “investigation” into the effectiveness of preventive methods, “such as age verification tools.”

And that matters. Again, even if you agree with the policy goals, you should recognize that putting in place an ineffective regulatory regime that is easily bypassed is not at all helpful, especially given that it might also restrict speech for non-minors.

Unlike the 5th Circuit, this district court in Indiana understands the precedents related to this issue and knows that Ashcroft v. ACLU already dealt with the main issue at play in this case:

In the case most like the one here, the Supreme Court affirmed the preliminary enjoinment of the Child Online Protection Act. See Ashcroft II, 542 U.S. at 660–61. That statute imposed penalties on websites that posted content that was “harmful to minors” for “commercial purposes” unless those websites “requir[ed the] use of a credit card” or “any other reasonable measures that are feasible under available technology” to restrict the prohibited materials to adults. 47 U.S.C. § 231(a)(1). The Supreme Court noted that such a scheme failed to clear the applicable strict scrutiny bar. Ashcroft II, 542 U.S. at 665–66 (applying strict scrutiny test). That was because the regulations were not particularly effective as it was easy for minors to get around the requirements, id. at 667–68, and failed to consider less restrictive alternatives that would have been equally effective such as filtering and blocking software, id. at 668–69 (discussing filtering and blocking software). All of that is equally true here, which is sufficient to resolve this case against the Attorney General.

Indiana’s Attorney General points to the 5th Circuit ruling that tries to ignore Ashcroft, but the judge here is too smart for that. He knows he’s bound by the Supreme Court, not whatever version of Calvinball the 5th Circuit is playing:

Instead of applying strict scrutiny as directed by the Supreme Court, the Fifth Circuit applied rational basis scrutiny under Ginsberg v. New York, 390 U.S. 629 (1968), even though the Supreme Court explained how Ginsberg was inapplicable to these types of cases in Reno, 521 U.S. at 865–66. The Attorney General argues this court should follow that analysis and apply rational basis scrutiny under Ginsberg.

However, this court is bound by Ashcroft II. See Agostini v. Felton, 521 U.S. 203, 237–38 (1997) (explaining lower courts “should follow the case which directly controls”). To be sure, Ashcroft II involved using credit cards, and Indiana’s statute requires using a driver’s license or third-party identification software. But as discussed below, this is not sufficient to take the Act beyond the strictures of strict scrutiny, nor enough to materially advance Indiana’s compelling interest, nor adequate to tailor the Act to the least restrictive means.

And thus, strict scrutiny must apply, unlike in the 5th Circuit, and this law can’t pass that bar.

Among other things, the age verification in this law doesn’t just apply to material that is obscene to minors:

The age verification requirements do not just apply to obscene content and also burden a significant amount of protected speech for two reasons. First, Indiana’s statute slips from the constitutional definition of obscenity and covers more material than considered by the Miller test. This issue occurs with the third prong of Indiana’s “material harmful to minors” definition, where it describes the harmful material as “patently offensive” based on “what is suitable matter for . . . minors.” Ind. Code § 35-49-2-2. It is well established that what may be acceptable for adults may still be deleterious (and subject to restriction) to minors. Ginsberg, 390 U.S. at 637 (holding that minors “have a more restricted right than that assured to adults to judge and determine for themselves what sex material they may read or see”); cf. ACLU v. Ashcroft, 322 F.3d 240, 268 (3d Cir. 2003) (explaining the offensiveness of materials to minors changes based on their age such that “sex education materials may have ‘serious value’ for . . . sixteen-year-olds” but be “without ‘serious value’ for children aged, say, ten to thirteen”), aff’d sub nom. in relevant part, 542 U.S. 656 (2004). Put differently, materials unsuitable for minors may not be obscene under the strictures of Miller, meaning the statute places burdens on speech that is constitutionally protected but not appropriate for children.

Also, even if the government has a compelling interest in protecting kids from adult content, this law doesn’t actually do a good job of that:

To be sure, protecting minors from viewing obscene material is a compelling interest; the Act just fails to further that interest in the constitutionally required way because it is wildly underinclusive when judged against that interest. “[A] law cannot be regarded as protecting an interest ‘of the highest order’ . . . when it leaves appreciable damage to that supposedly vital interest unprohibited.” …

The court makes it clear how feeble this law is:

To Indiana’s legislature, the materials harmful to minors are not so rugged that the State believes they should be unavailable to adults, nor so mentally debilitating to a child’s mind that they should be completely inaccessible to children. The Act does not function as a blanket ban of these materials, nor ban minors from accessing these materials, nor impose identification requirements on everybody displaying obscene content. Instead, it only circumscribes the conduct of websites who have a critical mass of adult material, whether they are currently displaying that content to a minor or not. Indeed, minors can freely access obscene material simply by searching that material in a search engine and turning off the blur feature. (Id. ¶¶ 31–33). Indiana’s legislature is perfectly willing “to leave this dangerous, mind-altering material in the hands of children” so long as the children receive that content from Google, Bing, any newspaper, Facebook, Reddit, or the multitude of other websites not covered.

The court also points out how silly it is that the law only applies to sites with a high enough threshold (33%) of adult content. If the goal is to block kids’ access to porn, that’s a stupid way to go about it. Indeed, the court effectively notes that a website could get around the ban just by adding a bunch of non-adult images.

The Attorney General has not even attempted to meet its burden to explain why this speaker discrimination is necessary to or supportive of its compelling interest; why is it that a website that contains 32% pornographic material is not as deleterious to a minor as a website that contains 33% pornographic material? And why does publishing news allow a website to display as many adult-images as it desires without needing to verify the user is an adult? Indeed, the Attorney General has not submitted any evidence suggesting age verification would prohibit a single minor from viewing harmful materials, even though he bears the burden of demonstrating the effectiveness of the statute. Ultimately, the Act favors certain speakers over others by selectively imposing the age verification burdens. “This the State cannot do.” Sorrell v. IMS Health Inc., 564 U.S. 552, 580 (2011). The Act is likely unconstitutional.

In a footnote, the judge highlights an even dumber part of the law: that the 33% is based on the percentage of imagery, and gives a hypothetical of a site that would be required to age gate:

Consider a blog that discusses new legislation the author would like to see passed. It contains hundreds of posts discussing these proposals. The blog does not include images save one exception: attached to a proposal suggesting the legislature should provide better sexual health resources to adult-entertainment performers is a picture of an adult-entertainer striking a raunchy pose. Even though 99% of the blog is core political speech, adults would be unable to access the website unless they provide identification because the age verification provisions do not trigger based on the amount of total adult content on the website, but rather based on the percentage of images (no matter how much text content there is) that contain material harmful to minors.
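To spell out the arithmetic in that hypothetical, here is a toy sketch (mine, not the court’s) of the trigger as the footnote describes it: the duty to age-gate turns on the share of images deemed harmful to minors, with text content ignored entirely. All numbers are made up for illustration.

```python
# Toy illustration of the trigger described in the footnote: the fraction of
# *images* that are "harmful to minors" is what matters, not the share of the
# site's overall content. Numbers are made up for illustration.
def must_age_gate(harmful_images: int, total_images: int, threshold: float = 1 / 3) -> bool:
    if total_images == 0:
        return False
    return harmful_images / total_images >= threshold

# The hypothetical policy blog: hundreds of text posts, one image, and that
# image is adult content, so 100% of its images trip the threshold.
print(must_age_gate(harmful_images=1, total_images=1))       # True
# A site with 1,000 images, 320 of them adult, stays under the 1/3 line...
print(must_age_gate(harmful_images=320, total_images=1000))  # False (32%)
# ...while 334 of 1,000 crosses it.
print(must_age_gate(harmful_images=334, total_images=1000))  # True (33.4%)
```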

The court suggests some alternatives to this law, from requiring age verification for accessing any adult content (though it notes that’s also probably unconstitutional, even if it’s less restrictive) to having the state offer free filtering and blocking tech for parents to use with their kids:

Indiana could make freely available and/or require the use of filtering and blocking technology on minors’ devices. This is a superior alternative. (Sonnier Decl. ¶ 47 (“Internet content filtering is a superior alternative to Internet age verification.”); see also Allen Decl. ¶¶ 38–39 (not disputing that content filtering is superior to age verification as “[t]he Plaintiff’s claim makes a number of correct positive assertions about content filtering technology” but noting “[t]here is no reason why both content filtering and age verification could not be deployed either consecutively or concurrently”)). That is true for the reasons discussed in the background section: filtering and blocking software is more accurate in identifying and blocking adult content, more difficult to circumvent, allows parents a place to participate in the rearing of their children, and imposes fewer costs on third-party websites.
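For a sense of why device-level filtering burdens adults so much less, here is a bare-bones sketch (mine, not any product the court or the state actually names) of the basic idea: a parent-managed blocklist is checked locally, on the minor’s device, before a request ever leaves it, and no ID or biometric data is sent to any website or the state. Real tools (DNS filters, OS-level parental controls) are far more sophisticated, but the architecture is the same.

```python
# Bare-bones sketch of client-side content filtering: a parent-managed
# blocklist consulted locally before a page loads. The domains are placeholders.
from urllib.parse import urlparse

PARENT_BLOCKLIST = {"example-adult-site.com", "another-blocked-site.net"}

def allowed(url: str) -> bool:
    host = urlparse(url).hostname or ""
    # Block listed domains and any of their subdomains.
    return not any(host == d or host.endswith("." + d) for d in PARENT_BLOCKLIST)

print(allowed("https://example-adult-site.com/video"))   # False: blocked on-device
print(allowed("https://en.wikipedia.org/wiki/Indiana"))  # True: unaffected
```

Because the filter lives on the child’s device and under the parent’s control, adults elsewhere never have to hand over anything at all, which is exactly what makes it the less restrictive alternative.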

And thus, because the law is pretty obviously unconstitutional, the judge grants the injunction, blocking the law from going into effect. Indiana will almost certainly appeal, and we’ll have to keep going through this nonsense over and over again.

Thankfully, Indiana is in the 7th Circuit, not the 5th, so there’s at least somewhat less of a chance for pure nuttery on appeal.

Filed Under: 1st amendment, age gating, age verification, filters, indiana, protect the children, strict scrutiny, todd rokita
Companies: free speech coalition