
The Conservatives On The Supreme Court Are So Scared Of Nudity, They’ll Throw Out The First Amendment

from the seems-bad dept

The Supreme Court this morning took a chainsaw to the First Amendment on the internet, and the impact is going to be felt for decades going forward. In the FSC v. Paxton case, the Court upheld the very problematic 5th Circuit ruling that age verification online is acceptable under the First Amendment, despite multiple earlier Supreme Court rulings that said the opposite.

Justice Thomas wrote the 6-3 majority opinion, with Justice Kagan writing the dissent (joined by Sotomayor and Jackson). The practical effect: states can now force websites to collect government IDs from anyone wanting to view adult content, creating a massive chilling effect on protected speech and opening the door to much broader online speech restrictions.

Thomas accomplished this by pulling off some remarkable doctrinal sleight of hand. He ignored the Court’s own precedents in Ashcroft v. ACLU by pretending online age verification is just like checking ID at a brick-and-mortar store (it’s not), applied a weaker “intermediate scrutiny” standard instead of the “strict scrutiny” that content-based speech restrictions normally require, and—most audaciously—invented an entirely new category of “partially protected” speech that conveniently removes First Amendment protections exactly when the government wants to burden them. As Justice Kagan’s scathing dissent makes clear, this is constitutional law by result-oriented reasoning, not principled analysis.

As we’ve noted, in cases like Ashcroft v. ACLU and Brown v. EMA, the Supreme Court had long established that states couldn’t just throw around vague claims of “harmful to minors” to ignore the First Amendment, or at the very least to lower the standard of scrutiny from “strict scrutiny” to “intermediate scrutiny” (though not, as Ken Paxton hoped, all the way down to “rational basis”).

The real danger here isn’t just Texas’s age verification law—it’s that Thomas has handed every state legislature a roadmap for circumventing the First Amendment online. His reasoning that “the internet has changed” and that intermediate scrutiny suffices for content-based restrictions will be cited in countless future cases targeting online speech. Expect age verification requirements to be attempted for social media platforms (protecting kids from “harmful” political content), for news sites (preventing minors from accessing “disturbing” coverage), and for any online speech that makes moral authorities uncomfortable.

And yes, to be clear, the majority opinion seeks to limit this just to content deemed “obscene” to avoid such problems, but it’s written so broadly as to at least open up challenges along these lines.

Thomas’s invention of “partially protected” speech, a category the government can somehow burden even where the speech remains protected, is particularly insidious because it’s infinitely expandable. Any time the government wants to burden speech, it can simply argue that the burden is built into the right itself—making First Amendment protection vanish exactly when it’s needed most. This isn’t constitutional interpretation; it’s constitutional gerrymandering.

The conservative justices may think they’re just protecting children from pornography, but they’ve actually written a permission slip for the regulatory state to try to control online expression. The internet that emerges from this decision will look much more like the one authoritarian governments prefer: where every click requires identification, where any viewpoint can be age-gated, and where anonymity becomes a luxury only the powerful can afford. Thomas’s “starch” in constitutional standards? It just got bleached out of existence.

Texas, like many States, prohibits the distribution of sexually explicit content to children. Tex. Penal Code Ann. §43.24(b) (West 2016). But, although that prohibition may be effective against brick-and-mortar stores, it has proved challenging to enforce against online content. In an effort to address this problem, Texas enacted H. B. 1181, Tex. Civ. Prac. & Rem. Code Ann. §129B.001 et seq. (West Cum. Supp. 2024), which requires certain commercial websites that publish sexually explicit content to verify the ages of their visitors. This requirement furthers the lawful end of preventing children from accessing sexually explicit content. But, it also burdens adult visitors of these websites, who all agree have a First Amendment right to access at least some of the content that the websites publish. We granted certiorari to decide whether these burdens likely render H. B. 1181 unconstitutional under the Free Speech Clause of the First Amendment. We hold that they do not. The power to require age verification is within a State’s authority to prevent children from accessing sexually explicit content. H. B. 1181 is a constitutionally permissible exercise of that authority.

There’s a lot of throat clearing in the majority opinion regarding the government’s power to block access to “obscene” material, and where it can limit access by children to sexually explicit material. That’s well-worn territory. The issue here is that with online age verification you have some very significant problems—which the Supreme Court used to recognize: the burden on adults of having to prove their age (and relinquish significant privacy in doing so) as well as the fact that the tech sucks and frequently gets stuff wrong.

But Thomas seems to act as though this is a simple extension of laws that prohibit stores from selling adult magazines to kids.

Obscenity is no exception to the widespread practice of requiring proof of age to exercise age-restricted rights. The New York statute upheld in Ginsberg required age verification: It permitted a seller who sold sexual material to a minor to raise “‘honest mistake’” as to age as an affirmative defense, but only if the seller had made “‘a reasonable bona fide attempt to ascertain the true age of [the] minor.’” 390 U. S., at 644. Most States to this day also require age verification for in-person purchases of sexual material. And, petitioners concede that an in-person age verification requirement is a “traditional sort of law” that is “almost surely” constitutional. Tr. of Oral Arg. 17.

The facts of Ginsberg illustrate why age verification, as a practical matter, is necessary for an effective prohibition on minors accessing age-inappropriate sexual content. The statute in that case prohibited the knowing sale of sexual content to a minor under the age of 17. 390 U. S., at 633. The defendant was convicted of knowingly selling a pornographic magazine to a 16-year-old. Id., at 631. But, most of the time, it is almost impossible to distinguish a 16-year-old from a 17-year-old by sight alone. Thus, had the seller in Ginsberg not had an obligation to verify the age of the purchaser, he likely could have avoided liability simply by asserting ignorance as to the purchaser’s age. Only an age-verification requirement can ensure compliance with an age-based restriction.

Thomas then claims that “The need for age verification online is even greater” and even cites Brown v. EMA (which found California’s law restricting the sale of violent video games to minors unconstitutional) to somehow… support the argument here?

Thomas then falsely claims that the law does not regulate the speech of adults, which clearly goes against the opinion in Ashcroft.

Because H. B. 1181 simply requires proof of age to access content that is obscene to minors, it does not directly regulate the protected speech of adults…. On its face, the statute regulates only speech that is obscene to minors. That speech is unprotected to the extent the State seeks only to verify age. And, the statute can easily “be justified without reference to the [protected] content of the regulated speech,” because its apparent purpose is simply to prevent minors, who have no First Amendment right to access speech that is obscene to them, from doing so.

That’s legal fiction dressed up as statutory interpretation. Age verification requirements absolutely burden adult access to protected speech—that’s the entire point of challenging them.

The majority admits that there is some First Amendment concern here, but argues that it doesn’t require strict scrutiny… in part because that would make all age verification laws suspect, even those for brick-and-mortar stores, which Thomas uses as a kind of “gotcha” to support his argument that it’s fine online as well:

Applying the more demanding strict-scrutiny standard would call into question the validity of all age-verification requirements, even longstanding requirements for brick-and-mortar stores. But, as petitioners acknowledge, after Ginsberg, no serious question about the constitutionality of in-person age-verification requirements for obscenity to minors has arisen. See Tr. of Oral Arg. 43 (acknowledging that they “don’t know of any . . . challenge being brought” to an age-verification requirement for “brick-and-mortar stores”). Petitioners insist that their proposed rule would not call into question these “traditional” requirements, because such requirements would “almost surely satisfy” strict scrutiny. Id., at 17. They also contend that a sufficiently tailored online age-verification requirement (although not Texas’s) could satisfy strict scrutiny too. Id., at 6–8. But, if we are not to compromise “‘[t]he “starch” in our constitutional standards,’” we cannot share petitioners’ confidence.

Thomas is doing exactly what he rails against in other contexts: turning the First Amendment into a mushy balancing test instead of a clear constitutional command. The only difference here is that sexual content apparently makes him squeamish enough to abandon his usual textualist principles.

To get around the ruling in Ashcroft, he claims that COPA (the law it invalidated) was actually a ban on content harmful to minors, even as he eventually admits that COPA (like the Texas law at issue) had an age-verification requirement that would allow such content to be published. So what is the difference? The majority claims that with COPA the age-verification aspect was an affirmative defense, whereas with the Texas HB 1181 law, it’s a mandate. To me, that makes the Texas law even more of a problem and a burden, but Thomas reads it the other way:

To be sure, COPA established an age-verification defense. Id., at 662. But, because it did so only as an affirmative defense, COPA still operated as a ban on the public posting of material that is obscene to minors. See id., at 661–662 (citing 47 U. S. C. §§231(a)(1), (c)(1)). This was so because an indictment need only “alleg[e] the necessary elements of an offense”; it need not “anticipate affirmative defenses.” United States v. Sisson, 399 U. S. 267, 287–288 (1970). Under COPA, the Government thus remained free to bring criminal charges against any covered person who publicly posted speech that was obscene to minors, even if he had fully implemented compliant age-verification procedures.

While the majority opinion is written to suggest it only applies directly to “pornographic” content deemed “obscene to children,” it’s really taking an axe to the fundamental ruling in the Reno case (which tossed out most of the Communications Decency Act) and Ashcroft. Thomas claims this is okay because the internet is different now:

In the quarter century since the factual record closed in Ashcroft II, the internet has expanded exponentially. In 1999, only two out of five American households had home internet access. Dept. of Commerce, Census Bureau, Home Computers and Internet Use in the United States: Aug. 2000, p. 2 (2001). Nearly all those households used a desktop computer or laptop to connect to the internet, and most used a dial-up connection. Dept. of Commerce, Economics and Statistics Admin., A Nation Online: Entering the Broadband Age 1, 5 (2004). Connecting through dial-up came with significant limitations: Dial-up is much slower than a modern broadband connection, and because dial-up relied on the home’s phone line, many households could not use the internet and make or receive phone calls at the same time. See Inline Connection Corp. v. AOL Time Warner Inc., 302 F. Supp. 2d 307, 311 (Del. 2004). And, “video-on-demand” was largely just a notion that figures like “Bill Gates and Al Gore rhapsodize[d] about”; “most Netizens would [have] be[en] happy with a system fast enough to view static photos without waiting an age.” Kennedy 493–494.

In contrast, in 2024, 95 percent of American teens had access to a smartphone, allowing many to access the internet at almost any time and place. M. Faverio & O. Sidoti, Pew Research Center, Teens, Social Media and Technology 2024, p. 19. Ninety-three percent of teens reported using the internet several times per day, and watching videos is among their most common activities online. Id., at 4–5, 20. The content easily accessible to adolescents online includes massive libraries of pornographic videos. For instance, in 2019, Pornhub, one of the websites involved in this case, published 1.36 million hours—or over 150 years—of new content. App. 177. Many of these readily accessible videos portray men raping and physically assaulting women—a far cry from the still images that made up the bulk of online pornography in the 1990s. See N. Kristof, The Children of Pornhub, N. Y. Times, Dec. 6, 2020, p. SR4. The Court in Reno and Ashcroft II could not have conceived of these developments, much less conclusively resolve how States could address them.

The majority claims that those rulings “do not cease to be precedential simply because technology has changed so dramatically” but that they can be limited because so many people have the internet.

That argument is bonkers and dangerous. If “more people use technology now” justifies weakening constitutional protections, then every digital right is up for grabs. That line will now show up in briefs across the country as states argue that widespread internet adoption somehow diminishes the First Amendment’s force online.

It is misleading in the extreme to assume that Reno and Ashcroft II spoke to the circumstances of this case simply because they both dealt with “the internet” as it existed in the 1990s. The appropriate standard of scrutiny to apply in this case is a difficult question that no prior decision of this Court has squarely addressed.

That’s a shot across the bow of free speech online. It’s Justice Thomas declaring “open season” on regulating internet speech.

The opinion then spends a lot of time explaining why intermediate scrutiny is the right standard, and not strict scrutiny (as FSC wanted) or “rational basis” (as Texas wanted). This feels like Thomas trying to split the baby (which, I should remind you, kills the baby) and pretending to compromise. It’s not a compromise. It’s a full frontal assault on internet speech.

The dissent, by Kagan, understands how problematic this result is:

The majority’s opinion concluding to the contrary is, to be frank, confused. The opinion, to start with, is at war with itself. Parts suggest that the First Amendment plays no role here—that because Texas’s law works through age verification mandates, the First Amendment is beside the point. See ante, at 13–18. But even the majority eventually gives up that ghost. As, really, it must. H. B. 1181’s requirements interfere with—or, in First Amendment jargon, burden—the access adults have to protected speech: Some individuals will forgo that speech because of the need to identify themselves to a website (and maybe, from there, to the world) as a consumer of sexually explicit expression. But still, the majority proposes, that burden demands only intermediate scrutiny because it arises from an “incidental” restriction, given that Texas’s statute uses age verification to prevent minors from viewing the speech. See ante, at 13, 18–19. Except that is wrong—nothing like what we have ever understood as an incidental restraint for First Amendment purposes. Texas’s law defines speech by content and tells people entitled to view that speech that they must incur a cost to do so. That is, under our First Amendment law, a direct (not incidental) regulation of speech based on its content—which demands strict scrutiny.

Kagan takes issue with Thomas’s claim that this case is somehow different from the existing precedents:

The majority’s attempt to distinguish our four precedents saying just that rounds out the list of its errors. According to the majority, all of those decisions involved prohibiting rather than merely burdening adults’ access to obscene-for-children speech. See ante, at 21. But that is not true. And in any event it would not matter: The First Amendment prevents making speech hard, as well as banning it outright. So on all accounts the majority’s rationale craters.

The majority is not shy about why it has adopted these special-for-the-occasion, difficult-to-decipher rules. It thinks they are needed to get to what it considers the right result: giving Texas permission to enforce its statute. See ante, at 19–21. But Texas should not receive that permission if it can achieve its goal as to minors while interfering less with the speech choices of adults. And if it cannot, then Texas’s statute would survive strict scrutiny, given the obvious importance of its goal. For that reason, the majority’s analysis is as unnecessary as it is unfaithful to the law.

The dissent also calls out the very real burdens that online age-verification creates that brick-and-mortar age verification does not. This is a point that Thomas effectively ignores:

Recall how the statute works. To enter a covered website—with all the protected speech just described—an individual must verify his age by using either a “government-issued identification” like a driver’s license or “transactional data” associated with things like a job or mortgage. §§129B.001(7), 129B.003(b)(2); see ante, at 2–3. For the would-be consumer of sexually explicit materials, that requirement is a deterrent: It imposes what our First Amendment decisions often call a “chilling effect.” E.g., Americans for Prosperity Foundation v. Bonta, 594 U. S. 595, 606 (2021). It is not, contra the majority, like having to flash ID to enter a club. See ante, at 14–15. It is turning over information about yourself and your viewing habits—respecting speech many find repulsive—to a website operator, and then to . . . who knows? The operator might sell the information; the operator might be hacked or subpoenaed. We recognized the problem in a case involving sexual material on cable TV: Similar demands, we decided, would “restrict viewing by subscribers who fear for their reputations should the operator, advertently or inadvertently, disclose the list of those who wish to watch the ‘patently offensive’ channel.” Denver Area Ed. Telecommunications Consortium, Inc. v. FCC, 518 U. S. 727, 754 (1996). The internet context can only increase the fear. And the Texas law imposes costs not just on potential users, but on website operators too. They must either implement a system costing (the District Court found) at least $40,000 for every 100,000 verifications, or else pay penalties of $10,000 per day.

The dissent specifically highlights how this case was nearly identical to Ashcroft, and the majority is simply making up random reasons to pretend it’s different. Amusingly, Kagan cites Thomas’s own concurrence in Lorillard Tobacco to make that point.

And the denouement: The statute the Court addressed in Ashcroft v. American Civil Liberties Union, 542 U. S. 656 (2004), was a near-twin of Texas’s. The Child Online Protection Act (COPA) prohibited commercial entities from posting on the internet content “harmful to minors.” Id., at 661 (quoting 47 U. S. C. §231(a)(1)). And just like H. B. 1181, that statute defined the covered material by adapting the Miller obscenity test for children—thus creating a category of obscene-for-children speech. See 542 U. S., at 661–662; supra, at 4. So too, COPA made the adoption of an age verification system crucial. It did so by providing an affirmative defense to any entity that verified age through an “adult personal identification number” or similar mechanism before granting access to the posted materials. Ashcroft, 542 U. S., at 662. So, as in H. B. 1181, if the poster verified age, no liability could attach. How, then, to analyze such a statute? The Court viewed the problem as it had in prior cases: COPA, though directed at keeping sexually explicit materials from children, “was likely to burden some speech that is protected for adults.” Id., at 665. And because of that “content-based restriction[],” the Court needed to apply strict scrutiny. Id., at 660, 665, 670. The Government thus had to show that “the proposed alternatives will not be as effective as the challenged statute.” Id., at 665. In short, Ashcroft adhered to the view that “‘the governmental interest in protecting children from harmful materials’ does not ‘justify an unnecessarily broad suppression of speech addressed to adults.’” Lorillard Tobacco Co. v. Reilly, 533 U. S. 525, 581 (2001) (THOMAS, J., concurring in part and concurring in judgment) (quoting Reno, 521 U. S., at 875).

Kagan then calls out how the majority ruling creates an entirely new category of First Amendment speech: “partially protected” speech.

The majority tries to escape that conclusion with a maneuver found nowhere in the world of First Amendment doctrine. It turns out, the majority says, that the First Amendment only “partially protects” the speech in question: The “speech is unprotected to the extent the State seeks only to verify age.” Ante, at 18, 29, n. 12 (emphasis deleted); see ante, at 28 (the speech is “unprotected to the extent that the State imposes only an age-verification requirement”). Meaning, the speech is unprotected to the extent that the State is imposing the very burden under review. Or said another way, the right of adults to view the speech has the burden of age verification built right in. That is convenient, if altogether circular. In the end, the majority’s analysis reduces to this: Requiring age verification does not directly burden adults’ speech rights because adults have no right to be free from the burden of age verification. Gerrymander the right to incorporate the burden, and the critical conclusion follows. If only other First Amendment cases were so easy!

As for Thomas’s argument that “the internet is different now,” well, Kagan points out that changed facts may alter how a case comes out, but they should never change the level of scrutiny:

That leaves only the majority’s claim—again mistaken— that the internet has changed too much to follow our precedents’ lead. See ante, at 25–27. Of course technology has developed, both swiftly and surely. And that fact might matter (as indeed the burden/ban distinction might) to how strict scrutiny applies—and particularly to whether the State can show it has adopted the least speech-restrictive means to achieve its goal. Ashcroft explicitly recognized that point: It thought that, given the pace of technological change, the District Court might make a different decision than it had five years earlier about whether there were “less restrictive alternative[s]” to COPA. 542 U. S., at 671–672. To that extent—but to that extent only—the majority is right that Ashcroft was “self-consciously narrow and factbound.” Ante, at 26. Not, though, as to the level of scrutiny. On that question, the Court was unequivocal that because COPA was “a content-based speech restriction,” it must satisfy the strict-scrutiny test. 542 U. S., at 665; see supra, at 8–9, and n. 1. For that was a matter of basic First Amendment principle. And as this Court has understood: “Whatever the challenges of applying the Constitution to ever-advancing technology, the basic principles of the First Amendment do not vary.” Moody v. NetChoice, LLC, 603 U. S. 707, 733 (2024) (quoting Brown v. Entertainment Merchants Assn., 564 U. S. 786, 790 (2011)); see TikTok Inc. v. Garland, 604 U. S. ___, ___ (2025) (GORSUCH, J., concurring in judgment) (slip op., at 2) (“[E]ven as times and technologies change, ‘the principle of the right to free speech is always the same’” (quoting Abrams v. United States, 250 U. S. 616, 628 (1919) (Holmes, J., dissenting))).

And, as Kagan concludes, the majority is now admitting that the Texas law is not the least burdensome way to reach this result, and that seems like a real problem for speech:

The last part of the majority’s opinion—plus some of its footnotes—shows why all this matters. In concluding that H. B. 1181 passes constitutional muster, the majority states (correctly) that under intermediate scrutiny Texas need not show it has selected the least speech-restrictive way of accomplishing its goal. See ante, at 32. Even if there were a mechanism that (1) as well or better prevented minors’ access to the covered materials and (2) imposed a lesser burden on adults’ ability to view that expression, Texas could spurn that “superior” method. Ante, at 34. Likewise, the majority—because it is applying a more forgiving standard—can ignore a host of questions about how far H. B. 1181 burdens protected expression. See Tr. of Oral Arg. 67–68. In the fine print of two footnotes, the majority declares that it has no need to explore (1) whether H. B. 1181 requires covered websites to demand age verification for all their content or only for the subset that is obscene for minors; (2) whether H. B. 1181 requires that covered speech be obscene “only to a minor (including a toddler)” or “to all minors (including 17-year-olds)”; and (3) whether H. B. 1181 permits websites to use “newer biometric methods of age verification, like face scans,” that pose fewer privacy concerns than submitting government ID and transactional data. Ante, at 17, n. 7 (emphasis in original); ante, at 34, n. 14. The majority explains that even if Texas answered each of those questions in a maximally burdensome way—requiring government ID to view speech that is protected even for children because one-third of the website’s contents are obscene for two-year-olds—H. B. 1181 can go forward. And again, that is true even if Texas has a less burdensome way of “equally or more effective[ly]” achieving its objective….

I would demand Texas show more, to ensure it is not undervaluing the interest in free expression. Texas can of course take measures to prevent minors from viewing obscene-for-children speech. But if a scheme other than H. B. 1181 can just as well accomplish that objective and better protect adults’ First Amendment freedoms, then Texas should have to adopt it (or at least demonstrate some good reason not to). A State may not care much about safeguarding adults’ access to sexually explicit speech; a State may even prefer to curtail those materials for everyone. Many reasonable people, after all, view the speech at issue here as ugly and harmful for any audience. But the First Amendment protects those sexually explicit materials, for every adult.

The only sliver of possible “good news” is that the majority opinion focuses so heavily on how intermediate scrutiny applies only because some adult content is “obscene to minors,” making it unprotected by the First Amendment. That focus means this ruling may not be as helpful to those who wish to impose age verification requirements on all social media, which would necessarily cover plenty of fully protected speech. But Thomas’s majority opinion is written in a manner that will, unfortunately, allow politicians around the country to relitigate questions that had once been seen as very clear and settled law.

Kagan’s final line cuts to the heart of what Thomas’s majority has abandoned: the principle that constitutional rights don’t disappear just because the government finds the speech distasteful or because technology makes enforcement more challenging. The First Amendment was designed to protect unpopular speech—speech that makes authorities uncomfortable, speech that challenges prevailing moral views, speech that powerful people would prefer to suppress.

By creating his “partially protected” speech doctrine and blessing age verification burdens that would have been unthinkable a decade ago, Thomas has essentially told state governments: find the right procedural mechanism, and you can burden any online speech you dislike. Today it’s pornography. Tomorrow it will be political content that legislators deem “harmful to minors,” news coverage that might “disturb” children, or social media discussions that don’t align with official viewpoints.

The conservatives may have gotten their victory against online adult content, but they’ve handed every future administration—federal and state—a blueprint for dismantling digital free speech. They were so scared of nudity that they broke the Constitution. The rest of us will be living with the consequences for decades.

Filed Under: 1st amendment, age verification, burdens, clarence thomas, elena kagan, hb 1181, intermediate scrutiny, internet, ken paxton, obscenity, online speech, supreme court, texas, think of the children
Companies: free speech coalition

Democrats Should Be Stopping A Lawless President, Not Helping Censor The Internet, Honestly WTF Are They Thinking

from the missing-the-moment dept

It has long been clear that the GOP, as it is today, has a death wish for our Constitutional order, but that’s a subject for another post. What’s more relevant is that, at this point, one could easily conclude that Democrats would like our Constitution to die too, in part because of how enfeebled they have so far been in resisting the lawlessness exhibited by the Article II branch of our government—although that, too, is a subject for another post. (Standing against the Vought OMB nomination, and slowing his appointment process, is good. But it’s not enough, and even though it’s a good start, it doesn’t forgive all the missed opportunities to slow the damage everyone could see coming from a mile away, or to provide the leadership needed to assure the public that Democrats had their back so that the public could, in turn, back them.)

What is a subject for this post is how, in bizarrely carrying on with normal order in the face of an actual coup taking place under their noses—literally right down the street—the thing Democrats are trying to do with that normal order is censor the Internet.

It is suicidal idiocy to do either of these things, let alone both. There is no political math that could justify Democrats failing to do everything in their power to defend Congress’s Article I powers, let alone then using those powers in ways the First Amendment expressly tells them they can’t.

And yet that is what at least some Senate Democrats have gone and done by moving forward KOSMA. The bill, a “bipartisan” effort initially pushed by Democratic Senator Brian Schatz (in partnership with fellow Dem Chris Murphy, along with Republicans Ted Cruz and Katie Britt), is sort of an attempt to create a “more palatable” version of KOSA, but it is still a censorship bill at its core.

KOSMA reared its ugly head again this morning in a Senate mark-up session and passed through it easily with Democratic support and no debate at all (barely even a mention of it). And even if some of that support may have been merely pro forma, to move the Senate along so that it could get to addressing the larger issues at hand, any support was still too much support for what this bill proposes to do.

Because what this bill intentionally proposes to do is the same extreme thing that KOSA proposed to do: censor the internet for young people. And yet Congress would attempt to take such drastic legislative steps despite, or, in the case of the GOP supporters, perhaps because of, how much of an incursion it is on the rights of so many: the teenagers themselves, all the adult Internet users who will be impacted, and all the Internet speakers who now won’t be able to speak if this law hits the books.

When kids’ parents are being fired from government jobs, their friends rounded up, their schools forced to be racist, and the democracy they were promised to grow up in fully imperiled, it is the height of lunacy for any Democrat (looking at you, Senators Cantwell, Schatz & Murphy, but thankfully not you, Senator Markey) to think that they are thinking of the children by doing the Internet equivalent of banning their books. Not just because there are so many bigger fish to fry, especially right now, if we really cared about their interests, but because by censoring the Internet we are taking away everyone’s tools to be able to organize and fight the true danger at our door.

To support it is just dumb, in so many ways. After all, Democrats should know better: way too many had joined the push to ban TikTok, and we saw how stupidly (and unconstitutionally) that move worked out. And when the Trump Administration is blithely ignoring laws en masse, it makes no sense to hand his election-denying henchwoman Pam Bondi and her DOJ a fresh new one that they can wield against anyone they see as a threat to Trumpian power. But all that is not even the end of the problems with Democratic support for this bill.

The bigger problem Democrats need to recognize, and fast, is that it teaches the electorate—assuming we ever again have another free and fair election, which is currently not looking promising—that Democrats don’t really care about the Constitutional order Trump is actively destroying any more than he does. Which means that, for those who would want to resist this lawless president and his open threats to the limitations the Constitution places on government power, there will be no one to vote for to stand up in its defense.

Because pushing these unconstitutional laws, or even simply nudging them through a markup session, especially while seeming to ignore the grossly unconstitutional actions of the Trump Administration, tells the public that if they want someone who cares about the limitations the Constitution places on government power, they can’t choose Democrats to respect them any more than they can choose anyone in the GOP. But without being able to elect either party to defend our democracy, it will be game over for it. And the kids will then have an even bigger problem to deal with than the Internet.

Filed Under: censorship, coup, democrats, free speech, gop, kids, kosa, kosma, think of the children

New KOSA, Same As Old KOSA, But Now With Elon’s Ignorant Endorsement

from the elon-got-rolled-again dept

The censors are making a big push on the new version of “KOSPA,” which is the Kids Online Safety Act (KOSA) merged with a problematic privacy bill (hence the “P”). Over the weekend, Senator Marsha Blackburn — who directly admitted the point of the bill was to “protect minor children from the transgender [sic] in our culture” — released an updated version of the bill which has some cosmetic changes to try to pretend they’ve fixed the very real and widespread concerns regarding how the bill can be used by whichever party is in charge to censor content they dislike.

As we’ve discussed, the main issue with the bill is the “duty of care” section, which has vague terminology that will encourage companies to simply remove any controversial content, rather than have to fight in court after-the-fact about whether the content was “harmful” to kids. On top of that, the bill heavily encourages companies to embrace problematic age verification tools that have already been shown to be a privacy disaster.

The key changes in the new version of the bill are cosmetic, rather than substantial, designed to allow supporters of the bill to insist it won’t be used for censorship, even though it will be.

First, it adds a weird “reasonable and prudent person” standard, which wasn’t in the earlier version of the bill, even though it did have “exercise reasonable care.” Did they think that “reasonable care” didn’t already require use of the judicial fiction of the “reasonable person” standard before? All this amendment does is rephrase the same vague standard in a way that ignorant people might think makes a difference:

A covered platform shall exercise reasonable care in the creation and implementation of any design feature to prevent and mitigate the following harms to minors where a reasonable and prudent person would agree that such harms were reasonably foreseeable by the covered platform and would agree that the design feature is a contributing factor to such harms

Here’s the problem with this, which many people don’t seem to understand: figuring out what a “reasonable and prudent” person would do is an after-the-fact judicial analysis. That means that any time something bad happens online that whips enforcers into a frenzy, they will go after any website they dislike where said “bad thing” was discussed, and insist that the platform should have “prevented and mitigated” the bad thing.

Then the platform would need to go to court and spend roughly $5 million across three to four years to argue that the “bad thing” which was discussed (but didn’t actually happen on the platform) wasn’t “reasonably foreseeable.”

Most platforms aren’t going to want to do that. Instead, they’ll just remove all sorts of content that might otherwise lead to such a lengthy, draining, and resource-intensive legal fight (including all the negative headlines that will go along with it).

That’s why this is a censorship bill, first and foremost.

It’s particularly galling that Democrats are supporting this, especially given that the Republicans will have full control over Congress and the FTC (which gets to enforce this law), and the leading candidate to run the FTC has already made it clear that he’s eager to extend the agency’s powers to launch costly, punishing, and vindictive investigations and campaigns to push forward MAGA culture wars.

The new version of KOSA also includes a particularly stupid section that just says “oh yeah, don’t use this to violate the First Amendment,” which doesn’t actually mean it won’t violate the First Amendment:

Nothing in this section shall be construed to allow a government entity to enforce subsection (a) based upon the viewpoint of users expressed by or through any speech, expression, or information protected by the First Amendment to the Constitution of the United States.

If your bill doesn’t violate the First Amendment, you don’t actually have to put into it “hey, don’t violate the First Amendment with the bill.” If anything, that new paragraph is an admission that of course the bill can and will be used to infringe on First Amendment rights.

Along with the release of the new bill, we had ExTwitter CEO Linda Yaccarino come out saying she supported it:

[Screenshot of Linda Yaccarino’s post endorsing the bill]

Yaccarino’s support for KOSPA seems misguided and hypocritical. If “protecting our children” is the top priority, why did her boss, the owner of the site, directly reinstate an account of a conspiracy theorist who shared horrific child sexual abuse material?

Of course, during a Congressional hearing earlier this year, Yaccarino had also come out in support of KOSA, though it seemed pretty clear she had no idea what it was. At the time she claimed she wanted to “make sure it accelerates” and “make sure it continues to offer community for teens that are seeking that voice.” That is someone who has no idea what they’re talking about.

But it sounds like some politicians pounced on this and used it as an opportunity to roll this naive and foolish CEO into supporting a bill that is going to cause trouble for a site as poorly managed and moderated as ExTwitter. And, of course, Elon Musk also came out in support:

[Screenshot of Elon Musk’s post endorsing the bill]

Except, of course, this bill does nothing to actually protect children. All it does is create massive liability for any website (including ExTwitter) that allows any content that can later be claimed to be “harmful to children.”

This really feels like a replay of that time two and a half years ago when a very naive and easily rolled Elon endorsed the EU’s Digital Services Act. Of course, then the EU used the DSA to kick off a big investigation into the company, which eventually led to Elon telling the very same official who had totally played Elon to get that endorsement to go “fuck your own face.”

It wouldn’t surprise me at all to see the same thing play out here. If KOSA does pass with Elon’s endorsement, then when the KOSA lawsuits start showing up against ExTwitter, you can bet he’s going to complain about how awful and unfair it all is.

Either way, it’s yet another example of how claims by Elon and Yaccarino to be “free speech absolutists” are absolute bullshit. This law is extremely censorial.

Indeed, Rand Paul, who has repeatedly explained the problems of the bill in very clear terms, was at it again this weekend, calling it “such a dire threat to our First Amendment rights….”

[Screenshot of Rand Paul’s post calling KOSA a threat to First Amendment rights]

Of course, Elon’s tweet about it has a bunch of flag-waving accounts screaming ridiculously about how ExTwitter is “the free speech site that protects kids” when neither is true.

Crucially, with very little time remaining in the current Congressional session, the window for passing KOSPA is rapidly closing. While there is a push from some Senators to get the House to vote on the bill, according to Congressional staffers I spoke with, House Republican leadership has remained cool to the idea so far. The tight timeline means that unless something changes dramatically, KOSPA is unlikely to advance before the end of the year. Because of this, opponents of the bill should focus their advocacy efforts on persuading House leaders to keep KOSPA off the agenda in the remainder of the session.

While Musk, Yaccarino, and KOSPA’s bipartisan supporters in Congress may claim to be protecting children, their misguided proposal would actually undermine free speech and privacy online without effectively improving child safety. Don’t let the “think of the children!” appeals fool you. Lawmakers must reject KOSPA’s flawed approach and defend Americans’ constitutional rights.

Filed Under: 1st amendment, duty of care, elon musk, free speech, kosa, kospa, linda yaccarino, moral panic, rand paul, think of the children
Companies: twitter, x

Senate To Kids: We’ll Listen To You When You Agree With Us On KOSA

from the listen-to-the-children...-not-those-kids dept

Apparently, Congress only “listens to the children” when they agree with what the kids are saying. As soon as some kids oppose something like KOSA, their views no longer count.

It’s no surprise given the way things were going, but the Senate today overwhelmingly passed KOSA by a 91 to 3 vote. The three no votes were from Senators Ron Wyden, Rand Paul, and Mike Lee.

There are still big questions about whether the House will follow suit, and, if so, how different their bill would be, and how the bills from the two chambers would be reconciled, but this is a step closer to KOSA becoming law, and creating all of the many problems people have been highlighting about it for years.

One thing I wanted to note, though, is how cynical the politicians supporting this have been. It’s become pretty typical for senators to roll out “example kids” as a kind of prop for why they have to pass these bills. They will have stories about horrible things that happened, but with no clear explanation for how this bill would actually prevent that bad thing, and while totally ignoring the many other bad things the bill would cause.

In the case of KOSA, we’ve already highlighted how it would do harm to all sorts of information and tools that are used to help and protect kids. The most obvious example is LGBTQ+ kids, who often use the internet to help find their identity or to communicate with others who might feel isolated in their physical communities. Indeed, GOP support for KOSA was conditioned on the idea that the law would be used to suppress LGBTQ+ related content.

But I did find it notable, after all of the pro-KOSA team’s use of kids as props to push for the bill, how little attention was given last week to the ACLU sending hundreds of students to Congress to tell them how much KOSA would harm them.

Last week, the American Civil Liberties Union sent 300 high school students to Capitol Hill to lobby against the Kids Online Safety Act, a bill meant to protect children online.

The teenagers told the staffs of 85 lawmakers that the legislation could censor important conversations, particularly among marginalized groups like L.G.B.T.Q. communities.

“We live on the internet, and we are afraid that important information we’ve accessed all our lives will no longer be available,” said Anjali Verma, a 17-year-old rising high school senior from Bucks County, Pa., who was part of the student lobbying campaign. “Regardless of your political perspective, this looks like a censorship bill.”

But somehow, that perspective gets mostly ignored in all of this.

It would have been nice to have had an actual discussion on the policy challenges here, but from the beginning, KOSA co-sponsors Richard Blumenthal and Marsha Blackburn refused to take any of the concerns about the bill seriously. They frequently insisted that any criticism of the bill was just “big tech” talking points.

And, while they made cosmetic changes to try to appease some, the bill does not (and cannot) fix its fundamental problems. The bill is, at its heart, about censorship. And, while it does not directly demand censorship, the easiest and safest way to comply with the law will be to take down whatever culture war hot topic politicians don’t like.

It’s kind of incredible that many of those who voted for the bill today were big supporters of the Missouri case against the administration, including Senator Eric Schmitt, who brought that suit as Missouri’s Attorney General and who voted in favor of KOSA today. So, apparently, according to Schmitt, governments should never try to influence how social media companies decide to take down content, but the government should have the power to take enforcement action against companies that don’t take down content the FTC decides is harmful.

There is a tremendous amount of hypocrisy here. And it would be nice if someone asked the senators voting in favor of this law why they were going against the wishes of all the kids who visited the Hill last week. After all, using kids’ wishes as leverage is exactly what the pro-KOSA senators tried to do to the few senators who pointed out the flaws in this terrible law.

Filed Under: child safety, kids, kosa, mike lee, rand paul, ron wyden, senate, think of the children

Nebraska Sues TikTok For Claiming To Be Family Friendly

from the the-biggest-scourge-is-the-dancing-kids-app dept

Another day, another dumb lawsuit against TikTok.

We’ve seen school districts and parents suing TikTok on the basis of extremely weird claims of “kids used TikTok, some bad stuff happened to kids, TikTok should be liable.”

But in the past year, it seems that a bunch of state AGs have decided to sue TikTok as well. I’ve lost track, but in the last year or so, I believe Arkansas, Indiana, Utah, Kansas, and Iowa have all sued TikTok using some theory or another about how “kids like this app” must violate some law. It’s impossible to keep up with the state of all of these lawsuits, though I know that Indiana’s lawsuit was tossed out, while Arkansas’s has somehow been allowed to proceed.

Perhaps buoyed by the success in Arkansas, Nebraska has now jumped into the fray as well, suing TikTok for being “deceptive” in claiming the platform is family friendly. You can read the full complaint here or embedded below.

Like the other lawsuits, this one is a hodgepodge of moral panic and conspiracy thinking. It has all the hits: mental health! eating disorders! suicide! China!

But the key to the complaint is this: because TikTok calls itself a family friendly app, while the moral panic narrative says “bad things happen to kids on TikTok,” Nebraska’s AG can claim that this is a “deceptive” practice by the company:

Despite such documented knowledge, Defendants continually misrepresent their platform as safe and appropriate for kids and teens, marketing the app as “Family Friendly” and suitable for users 12 and up, reassuring parents, educators, policymakers, and others….

Of course, the end result of nonsense lawsuits like this is that no website will ever do anything in the future to try to be “family friendly” again, because if something bad then happens, AGs will go hog-wild in suing.

It’s so incredibly misdirected and short-sighted.

Like all the others, this is really a lawsuit about politicians not liking the content on TikTok, not liking that the kids these days like TikTok, and not liking that TikTok happens to be partially owned by a Chinese company.

But they can’t quite come out and say that, so they have to come up with some nonsense way of bringing these lawsuits. “Omg, we have to protect the children” appears to be one of the more popular ones, despite the near total lack of any evidence of any inherent harm in TikTok (or other social media) on kids.

Yes, some kids are suffering from mental health problems. And yes, there are some discussions on TikTok that are disturbing. But most of that disturbing content remains protected speech under the First Amendment. Just because people with mental health challenges use TikTok does not mean that TikTok magically causes those mental health challenges.

Like so many state AG cases, this whole thing is really about grandstanding by the AGs who hope to be elected the next governor or senator of their state.

This is yet another example of why KOSA is so dangerous. It empowers state AGs to sue websites in new ways. The state AGs have made it clear with most of these lawsuits that they don’t care what’s actually happening or what actually makes sense. They want headlines for taking on the big bad Chinese company that is supposedly poisoning our kids’ minds… headlines and name recognition that will line them up to run for higher office.

Filed Under: deceptive practices, family friendly, michael hilgers, nebraska, social media, think of the children
Companies: bytedance, tiktok

Tim Wu Asks Why Congress Keeps Failing To Protect Kids Online. The Answer Is That He’s Asking The Wrong Question

from the wrong-question,-dumb-answer dept

While I often disagree with Tim Wu, I like and respect him, and always find it interesting to know what he has to say. Wu was also one of the earliest folks to give me feedback on my Protocols not Platforms paper, when he attended a roundtable at Columbia University discussing early drafts of the papers in that series.

Yet, quite frequently, he perplexes me. The latest is that he’s written up a piece for The Atlantic wondering “Why Congress Keeps Failing to Protect Kids Online.”

The case for legislative action is overwhelming. It is insanity to imagine that platforms, which see children and teenagers as target markets, will fix these problems themselves. Teenagers often act self-assured, but their still-developing brains are bad at self-control and vulnerable to exploitation. Youth need stronger privacy protections against the collection and distribution of their personal information, which can be used for targeting. In addition, the platforms need to be pushed to do more to prevent young girls and boys from being connected to sexual predators, or served content promoting eating disorders, substance abuse, or suicide. And the sites need to hire more staff whose job it is to respond to families under attack.

Of course, let’s start there. The “case for legislative action” is not, in fact, overwhelming. Worse, the case that the internet harms kids is… far from overwhelming. The case that it helps many kids is, actually, pretty overwhelming. We keep highlighting this, but the actual evidence does not, in any way, suggest that social media and the internet are “harming” kids. It actually says the opposite.

Let’s go over this again, because so many people want to ignore the hard facts: report after report has failed to find evidence that social media or the internet inherently harms kids.

What most of these reports did note is that there is some evidence that for some very small percentage of the populace who are already dealing with other issues, those issues can be exacerbated by social media. Often, this is because the internet and social media become a crutch for those who are not receiving the help that they need elsewhere. However, far more common is that the internet is tremendously helpful for kids trying to figure out their own identity, to find people they feel comfortable around, and to learn about and discover the wider world.

So, already, the premise is problematic that the internet is inherently harmful for children. The data simply does not support that.

None of this means that we shouldn’t be open to ways to help those who really need it. Or that we shouldn’t be exploring better regulations for privacy protection (not just for kids, but for everyone). But this narrative that the internet is inherently harmful is simply not supported by the data, even as Wu and others seem to pretend it’s clear.

Wu does mention the horror stories he heard from some parents while he was in the White House. And those horror stories do exist. But most of those horror stories are similar to the rare, but still very real, horror stories facing kids offline as well. We should be looking for ways to deal with those rare but awful stories, but that doesn’t mean destroying all of the benefits of online services in the meantime.

And that brings us to the second problem with Wu’s setup here. He then pulls out the “something must be done, this is something, we should do this” approach to solving the problem he’s already misstated. In particular, he suggests that Congress should support KOSA:

A bolder approach to protecting children online sought to require that social-media platforms be safer for children, similar to what we require of other products that children use. In 2022 the most important such bill was the Kids Online Safety Act (KOSA), co-sponsored by Senators Richard Blumenthal of Connecticut and Marsha Blackburn of Tennessee. KOSA came directly out of the Frances Haugen hearings in the summer of 2021, and particularly the revelation that social-media sites were serving content that promoted eating disorders, suicide, and substance abuse to teenagers. In an alarming demonstration, Blumenthal revealed that his office had created a test Instagram account for a 13-year-old girl, which was, within one day, served content promoting eating disorders. (Instagram has acknowledged that this is an ongoing issue on its site.)

The KOSA bill would have imposed a general duty on platforms to prevent and mitigate harms to children, specifically those stemming from self-harm, suicide, addictive behaviors, and eating disorders. It would have forced platforms to install safeguards to protect children and tools to enable parental supervision. In my view, the most important thing the bill would have done was simply force the platforms to spend more money and more ongoing attention on protecting children, or risk serious liability.

But KOSA became a casualty of the great American culture war. The law would give parents more control over what their children do and see online, which was enough for some groups to transform the whole thing into a fight over transgender issues. Some on the right, unhelpfully, argued that the law should be used to protect children from trans-related content. That triggered civil-rights groups, who took up the cause of teenage privacy and speech rights. A joint letter condemned KOSA for “enabl[ing] parental supervision of minors’ use of [platforms]” and “cutting off another vital avenue of access to information for vulnerable youth.”

It got ugly. I recall an angry meeting in which the Eating Disorders Coalition (in favor of the law) fought with LGBTQ groups (opposed to it) in what felt like a very dark Veep episode, except with real lives at stake. Critics like Evan Greer, a digital-rights advocate, charged that attorneys general in red states could attempt to use the law to target platforms as part of a broader agenda against trans rights. That risk is exaggerated. The bill’s list of harms is specific and discrete; it does not include, say, “learning about transgenderism,” but it does provide that “nothing shall be construed [to require a platform to prevent] any minor from deliberately and independently searching for, or specifically requesting, content.” Nonetheless, the charge had a powerful resonance and was widely disseminated.

This whole thing is quite incredible. KOSA did not become a “casualty of the great American culture war.” Wu, astoundingly, downplays that both the Heritage Foundation and the bill’s own Republican co-author, Marsha Blackburn, directly said that the bill would be helpful in censoring transgender information. For Wu to then claim that the bill’s own co-author is wrong about what her own bill does is remarkable.

He’s also wrong. While it is correct that the bill lists six designated categories of harmful information that must be blocked, he leaves out that it’s the state attorneys general who get to decide what falls within those categories. And if you’ve paid attention to anything over the last decade, you know that state AGs are inherently political, and are some of the most active “culture warriors” out there, quite willing to twist laws to their own interpretation to get headlines.

Also, even worse, as we’ve explained over and over again, laws like these that require “mitigation” of “harmful content” around things like “eating disorders” often fail to understand how that content works. As we’ve detailed, Instagram and Facebook made a big effort to block “eating disorder” content, and it backfired in a huge way. The issue wasn’t social media driving people to eating disorders, but people seeking out information on eating disorders (in other words, it was a demand-side problem, not a supply-side one).

So, when that content was removed, people with eating disorders still sought out the same content, and they still found it, either by using code words to get around the blocks or by moving to darker, even more problematic forums, run by people who were way worse. And one result was that those kids lost the genuinely useful forms of mitigation, which included people talking to them and helping them get the help they needed.

So here we have Tim Wu misunderstanding the problem and misunderstanding the solution he’s supporting (such that he’s backing an idea already shown to make the problem worse), and then asking why Congress isn’t doing anything. Really?

It doesn’t help that there has been no political accountability for the members of Congress who were happy to grandstand about children online and then do nothing. No one outside a tiny bubble knows that Wicker voted for KOSA in public but helped kill it in private, or that infighting between Cantwell and Pallone helped kill children’s privacy. I know this only because I had to for my job. The press loves to cover members of Congress yelling at tech executives. But its coverage of the killing of popular bills is rare to nonexistent, in part because Congress hides its tracks. Say what you want about the Supreme Court or the president, but at least their big decisions are directly attributable to the justices or the chief executive. Congressmen like Frank Pallone or Roger Wicker don’t want to be known as the men who killed Congress’s efforts to protect children online, so we rarely find out who actually fired the bullet.

Notice that Wu never considers that the bills might be bad and didn’t deserve to move forward?

But Wu has continued to push this, including on exTwitter, where he attacks those who have presented the many problematic aspects of KOSA, suggesting that they’re simply ignoring the harms. They’re not. Wu is ignoring the harms of what he supports.

Again, while I often disagree with Wu, I usually find he argues in good faith. The argument in this piece, though, is just utter nonsense. Do better.

Filed Under: children, congress, duty of care, eating disorders, kosa, privacy, social media, think of the children, tim wu

Ohio Lawmaker Wants To Criminally Charge Minors Who Watch Porn to Protect Minors. What?

from the that-logic-doesn’t-track dept

In the latest chapter of my ongoing coverage of the crazy escapades of anti-porn Republicans for Techdirt, I wish to introduce you to Ohio state Rep. Steve Demetriou, who represents Bainbridge Township.

Rep. Demetriou introduced the Innocence Act, or House Bill (HB) 295, on October 11, 2023.

I wrote about the bill over at AVN.com and for the Cleveland Scene. Gustavo Turner of XBIZ also covered House Bill 295. Cleveland.com provided us with some local coverage of the bill.

Rep. Demetriou’s bill is the latest proposal by an anti-porn lawmaker intent on requiring age verification to access adult entertainment websites like Pornhub, Xvideos, and xHamster.

HB 295 features the same elements as the other so-called “copycat” age verification proposals inspired by Louisiana, which became the first state in the United States to require age verification for users connecting from local IP addresses before they can see adult content. The copycat bills have escalated in severity, with Utah and Texas being two of the more severe cases. But it is a safe bet that Demetriou’s version takes the cake as the most severe age-gating bill yet.

According to the introduced bill text, House Bill 295 makes it a crime — a felony — for a website to fail to deploy age verification measures to check the ages of users from Ohio IP addresses.

Demetriou also proposes to make it a crime — a misdemeanor — for anyone who manages to get around an age-gate on a website through, say, a VPN or proxy. He explicitly mentions minors.
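To see why criminalizing circumvention is such an odd fit, it helps to remember how IP-based gating actually works: the site only ever sees the connecting address, and a VPN or proxy swaps that address out entirely. Here is a minimal, hypothetical sketch in Python (the lookup table stands in for a real GeoIP database; the addresses come from reserved documentation ranges and are purely illustrative):

```python
# Hypothetical sketch: an IP-based age gate, and why a VPN defeats it.
# GEO_DB stands in for a real GeoIP database; nothing here comes from
# the bill itself.
GEO_DB = {
    "203.0.113.7": "Ohio",          # subscriber's home connection
    "198.51.100.9": "Netherlands",  # VPN exit node
}

def must_age_gate(client_ip: str) -> bool:
    # The site only ever sees the connecting IP address.
    return GEO_DB.get(client_ip) == "Ohio"

print(must_age_gate("203.0.113.7"))   # True: the gate applies
print(must_age_gate("198.51.100.9"))  # False: the same user, via a VPN
```

Nothing in that exchange tells the site who the user is or how old they are, so the proposed misdemeanor would consist, in practice, of a teenager’s packets taking a different route.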

A press release announcing the bill states:

If this legislation is enacted, pornography distributors would be charged with a third-degree felony for failing to verify the age of a person accessing the adult content. If a minor attempts to access sexually explicit material by falsifying their identity, they would be charged with a fourth-degree misdemeanor.

Language in House Bill 295 confirms this:

Whoever violates…this section is guilty of failure to verify age of person accessing materials that are obscene or harmful to juveniles, a felony of the third degree.

Whoever violates…this section is guilty of use of false identifying information to access materials that are obscene or harmful to juveniles, a misdemeanor of the fourth degree.

Demetriou told Cleveland.com, the official web platform for The Plain Dealer newspaper, that this is a “common sense” approach to ensuring minors don’t circumvent an age gate. “Obviously, we’re not trying to target children with regards to criminal enforcement… but we want to make sure they’re protected,” Rep. Demetriou told Cleveland.com reporter Jeremy Pelzer. Demetriou said that the proposed criminal penalty targeting minors is a “deterrent,” rather than a law that could compel prosecutors to pursue criminal charges against teenagers for being teenagers.

House Bill 295 was referred to the House Criminal Justice Committee and is awaiting markup. Rep. Demetriou did tell Cleveland.com that he is open to cleaning up the “kinks in this bill.”

For the Cleveland Scene, criminal defense attorney Corey Silverstein told me the bill is, obviously, a bad idea.

“I can’t think of a worse idea than charging minors with criminal offenses for viewing adult content and potentially ruining their futures,” he told me in my Scene report. “Attempting to shame and embarrass minors for viewing adult-themed content goes so far beyond common sense that it begs the question of whether the supporters of this bill gave it any thought at all.”

Civil liberties organizations are already alarmed at the potential implications of age verification laws in other parts of the country. For example, the American Civil Liberties Union and others filed an amicus brief at the Fifth Circuit Court of Appeals supporting the plaintiffs in Free Speech Coalition v. Colmenero. Texas adopted an age verification law that also requires pseudoscientific public health labeling on adult websites.

404 Media’s Sam Cole pointed this out, with Vixen Media, a premium network of paysites, sharing the so-called public health messaging displayed to Texas users. The Free Speech Coalition, an advocacy group for the adult industry, sued Texas alongside companies that own some of the most popular adult entertainment websites in the world. The ACLU says the Texas law overwhelmingly violates the First Amendment rights of adult sites and their users.

Attorney General Ken Paxton, having survived his impeachment, has been substituted for then-interim Attorney General Angela Colmenero as the named defendant. That case is now Free Speech Coalition et al. v. Paxton.

Michael McGrady covers the tech and legal sides of the online porn business, among other things. He is the legal and political contributing editor for AVN.com.

Filed Under: 1st amendment, age verification, criminalizing porn, hb 295, ohio, protect the children, steve demetriou, think of the children

Senator Brian Schatz Joins The Moral Panic With Unconstitutional Age Verification Bill

from the oh-stop-it dept

Senator Brian Schatz is one of the more thoughtful senators we have, and he and his staff have actually spent time talking to lots of experts in trying to craft bills regarding the internet. Unfortunately, he still falls under the seductive sway of this or that moral panic, so when the bills actually come out, they’re perhaps more thoughtfully done than the moral panic bills of his colleagues, but they’re still destructive.

His latest is… just bad. It appears to be modeled on a bunch of the age verification moral panic bills that we’ve seen in both red states and blue states, though Schatz’s bill is much closer to the red state version, which is much more paternalistic and nonsensical.

His latest bill, with the obligatory “bipartisan” co-sponsor, Tom Cotton, the man who wanted to send our military into US cities to stop protests, is the “Protecting Kids On Social Media Act of 2023.”

You can read the full bill yourself, but the key parts are that it would push companies to use privacy intrusive age verification technologies, ban kids under 13 from using much of the internet, and give parents way more control and access to their kids’ internet usage.

Schatz tries to get around the obvious pitfalls with this… by basically handwaving them away. As even the French government has pointed out, there is no way to do age verification without violating privacy. There just isn’t. French data protection officials reviewed all the possibilities and said that literally none of them respect people’s privacy, and on top of that, it’s not clear that any of them are even that effective at age verification.

Schatz’s bill handwaves this away by basically saying “do age verification, but don’t do it in a way that violates privacy.” It’s like saying “jump out of a plane without a parachute, but just don’t die.” You’re asking the impossible.

I mean, clauses like this sound nice:

Nothing in this section shall be construed to require a social media platform to require users to provide government-issued identification for age verification.

But the fact that this was included kinda gives away the game: basically every age verification system has to rely on government-issued ID.

Similarly, it says that while sites should do age verification, they’re restricted from keeping any of the information collected as part of the process. But again, that raises all sorts of questions as to HOW you do that. This is “keep track of the location of this car, but don’t track where it is.” I mean… this is just wishful thinking.

The parental consent part is perhaps the most frustrating, and is a staple of the GOP state bills we’ve seen:

A social media platform shall take reasonable steps beyond merely requiring attestation, taking into account current parent or guardian relationship verification technologies and documentation, to require the affirmative consent of a parent or guardian to create an account for any individual who the social media platform knows or reasonably believes to be a minor according to the age verification process used by the platform.

Again, this is marginally better than the GOP bills in that it acknowledges sites need to “take into account” the current relationship, but that still leaves things open to mischief, especially as a “minor” in the bill is defined as anyone between the ages of 13 and 18, a period in which teens are discovering their own identities, a process that often conflicts with their parents.

So, an LGBTQ child in a strict religious household with parents who refuse to accept their teens’ identity can block their kids entirely from joining certain online communities. That seems… really bad? And pretty clearly unconstitutional, because kids have rights too.

There’s also a prohibition on “algorithmic recommendation systems” for teens under 18. Of course, the bill ignores that reverse chronological order… is also an algorithm. So, effectively the law requires RANDOM content be shown to teens.
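To make that concrete, here is a minimal sketch (in Python, with a hypothetical Post type; none of this comes from the bill text) of what a feed looks like with and without a ranking rule. Sorting by timestamp is itself an algorithm; the only feed with no ranking rule at all is a shuffled one:

```python
import random
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime

def reverse_chronological_feed(posts: list[Post]) -> list[Post]:
    # "Reverse chronological" is itself an algorithmic
    # recommendation: a sort criterion deciding what the
    # user sees first.
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)

def no_ranking_feed(posts: list[Post]) -> list[Post]:
    # The only feed with no ranking rule at all: random order.
    shuffled = posts.copy()
    random.shuffle(shuffled)
    return shuffled
```

Read literally, only the second function would be permissible for teens, which is presumably not what anyone intends.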

It also ignores that algorithms are useful in filtering out the kind of information that is inappropriate for kids. I get that there’s this weird, irrational hatred for the dreaded algorithms these days, but most algorithms are… actually helpful in presenting appropriate content to both kids and adults. Removing them doesn’t seem helpful. It actually seems guaranteed to expose kids to even worse stuff, since platforms can no longer algorithmically remove the inappropriate content.

Why would they want to do that?

Finally, the bill creates a “pilot program” for the Commerce Department to establish an official age verification program. While they frame this as being voluntary, come on. If you’re running a social media site and you’re required to do age verification under this bill (or other bills), are you going to use some random age verification offering out there, or the program set up by the federal government? Of course you’re going to go with the federal government’s. That way, if you ever get in trouble, you just say “well, we were using the program the government came up with, so we shouldn’t face any liability for its failures.”

Just like the “verify but don’t violate people’s privacy” is handwaving, so is the “this pilot program is voluntary.”

This really is frustrating. Schatz has always seemed much more reasonable and open-minded about this stuff, and it’s sad to see him fall prey to the moral panic about kids and the internet, even as the evidence suggests it’s mostly bullshit. I prefer senators who legislate based on reality, not panic.

Filed Under: age verification, algorithmic recommendations, brian schatz, kids and social media, moral panic, parental consent, think of the children, tom cotton

‘Lovejoy’s Law’ And Tech Moral Panics

from the helen-lovejoy-joy-killing dept

One of the central arguments for a recent rash of age verification laws across the country is to “protect the children.” Utah Gov. Spencer Cox called his signing of controversial social media laws a means of “protecting our kids from the harms of social media.” Arkansas Gov. Sarah Huckabee Sanders said in a press conference that her signing of the so-called Social Media Safety Act will help prevent the “massive negative impact on our kids.” Once he entered office, Sen. Josh Hawley said that loot boxes in popular video games placed “a casino in the hands of every child in America.” Louisiana State Rep. Laurie Schlegel called her unconstitutional bill requiring age verification to access pornography in the state a measure to counter how “pornography is destroying our children.” It all sounds the same.

“Won’t somebody please think of the children!” Do you know who said that? The one and only Helen Lovejoy, who utters the line quite often in the fictional animated universe of The Simpsons. If we recall our elementary school ‘Simpsons’-ology, Helen is the morally crusading wife of the town reverend, seeking to make the world a better place for the children. Or, at least, a better place based on her worldview. Anything that pops Helen’s bubble of an ideal society for the small radioactive burg of Springfield, USA, is nothing more than a threat to the town’s morality. Helen leads grassroots campaigns to demonize and ban the things threatening her ideal little bubble.

Some call this ‘Lovejoy’s law,’ further parodying the over-the-top caricature of a socially conservative moralist who believes that everything they disagree with is either the work of Satan or of woke leftists. In criminology and sociology, this sort of person is referred to as a moral entrepreneur. Moral entrepreneurs take the lead in defining and labeling a particular behavior or belief and spreading that label through society at large. They lead in constructing what they consider criminally deviant or socially unacceptable behavior, and they organize at the grassroots level, like Mrs. Lovejoy, to establish and enforce rules against that behavior. These fine folks perpetrate moral panic. And moral panic isn’t just a weird trope used by politicians and the punditry.

It’s a legitimate social phenomenon modeled by the world-renowned criminologist Stanley Cohen. Cohen described a sequence of stages to understand the role of “folk devils”: societal outsiders, as labeled by the moral entrepreneurs who wish to do away with the class of individuals engaging in the identified behavior. Between the mass media, the moral entrepreneurs, a social control culture, and the general public, a moral panic can progress based on misinformed, disinformed, or outright false claims about the targets of the panic – the folk devils. Individuals like Spencer Cox, Sarah Huckabee Sanders, Josh Hawley, Laurie Schlegel – even the fictional Helen Lovejoy – qualify as moral entrepreneurs attempting to take down their targeted folk devils in big tech, legal porn, and free speech proponents advocating for speech they disagree with.

All of these individuals have axes to grind against the folk devils, and they grind them by proposing, passing, and implementing laws that sweep far more broadly than necessary while being likely to have very little effect on resolving the crises they have identified. This is standard for moral entrepreneurs. And this isn’t the first time technology – social media, age verification, cell phones, video games, online legal pornography, and excessive internet use, for example – has seen moral panic lead to public policies and socio-legal remedies that rise to the level of restricting basic civil liberties.

Let’s consider some brief historical examples of governmental responses to technological moral panic. The office of the U.S. Surgeon General released an evidence report in 1972 in response to concerns that televised violence was adverse to the public health of youth. The actual report, however, found that violent television doesn’t have an adverse effect on the vast majority of youth in the country, but may influence very small groups of youth who are predisposed to aggression or are already aggressive.

But these groups are also influenced by a plethora of external and internal factors. Critics of the report attempted to use the Surgeon General’s findings as further evidence that violent television negatively impacts youth, despite the fact that the peer-reviewed literature of the time said this risk applied only to the very small portion of youth with the predispositions noted above.

Decades later, concern over violence in video games also rose from moral panic. The U.S. Supreme Court, of all institutions, responded to the political and legislative pressures to censor violent video games that began in the 1990s by declaring that there is no clear connection between violence in real life settings and the playing of video games with violent depictions. In fact, the American Psychological Association issued a policy statement telling news outlets that they “should avoid stating explicitly or implicitly that criminal offenses were caused by violent media,” such as violent video games. Some studies have even correlated a reduction in violent crime with the rise of violent video gaming.

Internet pornography has a similar history. It is one of the longest-persisting moral panics, and the development of the web has only made the panic more prominent. Whether we discuss the moral panic over online porn that led to the Communications Decency Act of 1996 or the current attempts to restrict pornography “in the name of the children” at the state level, the moral entrepreneurs have the same beliefs guiding their motivation to eliminate the folk devil, porn. They say that pornography itself is addictive. Or that pornography is somehow correlated with sex trafficking. Or that pornography leads to increased instances of sexual violence and sexually related criminal offenses. But, as was the case for violence on television and in video games decades before, the opposite is true. Pornography addiction isn’t recognized by mainstream psychiatry. The incidence of sexual violence is much lower in jurisdictions where legally produced pornography is widely available. Online porn is regulated, and there is little to no evidence to suggest that legally produced pornography is linked to trafficking.

All of these moral panics have led to some sort of political, legislative, or legal response where the moral entrepreneurs have lobbied their elected officials to push for policies that erode civil liberties and rights for people who are otherwise law-abiding, tax-paying, and productive members of society. Researchers Patrick M. Markey and Christopher J. Ferguson wrote on this issue for the American Journal of Play.

“Unfortunately, moral panics can be damaging,” Markey and Ferguson argue, adding that moral panics “can greatly damage the lives of individuals caught up in them.” Though they are writing about the panics over violent video games, the commonality of the statements is clear regardless of the actual issue.

Markey and Ferguson also point out that researchers and organizations with a particular special interest or agenda have used moral panics to conduct ethically and scientifically questionable research that just inflames the public’s fear even more. We’ve seen this with bogus studies on so-called porn addiction, internet addiction, and much more. Now we are starting to see it with “social media addiction.”

I recently wrote a column for the Salt Lake Tribune criticizing Utah’s social media bills. In the column, I discuss the claims that Gov. Cox made with regards to social media’s harms against minors.

Cox said that his office will conduct “research” into the harms of social media use among minors. In the tradition of the great Helen Lovejoy, the socially conservative governor endorsed legislation that restricts access for minors based on a body of misguided and erroneous evidence. It’s this type of flawed research that gives moral entrepreneurs a supposed academic façade, further demonizing and damaging the rights, welfare, and general wellbeing of the folk devils. Even when the folk devils are technology companies, there is this thing called the law of unintended consequences. Age-restriction laws on mainstream social media platforms can rightly be regarded as infringements on the First Amendment rights of users of all ages. Social media regulations on age will harm modern socialization norms for youth.

Despite what the Helen Lovejoys of the world think, folk devils – whether tech companies or individuals – have rights. Restricting those rights through moral-panic-driven lawmaking is unethical.

Michael McGrady is a journalist and commentator focusing on the tech side of the online porn business, among other things.

Filed Under: age verification, josh hawley, laurie schlegel, lovejoy's law, moral panics, sarah huckabee sanders, spencer cox, think of the children

UK Child Welfare Charity Latest To Claim Encryption Does Nothing But Protect Criminals

from the recoil-in-terror-as-mustachioed-Mr.-Encryption-ties-another-child-to-the-train-tracks dept

Once again, it’s time to end encryption… for the children. That’s the message being put out by the UK’s National Society for the Prevention of Cruelty to Children (NSPCC). And that message is largely regurgitated word-for-word by Sky News:

In what it is calling the “biggest threat to children online”, the NSPCC (National Society for the Prevention of Cruelty to Children) says safeguards need to be introduced so police can access the data if needed.

“The proposals to extend end-to-end encryption on messaging platforms mean they are essentially putting a blindfold on itself” says Andy Burrows, head of the NSPCC’s child safety online policy.

“No longer will the social network be able to identify child abuse images that are being shared on its site, or grooming that’s taking place on its site.

“Because abusers know they will be able to operate with impunity, therefore it means that not only will current levels of abuse go largely undetected, but it’s highly likely that we’ll see more child abuse.”

Is it the “biggest threat?” That doesn’t seem likely, especially when others who are concerned about the welfare of children say encryption is actually good for kids.

Here’s ConnectSafely, a nonprofit headed by Larry Magid, who is on the board of the National Center for Missing and Exploited Children (NCMEC), which operates a clearinghouse for child porn images that helps law enforcement track down violators and rescue exploited children:

Some worry that encryption will make it harder for law enforcement to keep us safe, but I worry that a lack of encryption will make it harder for everyone, including children, to stay safe.

Phones and other digital devices can contain a great deal of personal information, including your current and previous locations, home address, your contacts, records of your calls and your texts, email messages and web searches. Such information, in the hands of a criminal, can not only lead to a violation of you or your child’s privacy, but safety as well. That’s why it’s important to have a strong passcode on your phone as well as a strong password on any cloud backup services… But even devices with strong passwords aren’t necessarily hacker proof, which is why it’s important that they be encrypted.

And here’s UNICEF, which has long been involved with protecting children around the world:

There is no equivocating that child sexual abuse can and is facilitated by the internet and that end-to-end encryption of digital communication platforms appears to have significant drawbacks for the global effort to end the sexual abuse and exploitation of children. This includes making it more difficult to identify, investigate and prosecute such offences. Children have a right to be protected from sexual abuse and exploitation wherever it occurs, including online, and states have a duty to take steps to ensure effective protection and an effective response, including support to recover and justice.

At the same time, end-to-end encryption by default on Facebook Messenger and other digital communication platforms means that every single person, whether child or adult, will be provided with a technological shield against violations of their right to privacy and freedom of expression.

Despite this being far more nuanced than the NSPCC is willing to admit, it’s helping push legislation in the UK which would result in less child safety, rather than more. The Online Safety Bill would place burdens on communication services to prove they’re making every effort to prevent child exploitation. And “every effort” means stripping all users of end-to-end encryption, because this protective measure means no one but the sender and receiver can see their communications.
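That last bit is worth making concrete. With end-to-end encryption, the service in the middle relays only ciphertext it cannot read, so there is nothing for it to inspect in the first place. Here is a minimal sketch using the PyNaCl library (the names and message are purely illustrative, not any platform’s actual implementation):

```python
# Minimal sketch of end-to-end encryption using PyNaCl (libsodium
# bindings). Names and the message are illustrative only.
from nacl.public import PrivateKey, Box

# Each endpoint generates its own keypair; the private keys never
# leave the users' devices.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts to Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# The platform relays only this opaque blob.
print(ciphertext.hex())

# Only Bob, holding his private key, can decrypt it.
receiving_box = Box(bob_key, alice_key.public_key)
print(receiving_box.decrypt(ciphertext))  # b'meet at noon'
```

The “blindfold” the NSPCC complains about is simply this: the intermediary holds bytes it cannot decrypt, which is also exactly what keeps those bytes safe from criminals.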

The bill would make tech companies responsible for “failing” to police child porn — with “failure” determined by “aggravating factors” like, you guessed it, offering end-to-end encrypted communications.

The following questions will help ascertain key risk aggravating factors (this is not an exhaustive list). Do your services:

This legislation — and the cheerleading from entities like the NSPCC — doesn’t really do anything but turn tech companies into handy villains that are far easier (and far more lucrative) to punish. There’s not going to be an influx of child porn just because communications are now encrypted. As critics of encryption have pointed out again and again, Facebook reports thousands of illegal images a year. So, a lack of encryption wasn’t preventing the distribution of illicit images. Adding encryption to the mix is unlikely to change anything but how much is reported by Facebook.

We all use encryption. (I mean, hopefully.) Having access to encrypted communications hasn’t nudged most people into engaging in criminal activity. That the same tech that protects innocent people is utilized by criminals doesn’t make the tech inherently evil. It’s the people who are evil.

As for law enforcement, it will still have plenty of options. Plenty of images will still be detected because lots of people are lazy or ignorant and use whatever’s convenient, rather than what’s actually secure. Encryption won’t exponentially increase the amount of illicit content circulating the internet. If the FBI (and others) can successfully seize and operate dark web child porn sites, it’s safe to say law enforcement will still find ways to arrest suspects and rescue children, even if encryption may make it slightly more difficult to do so.

Filed Under: children's charity, encryption, going dark, think of the children, uk
Companies: nspcc