for the children – Techdirt

Age Verification Laws Are Just A Path Towards A Full Ban On Porn, Proponent Admits

from the outing-themselves-as-the-censors-they-want-to-be dept

It’s never about the children. Supporters of age verification laws, book bans, drag show bans, and abortion bans always claim they’re doing these things to protect children. But it’s always just about themselves. They want to impose their morality on other adults. That’s all there is to it.

Abortion bans are just a way to strip women of bodily autonomy. If it was really about cherishing children and new lives, these same legislators wouldn’t be routinely stripping school lunch programs of funding, introducing onerous means testing to government aid programs, and generally treating children as a presumptive drain on society.

The same goes for book bans. They claim they want to prevent children from accessing inappropriate material. But you can only prevent children from accessing it by removing it entirely from public libraries, which means even adults will no longer be able to read these books.

The laws targeting drag shows aren’t about children. They’re about punishing certain people for being the way they are — people whose mere existence seems to be considered wholly unacceptable by bigots with far too much power.

The slew of age verification laws introduced in recent years are being shot down by courts almost as swiftly as they’re enacted. And for good reason. Age verification laws are unconstitutional. And they’re certainly not being enacted to prevent children from accessing porn.

Of course, none of the people pushing this kind of legislation will ever openly admit their reasons for doing so. But they will admit it to people they think are like-minded. All it takes is a tiny bit of subterfuge to tease these admissions out of activist groups that want to control what content adults have access to — something that’s barely hidden by their “for the children” facade.

As Shawn Musgrave reports for The Intercept, a couple of people managed to coax this admission out of a former Trump official simply by pretending they were there to give his pet project a bunch of cash.

“I actually never talk about our porn agenda,” said Russell Vought, a former top Trump administration official, in late July. Vought was chatting with two men he thought were potential donors to his right-wing think tank, the Center for Renewing America.

For the last three years, Vought and the CRA have been pushing laws that require porn websites to verify their visitors are not minors, on the argument that children need to be protected from smut. Dozens of states have enacted or considered these “age verification laws,” many of them modeled on the CRA’s proposals.

[…]

But in a wide-ranging, covertly recorded conversation with two undercover operatives — a paid actor and a reporter for the British journalism nonprofit Centre for Climate Reporting — Vought let them in on a thinly veiled secret: These age verification laws are a pretext for restricting access to porn more broadly.

“Thinly veiled” is right. While it’s somewhat amusing that Vought was taken in so easily and was immediately willing to say the quiet part out loud when he thought cash was on the line, he’s made his antipathy towards porn exceedingly clear. As Musgrave notes in his article, Vought’s contribution to Project 2025 — a right-wing masturbatory fantasy masquerading as policy proposals should Trump take office again — almost immediately veers into the sort of territory normally only explored by dictators and autocrats who relied heavily on domestic surveillance, forced labor camps, and torture to rein in those who disagreed with their moral stances.

Pornography, manifested today in the omnipresent propagation of transgender ideology and sexualization of children, for instance, is not a political Gordian knot inextricably binding up disparate claims about free speech, property rights, sexual liberation, and child welfare. It has no claim to First Amendment protection. Its purveyors are child predators and misogynistic exploiters of women. Their product is as addictive as any illicit drug and as psychologically destructive as any crime. Pornography should be outlawed. The people who produce and distribute it should be imprisoned. Educators and public librarians who purvey it should be classed as registered sex offenders. And telecommunications and technology firms that facilitate its spread should be shuttered.

Perhaps the most surprising part of this paragraph (and, indeed, a lot of Vought’s contribution to Project 2025) is that it isn’t written in all caps with a “follow me on xTwitter” link attached. These are not the words of a hinged person. They are the opposite — the ravings of a man in desperate need of a competent re-hinging service.

And he’s wrong about everything in this paragraph, especially his assertion that pornography is not a First Amendment issue. It is. That’s why so many of these laws are getting rejected by federal courts. The rest is hyperbole that pretends it’s just bold, common sense assertion. I would like to hear more about the epidemic of porn overdoses that’s leaving children parentless and overloading our health system. And who can forget the recent killing sprees of the Sinaloa Porn Cartel, which has led to federal intervention from the Mexican government?

But the most horrifying part is Vought’s desire to imprison people for producing porn and to classify librarians as registered sex offenders just because their libraries carry some content that personally offends his sensibilities.

These are the words and actions of people who strongly support fascism so long as they’re part of the ruling party. They don’t care about kids, America, democracy, or the Constitution. They want a nation of followers and the power to punish anyone who steps out of line. The Center for Renewing America is only one of several groups with the same ideology and the same censorial urges. These are dangerous people, but their ideas and policy proposals are now so common it’s almost impossible to classify them as “extremist.” There are a lot of Americans who would rather see the nation destroyed than have to, at minimum, tolerate people and ideas they don’t personally like. Their ugliness needs to be dragged out into the open as often as possible, if only to force them to confront the things they’ve actually said and done.

Filed Under: 1st amendment, age verification, censorship, for the children, free speech, porn ban, project 2025, russell vought
Companies: center for renewing america

New Jersey Is The Latest To Push A Harmful Moral Panic ‘Think Of The Kids’ Social Media Bill

from the stop-falling-for-stupid-moral-panics dept

It seems like the only “bipartisan” support around regulations and the internet these days is… over the false, widely debunked moral panic that the internet is inherently harmful to children. Study after study has said it’s simply not true. Here’s the latest list (and I have one more to write up soon):

And yet, if you talk to politicians or the media, they insist that there’s overwhelming evidence showing the opposite, that social media is inherently dangerous to children.

The latest to fall for this false moral panic is the powerful Herb Conaway, a New Jersey state representative who has been in the New Jersey Assembly since 1997. He has a bunch of moral-panic-related quotes. He’s claimed that the mental health epidemic among children “can be laid at the feet of social media” (despite all the studies saying otherwise). He also has claimed (again, contrary to the actual evidence) that social media “really has been horrific on the mental health and the physical health of our young people, particularly teenagers and particularly young girls.”

This is not, in fact, what the evidence shows. But it is how the moral panic has been passed around.

And so, the greatly misinformed Assemblymember has successfully been pushing Bill A5750, which requires age verification and parental consent for use of any social media platform with 5 million or more accounts worldwide. It has just passed out of committee and has a very real chance of becoming law in New Jersey (until a federal court throws it out as unconstitutional — but we’ll get there).

Before we get to the legal problems with the bill, let’s talk about the fundamental problems.

Age verification is a privacy nightmare. This has been explained multiple times in great detail. There is no way to do age verification without putting everyone’s privacy at great risk. You don’t have to take my word for it: the French data protection agency CNIL studied every available age verification method and found that each one was both unreliable and a violation of privacy rights.

Why would Assemblymember Conaway want to put his constituents’ privacy at risk?

Age verification only works by requiring someone to collect sensitive private data, and then hoping they can keep it safe. That’s… bad?

Next, parental verification is crazy dangerous. It can make sense in perfectly happy homes with parents who have a good relationship with their children but, tragically, that is not all homes. And if you have situations where (for example) there is an LGBTQ child in a home where the parents cannot accept their child’s identity, imagine how well that will go over.

And that’s especially true at a time when we’re seeing social media operations being created to specifically cater to marginalized groups. For example, the Trevor Project, the wonderful non-profit that helps LGBTQ youth, has its own social media network for those kids. Can you imagine how well that would work if parents of those kids had to grant permission before their kids could make use of that site?

This law would put the most marginalized kids in society at much greater risk and cut them off from the communities and services that have been repeatedly found to help them the most.

Why?

Because of a moral panic that is not backed by the actual evidence.

The fact that this bill applies to any social media with greater than 5 million accounts means it would sweep in tons of smaller sites. Note that it’s not even active accounts or active monthly users. And it’s not just accounts in New Jersey. It’s 5 million global accounts. There are many sites that would qualify that simply could never afford to put in place age verification or parental controls, and thus the only answer will be to cut off New Jersey entirely.

So, again, the end result is cutting off marginalized and at-risk kids from the services that have repeatedly been found to be helpful.

On the legal front, these provisions are also quite clearly unconstitutional, and have been found by multiple courts to be so. Just in the past few months federal courts have rejected an Arkansas age verification bill and a California one. Neither result was surprising, as these issues had been litigated before the Supreme Court decades ago.

The parental controls mandate is equally unconstitutional. In Brown v. EMA the Supreme Court noted that the 1st Amendment does not allow for the government “to prevent children from hearing or saying anything without their parents’ prior consent.” Children have 1st Amendment rights as well, and while they are somewhat more limited than for adults, the courts have found repeatedly that children have the right to access 1st Amendment-protected speech, and to do so without parental consent.

And, in cases like this, it’s even worse than in Brown, which was about a failed attempt by California to restrict access to violent video games. Here, the New Jersey bill attempts to limit access to all social media, not just specifically designated problematic ones. So it’s an even broader attack on the 1st Amendment rights of children than Brown was.

So, in the end we have a terribly drafted bill that will sweep in a ton of companies, even ones with limited presence in New Jersey, ordering them to invest in expensive and faulty features that have already been shown to put private info at risk, while doing so in a way that has also been shown to put the most marginalized and at-risk children at much greater risk. And all of this has already been found to be unconstitutional.

All based on a moral panic that has been widely debunked by research.

Yet the bill is sailing through the New Jersey legislature, and almost guarantees that the state of New Jersey is going to have to spend millions in taxpayer funds to defend this law in court, only to be told exactly what I’m telling them for free.

This is a bad, dangerous, unconstitutional bill.

Filed Under: a5750, age verification, for the children, herb conaway, kids, new jersey, parental consent, protect the children, social media

In One Lawsuit, Louisiana & Missouri Say Gov’t Can Never Pressure Websites To Change; In Another, They’re Looking To Pressure Websites To Change

from the a-study-in-contrasts dept

We’ve spent plenty of time over the last year or so on Missouri and Louisiana’s lawsuit against the Biden administration for apparently suggesting how sites like Meta should moderate content on their platforms. That case has had its twists and turns and is now going before the Supreme Court. I’m sure we’ll have plenty more to say on that case shortly, but last week we also saw the lawsuit in which 33 states sued Meta over what the lawsuit claims are Meta’s failures to keep kids from using the platform.

Two of the states that signed on were… Missouri and Louisiana.


So… I’m curious if there’s any way to square these two lawsuits. Because as far as I can tell, the argument is that the government should never, ever even say anything that will pressure a website to change how it handles content on its website.

But also…

It’s perfectly fine for the government to use the judicial system to… force a website to handle content in the manner that the state feels is best.

Of course, the reality is that it doesn’t matter one bit that the two lawsuits are wholly inconsistent. This has always been about culture wars and headlines. The earlier case is about the Attorneys General in Louisiana and Missouri scoring culture war points with the dumber segments of their voting bases, while the Meta lawsuit is about scoring techlash culture points with angry parents and teachers, who would rather blame Meta than their own failings as parents and teachers.

But, really, it seems like reporters who are covering those two AGs might want to ask them directly how they can have both of these lawsuits going on at the same time. Can the government tell websites that host 3rd party speech how to operate or not?

Filed Under: andrew bailey, biden administration, for the children, jeff landry, louisiana, missouri
Companies: meta

Google Decides To Pull Up The Ladder On The Open Internet, Pushes For Unconstitutional Regulatory Proposals

from the not-cool dept

It’s pretty much the way of the world: beyond the basic enshittification story that has been so well told over the past year or so about how companies get worse and worse as they get more and more powerful, there’s also the well known concept of successful innovative companies “pulling up the ladder” behind them, using the regulatory process to make it impossible for other companies to follow their own path to success. We’ve talked about this in the sense of political entrepreneurship, which is when the main entrepreneurial effort is not to innovate in newer and better products for customers, but rather to use the political system for personal gain and to prevent competitors from having the same opportunities.

It happens all too frequently. And it’s been happening lately with the big internet companies, which relied on the open internet to become successful but, under massive pressure from regulators (and the media), keep shooting the open internet in the back whenever they can present themselves as “supportive” of some dumb regulatory regime. Facebook did it six years ago by supporting FOSTA wholeheartedly, which was the key tide shift that made the law viable in Congress.

And, now, it appears that Google is going down that same path. There have been hints here and there, such as when it mostly gave up the fight on net neutrality six years ago. However, Google had still appeared to be active in various fights to protect an open internet.

But, last week, Google took a big step towards pulling up the open internet ladder behind it, which got almost no coverage (and what coverage it got was misleading). And, for the life of me, I don’t understand why it chose to do this now. It’s one of the dumbest policy moves I’ve seen Google make in ages, and seems like a complete unforced error.

Last Monday, Google announced “a policy framework to protect children and teens online,” which was echoed by its subsidiary YouTube, which posted basically the same thing, talking about its “principled approach for children and teenagers.” Both of these pushed not just a “principled approach” for companies to take, but a legislative model (and I hear that they’re out pushing “model bills” across legislatures as well).

The “legislative” model is, effectively, California’s Age Appropriate Design Code. Yes, the very law that was declared unconstitutional just a few weeks before Google basically threw its weight behind the approach. What’s funny is that many, many people have (incorrectly) believed that Google was some sort of legal mastermind behind the NetChoice lawsuits challenging California’s law and other similar laws, when the reality appears to be that Google knows full well that it can handle the requirements of the law, but smaller competitors cannot. Google likes the law. It wants more of them, apparently.

The model includes “age assurance” (which is effectively age verification, though everyone pretends it’s not), greater parental surveillance, and the compliance nightmare of “impact assessments” (we talked about this nonsense in relation to the California law). Again, for many companies this is a good idea. But just because something is a good idea for companies to do does not mean that it should be mandated by law.

But that’s exactly what Google is pushing for here, even as a law that more or less mimics its framework was just found to be unconstitutional. While cynical people will say that maybe Google is supporting these policies hoping that they will continue to be found unconstitutional, I see little evidence to support that. Instead, it really sounds like Google is fully onboard with these kinds of duty of care regulations that will harm smaller competitors, but which Google can handle just fine.

It’s pulling up the ladder behind it.

And yet, the press coverage of this focused on the fact that it was being presented as an “alternative” to a full-on ban on social media for kids under 18. The Verge framed this as “Google asks Congress not to ban teens from social media,” leaving out that it was Google asking Congress to basically make it impossible for any site other than the largest, richest companies to be able to allow teens on social media. Same thing with TechCrunch, which framed it as Google lobbying against age verification.

But… it’s not? It’s basically lobbying for age verification, just in the guise of “age assurance,” which is effectively “age verification, but if you’re a smaller company you can get it wrong some undefined amount of time, until someone sues you.” I mean, what’s here is not “lobbying against age verification,” it’s basically saying “here’s how to require age verification.”

A good understanding of user age can help online services offer age-appropriate experiences. That said, any method to determine the age of users across services comes with tradeoffs, such as intruding on privacy interests, requiring more data collection and use, or restricting adult users’ access to important information and services. Where required, age assurance – which can range from declaration to inference and verification – should be risk-based, preserving users’ access to information and services, and respecting their privacy. Where legislation mandates age assurance, it should do so through a workable, interoperable standard that preserves the potential for anonymous or pseudonymous experiences. It should avoid requiring collection or processing of additional personal information, treating all users like children, or impinging on the ability of adults to access information. More data-intrusive methods (such as verification with “hard identifiers” like government IDs) should be limited to high-risk services (e.g., alcohol, gambling, or pornography) or age correction. Moreover, age assurance requirements should permit online services to explore and adapt to improved technological approaches. In particular, requirements should enable new, privacy-protective ways to ensure users are at least the required age before engaging in certain activities. Finally, because age assurance technologies are novel, imperfect, and evolving, requirements should provide reasonable protection from liability for good-faith efforts to develop and implement improved solutions in this space.

Much like Facebook caving on FOSTA, this is Google caving on age verification and other “duty of care” approaches to regulating the way kids have access to the internet. It’s pulling up the ladder behind itself, knowing that it was able to grow without having to take these steps, and making sure that none of the up-and-coming challenges to Google’s position will have the same freedom to do so.

And, for what? So that Google can go to regulators and say “look, we’re not against regulations, here’s our framework”? But Google has smart policy people. They have to know how this plays out in reality. Just as with FOSTA, it completely backfired on Facebook (and the open internet). This approach will do the same.

Not only will these laws inevitably be used against the companies themselves, they’ll also be weaponized and modified by policymakers who will make them even worse and even more dangerous, all while pointing to Google’s “blessing” of this approach as an endorsement.

For years, Google had been somewhat unique in continuing to fight for the open internet long after many other companies were switching over to ladder pulling. There were hints that Google was going down this path in the past, but with this policy framework, the company has now made it clear that it has no intention of being a friend to the open internet any more.

Filed Under: aadc, age appropriate design code, age assurance, age estimation, age verification, duty of care, for the children
Companies: google

Court Says California’s Age Appropriate Design Code Is Unconstitutional (Just As We Warned)

from the the-1st-amendment-still-matters dept

Some good news! Federal Judge Beth Labson Freeman has recognized what some of us have been screaming about for over a year now: California’s Age Appropriate Design Code (AB 2273) is an unconstitutional mess that infringes on the 1st Amendment. We can add this to the pile of terrible moral panic “protect the children!” laws in Texas and Arkansas that have been similarly rejected (once again showing that the moral panic about the internet and children, combined with ignorance of the 1st Amendment, is neither a right nor a left issue — it’s both).

The Age Appropriate Design Code in California got almost no media attention while it was being debated or even after it passed. At times it felt like Professor Eric Goldman and I were the only ones highlighting the problems with the bill. And there are many, many problems. Including problems that both Goldman and I told the court about (and both of us were cited in the decision).

For what it’s worth, I’ve heard through the grapevine that one of the reasons why there was basically no media coverage was that many of the large tech companies are actually fine with the AADC, because they know that they already do most of what the law requires… and they also know full well that smaller companies will get slammed by the law’s requirements, so that’s kind of a bonus for the big tech companies.

As a reminder, the AADC was “sponsored” (in California outside organizations can sponsor bills) by an organization created and run by a British Baroness who is one of the loudest moral panic spreaders about “the kids on the internet.” Baroness Beeban Kidron has said that it’s her life’s mission to pass these kinds of laws around the world (she already helped get a similarly named bill passed in the UK, and is a driving force behind the dangerous Online Safety Act there as well). The other major sponsor of the AADC is… Common Sense Media, whose nonsense we just called out on another bill. Neither of them understand how the 1st Amendment works.

Thankfully, the judge DOES understand how the 1st Amendment works. As I noted a month and a half after attending the oral arguments in person, the judge really seemed to get it. And that comes through in the opinion, which grants the preliminary injunction, blocking the law from going into effect as likely unconstitutional under the 1st Amendment.

The judge notes, as was mentioned in the courtroom, that she’s “mindful” of the fact that the law was passed unanimously, but that doesn’t change the fact that it appears to violate the 1st Amendment. She says that protecting the privacy of people online is obviously a valid concern of the government, but that doesn’t mean you get to ignore the 1st Amendment in crafting a law to deal with it.

California insisted that nothing in the AADC regulated expression, only conduct. But, as the judge had called out at the hearing, it’s quite obvious that’s not true. And thus she finds that the law clearly regulates protected expression:

The State argues that the CAADCA’s regulation of “collection and use of children’s personal information” is akin to laws that courts have upheld as regulating economic activity, business practices, or other conduct without a significant expressive element. Opp’n 11– 12 (citations omitted). There are two problems with the State’s argument. First, none of the decisions cited by the State for this proposition involved laws that, like the CAADCA, restricted the collection and sharing of information. See id.; Rumsfeld v. Forum for Acad. & Inst. Rights, Inc., 547 U.S. 47, 66 (2006) (statute denying federal funding to educational institutions restricting military recruiting did not regulate “inherently expressive” conduct because expressive nature of act of preventing military recruitment necessitated explanatory speech); Roulette v. City of Seattle, 97 F.3d 300, 305 (9th Cir. 1996) (ordinance prohibiting sitting or lying on sidewalk did not regulate “forms of conduct integral to, or commonly associated with, expression”); Int’l Franchise, 803 F.3d at 397–98, 408 (minimum wage increase ordinance classifying franchisees as large employers “exhibit[ed] nothing that even the most vivid imagination might deem uniquely expressive”) (citation omitted); HomeAway.com, 918 F.3d at 680, 685 (ordinance regulating forms of short-term rentals was “plainly a housing and rental regulation” that “regulate[d] nonexpressive conduct—namely, booking transactions”); Am. Soc’y of Journalists & Authors, 15 F.4th at 961–62 (law governing classification of workers as employees or independent contractors “regulate[d] economic activity rather than speech”).

Second, in a decision evaluating a Vermont law restricting the sale, disclosure, and use of information about the prescribing practices of individual doctors—which pharmaceutical manufacturers used to better target their drug promotions to doctors—the Supreme Court held the law to be an unconstitutional regulation of speech, rather than conduct. Sorrell, 564 U.S. at 557, 562, 570–71. The Supreme Court noted that it had previously held the “creation and dissemination of information are speech within the meaning of the First Amendment,” 564 U.S. at 570 (citing Bartnicki v. Vopper, 532 U.S. 514, 527 (2001); Rubin v. Coors Brewing Co., 514 U.S. 476, 481 (1995); Dun & Bradstreet, Inc. v. Greenmoss Builders, Inc., 472 U.S. 749, 759 (1985) (plurality opinion)), and further held that even if the prescriber information at issue was a commodity, rather than speech, the law’s “content- and speaker-based restrictions on the availability and use of . . . identifying information” constituted a regulation of speech, id. at 570– 71; see also id. at 568 (“An individual’s right to speak is implicated when information he or she possesses is subject to ‘restraints on the way in which the information might be used’ or disseminated.”) (quoting Seattle Times Co. v. Rhinehart, 467 U.S. 20, 32 (1984)).

While California argued that Sorrell didn’t apply here because it was a different kind of information, the court notes that this argument makes no sense.

… the State is correct that Sorrell does not address any general right to collect data from individuals. In fact, the Supreme Court noted that the “capacity of technology to find and publish personal information . . . presents serious and unresolved issues with respect to personal privacy and the dignity it seeks to secure.” Sorrell, 564 U.S. at 579–80. But whether there is a general right to collect data is independent from the question of whether a law restricting the collection and sale of data regulates conduct or speech. Under Sorrell, the unequivocal answer to the latter question is that a law that—like the CAADCA—restricts the “availability and use” of information by some speakers but not others, and for some purposes but not others, is a regulation of protected expression.

And, thus, the court concludes that the restrictions in the AADC on collecting, selling, sharing, or retaining any personal information regulates speech (as a separate note, I’m curious what this also means for California’s privacy laws, on which the AADC is built… but we’ll leave that aside for now).

Separate from the restrictions on information collection, the AADC also has a bunch of mandates. Those also regulate speech:

The State contended at oral argument that the DPIA report requirement merely “requires businesses to consider how the product’s use design features, like nudging to keep a child engaged to extend the time the child is using the product” might harm children, and that the consideration of such features “has nothing to do with speech.” Tr. 19:14–20:5; see also id. at 23:5–6 (“[T]his is only assessing how your business models . . . might harm children.”). The Court is not persuaded by the State’s argument because “assessing how [a] business model[] . . . might harm children” facially requires a business to express its ideas and analysis about likely harm. It therefore appears to the Court that NetChoice is likely to succeed in its argument that the DPIA provisions, which require covered businesses to identify and disclose to the government potential risks to minors and to develop a timed plan to mitigate or eliminate the identified risks, regulate the distribution of speech and therefore trigger First Amendment scrutiny.

And she notes that the AADC pushes companies to create content moderation rules that favor the state’s moderation desires, which clearly is a 1st Amendment issue:

The CAADCA also requires a covered business to enforce its “published terms, policies, and community standards”—i.e., its content moderation policies. CAADCA § 31(a)(9). Although the State argues that the policy enforcement provision does not regulate speech because businesses are free to create their own policies, it appears to the Court that NetChoice’s position that the State has no right to enforce obligations that would essentially press private companies into service as government censors, thus violating the First Amendment by proxy, is better grounded in the relevant binding and persuasive precedent. See Mot. 11; Playboy Ent. Grp., 529 U.S. at 806 (finding statute requiring cable television operators providing channels with content deemed inappropriate for children to take measures to prevent children from viewing content was unconstitutional regulation of speech); NetChoice, LLC v. Att’y Gen., Fla. (“NetChoice v. Fla.”), 34 F.4th 1196, 1213 (11th Cir. 2022) (“When platforms choose to remove users or posts, deprioritize content in viewers’ feeds or search results, or sanction breaches of their community standards, they engage in First-Amendment-protected activity.”); Engdahl v. City of Kenosha, 317 F. Supp. 1133, 1135–36 (E.D. Wis. 1970) (holding ordinance restricting minors from viewing certain movies based on ratings provided by Motion Picture Association of America impermissibly regulated speech).

Then there’s the “age estimation” part of the bill. Similar to the cases in Arkansas and Texas around age verification, this court also recognizes the concerns, including that such a mandate will likely hinder adult access to content as well:

The State argues that “[r]equiring businesses to protect children’s privacy and data implicates neither protected speech nor expressive conduct,” and notes that the provisions “say[] nothing about content and do[] not require businesses to block any content for users of any age.” Opp’n 15. However, the materials before the Court indicate that the steps a business would need to take to sufficiently estimate the age of child users would likely prevent both children and adults from accessing certain content. See Amicus Curiae Br. of Prof. Eric Goldman (“Goldman Am. Br.”) 4–7 (explaining that age assurance methods create time delays and other barriers to entry that studies show cause users to navigate away from pages), ECF 34-1; Amicus Curiae Br. of New York Times Co. & Student Press Law Ctr. (“NYT Am. Br.”) 6 (stating age-based regulations would “almost certain[ly] [cause] news organizations and others [to] take steps to prevent those under the age of 18 from accessing online news content, features, or services”), ECF 56-1. The age estimation and privacy provisions thus appear likely to impede the “availability and use” of information and accordingly to regulate speech.

Again, the court admits that protecting kids is obviously a laudable goal, but you don’t do it by regulating speech. And the fact that California exempted non-profits from the law suggests the state is targeting only some speakers, a big 1st Amendment no-no.

The Court is keenly aware of the myriad harms that may befall children on the internet, and it does not seek to undermine the government’s efforts to resolve internet-based “issues with respect to personal privacy and . . . dignity.” See Sorrell, 564 U.S. at 579; Def.’s Suppl. Br. 1 (“[T]he ‘serious and unresolved issues’ raised by increased data collection capacity due to technological advances remained largely unaddressed [in Sorrell].”). However, the Court is troubled by the CAADCA’s clear targeting of certain speakers—i.e., a segment of for-profit entities, but not governmental or non-profit entities—that the Act would prevent from collecting and using the information at issue. As the Supreme Court noted in Sorrell, the State’s arguments about the broad protections engendered by a challenged law are weakened by the law’s application to a narrow set of speakers. See Sorrell, 564 U.S. at 580 (“Privacy is a concept too integral to the person and a right too essential to freedom to allow its manipulation to support just those ideas the government prefers”).

Of course, once you establish that protected speech is being regulated, that’s not the end of the discussion. There are situations in which the government is allowed to regulate speech, but only if certain levels of scrutiny are met. During the oral arguments, a decent portion of the time was spent debating whether the AADC should have to pass strict scrutiny or just intermediate scrutiny. Strict scrutiny requires both a compelling state interest in the law and that the law be narrowly tailored to achieve that interest. Intermediate scrutiny requires only an “important government objective” (a slightly lower bar than “compelling”), and rather than being “narrowly tailored,” the law need only be substantially related to achieving that objective.

While it seemed clear to me that strict scrutiny should apply, here the court went with a form of intermediate scrutiny (“commercial scrutiny”), not necessarily because the judge thinks it’s the right level, but because if the law is unconstitutional even at intermediate scrutiny, then it wouldn’t survive strict scrutiny anyway. And thankfully, the AADC doesn’t even survive the lower level of scrutiny.

The court finds (as expected) that the state has a substantial interest in protecting children, but is not at all persuaded that the AADC does anything to further that interest, basically because the law was terribly drafted. (The court leaves out that it had to be terribly drafted: the intent of the bill was to pressure websites to moderate the way the state wanted, but legislators couldn’t come out and say that, so they had to pretend it was just about “data management.”)

Accepting the State’s statement of the harm it seeks to cure, the Court concludes that the State has not met its burden to demonstrate that the DPIA provisions in fact address the identified harm. For example, the Act does not require covered businesses to assess the potential harm of product designs—which Dr. Radesky asserts cause the harm at issue—but rather of “the risks of material detriment to children that arise from the data management practices of the business.” CAADCA § 31(a)(1)(B) (emphasis added). And more importantly, although the CAADCA requires businesses to “create a timed plan to mitigate or eliminate the risk before the online service, product, or feature is accessed by children,” id. § 31(a)(2), there is no actual requirement to adhere to such a plan. See generally id. § 31(a)(1)-(4); see also Tr. 26:9–10 (“As long as you write the plan, there is no way to be in violation.”),

Basically, California tried to tap dance around the issues, knowing it couldn’t come out and say that it was trying to regulate content moderation on websites, so it claims that it’s simply regulating “data management practices,” but the harms that the state’s own expert detailed (which drive the state’s substantial interest in passing the law) are all about the content on websites. So, then, by admitting that the law doesn’t directly require moderation (which would be clearly unconstitutional, but would address the harms described), the state effectively admitted that the AADC does not actually address the stated issue.

Because the DPIA report provisions do not require businesses to assess the potential harm of the design of digital products, services, and features, and also do not require actual mitigation of any identified risks, the State has not shown that these provisions will “in fact alleviate [the identified harms] to a material degree.” Id. The Court accordingly finds that NetChoice is likely to succeed in showing that the DPIA report provisions provide “only ineffective or remote support for the government’s purpose” and do not “directly advance” the government’s substantial interest in promoting a proactive approach to the design of digital products, services, and feature. Id. (citations omitted). NetChoice is therefore likely to succeed in showing that the DPIA report requirement does not satisfy commercial speech scrutiny.

So California got way too clever in writing the AADC and trying to wink wink nod nod its way around the 1st Amendment. By not coming out and saying the law requires moderation, it’s admitting that the law doesn’t actually address the problems it claims it’s addressing.

Ditto for the “age estimation” requirement. The issue here was that California tried to tap dance around the age estimation requirement by saying it wasn’t a requirement. It’s just that if you don’t do age estimation, then you have to treat ALL users as if they’re children. Again, this attempt at being clever backfires by making it clear that the law would restrict access to content for adults:

Putting aside for the moment the issue of whether the government may shield children from such content—and the Court does not question that the content is in fact harmful—the Court here focuses on the logical conclusion that data and privacy protections intended to shield children from harmful content, if applied to adults, will also shield adults from that same content. That is, if a business chooses not to estimate age but instead to apply broad privacy and data protections to all consumers, it appears that the inevitable effect will be to impermissibly “reduce the adult population … to reading only what is fit for children.” Butler v. Michigan, 352 U.S. 380, 381, 383 (1957). And because such an effect would likely be, at the very least, a “substantially excessive” means of achieving greater data and privacy protections for children, see Hunt, 638 F.3d at 717 (citation omitted), NetChoice is likely to succeed in showing that the provision’s clause applying the same process to all users fails commercial speech scrutiny.

Similarly, regarding the requirement for higher levels of privacy protection, the court cites the NY Times’ amicus brief, basically saying that this law will make many sites restrict content only to those over 18:

NetChoice has provided evidence that uncertainties as to the nature of the compliance required by the CAADCA is likely to cause at least some covered businesses to prohibit children from accessing their services and products altogether. See, e.g., NYT Am. Br. 5–6 (asserting CAADCA requirements that covered businesses consider various potential harms to children would make it “almost certain that news organizations and others will take steps to prevent those under the age of 18 from accessing online news content, features, or services”). Although the State need not show that the Act “employs . . . the least restrictive means” of advancing the substantial interest, the Court finds it likely, based on the evidence provided by NetChoice and the lack of clarity in the provision, that the provision here would serve to chill a “substantially excessive” amount of protected speech to the extent that content providers wish to reach children but choose not to in order to avoid running afoul of the CAADCA

Again and again, for each provision in the AADC, the court finds that the law can’t survive this intermediate level of scrutiny, as each part of the law seems designed to pretend to do one thing while really intending to do another, and therefore it is clearly not well targeted (nor can it be, since accurately targeting it would only make the 1st Amendment concerns more direct).

For example, take the provision that bars a website from using the personal info of a child in a way that is “materially detrimental to the physical health, mental health, or well-being of a child.” As we pointed out while the bill was being debated, this is ridiculously broad, and could conceivably cover information that a teenager finds upsetting. But that can’t be the law. And the court notes the lack of specificity here, especially given that children at different ages will react to content very differently:

The CAADCA does not define what uses of information may be considered “materially detrimental” to a child’s well-being, and it defines a “child” as a consumer under 18 years of age. See CAADCA § 30. Although there may be some uses of personal information that are objectively detrimental to children of any age, the CAADCA appears generally to contemplate a sliding scale of potential harms to children as they age. See, e.g., Def.’s Suppl. Br. 3, 4 (describing Act’s requirements for “age-appropriate” protections). But as the Third Circuit explained, requiring covered businesses to determine what is materially harmful to an “infant, a five-year old, or a person just shy of age seventeen” is not narrowly tailored.

So, again, by trying to be clever and not detailing the levels at which something can be deemed “age appropriate,” the “age appropriate design code” fails the 1st Amendment test.

There is also an important discussion about some of the AADC requirements that would likely pressure sites to remove content that would be beneficial to “vulnerable” children:

NetChoice has provided evidence indicating that profiling and subsequent targeted content can be beneficial to minors, particularly those in vulnerable populations. For example, LGBTQ+ youth—especially those in more hostile environments who turn to the internet for community and information—may have a more difficult time finding resources regarding their personal health, gender identity, and sexual orientation. See Amicus Curiae Br. of Chamber of Progress, IP Justice, & LGBT Tech Inst. (“LGBT Tech Am. Br.”), ECF 42-1, at 12–13. Pregnant teenagers are another group of children who may benefit greatly from access to reproductive health information. Id. at 14–15. Even aside from these more vulnerable groups, the internet may provide children— like any other consumer—with information that may lead to fulfilling new interests that the consumer may not have otherwise thought to search out. The provision at issue appears likely to discard these beneficial aspects of targeted information along with harmful content such as smoking, gambling, alcohol, or extreme weight loss.

The court points out the sheer inanity of California’s defense on this point, which suggests that there’s some magical way to know how to leave available just the beneficial stuff:

The State argues that the provision is narrowly tailored to “prohibit[] profiling by default when done solely for the benefit of businesses, but allows it . . . when in the best interest of children.” Def.’s Suppl. Br. 6. But as amici point out, what is “in the best interest of children” is not an objective standard but rather a contentious topic of political debate. See LGBT Tech Am. Br. 11–14. The State further argues that children can still access any content online, such as by “actively telling a business what they want to see in a recommendations profile – e.g., nature, dance videos, LGBTQ+ supportive content, body positivity content, racial justice content, etc.” Radesky Decl. ¶ 89(b). By making this assertion, the State acknowledges that there are wanted or beneficial profile interests, but that the Act, rather than prohibiting only certain targeted information deemed harmful (which would also face First Amendment concerns), seeks to prohibit likely beneficial profiling as well. NetChoice’s evidence, which indicates that the provision would likely prevent the dissemination of a broad array of content beyond that which is targeted by the statute, defeats the State’s showing on tailoring, and the Court accordingly finds that State has not met its burden of establishing that the profiling provision directly advances the State’s interest in protecting children’s well-being. NetChoice is therefore likely to succeed in showing that the provision does not satisfy commercial speech scrutiny

This same issue comes up in the prohibition on “dark patterns,” which are not explained clearly and again run into the issue of how a site is supposed to magically know what is “materially detrimental.”

The last of the three prohibitions of CAADCA § 31(b)(7) concerns the use of dark patterns to “take any action that the business knows, or has reason to know, is materially detrimental” to a child’s well-being. The State here argues that dark patterns cause harm to children’s well-being, such as when a child recovering from an eating disorder “must both contend with dark patterns that make it difficult to unsubscribe from such content and attempt to reconfigure their data settings in the hope of preventing unsolicited content of the same nature.” Def.’s Suppl. Br. 7; see also Amicus Curiae Br. of Fairplay & Public Health Advocacy Inst. (“Fairplay Am. Br.”) 4 (noting that CAADCA “seeks to shift the paradigm for protecting children online,” including by “ensuring that children are protected from manipulative design (dark patterns), adult content, or other potentially harmful design features.”) (citation omitted), ECF 53-1. The Court is troubled by the “has reason to know” language in the Act, given the lack of objective standard regarding what content is materially detrimental to a child’s well-being. See supra, at Part III(A)(1)(a)(iv)(7). And some content that might be considered harmful to one child may be neutral at worst to another. NetChoice has provided evidence that in the face of such uncertainties about the statute’s requirements, the statute may cause covered businesses to deny children access to their platforms or content. See NYT Am. Br. 5–6. Given the other infirmities of the provision, the Court declines to wordsmith it and excise various clauses, and accordingly finds that NetChoice is likely to succeed in showing that the provision as a whole fails commercial speech scrutiny.

Given the 1st Amendment problems with the law, the court doesn’t even reach the argument that the AADC violates the Dormant Commerce Clause, saying it doesn’t need to go there, and noting that it’s a “thorny constitutional issue” currently in flux due to a very recent Supreme Court decision. And while the judge doesn’t go into much detail on the argument that existing federal laws, COPPA and Section 230, preempt California’s law, she does say she doesn’t think that argument alone would be strong enough to support a preliminary injunction: the question of preemption would depend on which policies were impacted (basically, the law might be preempted, but we can’t tell until someone tries to enforce it).

I fully expect the state to appeal, and the issue will go up to the 9th Circuit. Hopefully that court sees the problems as clearly as the judge here did.

Filed Under: 1st amendment, aadc, ab 2273, age appropriate design code, age estimation, age verification, beeban kidron, beth labson freeman, california, for the children
Companies: netchoice

Judge Blocks Unconstitutional Book Ban Law Passed By Arkansas’ Self-Proclaimed Free Speech Warriors

from the free-speech-doesn't-mean-the-gov't-is-free-to-tell-you-to-STFU dept

The self-proclaimed free speech warriors of the Republican party have spent much of the past half-decade trying to find some way to force social media platforms to carry their often-objectionable speech. That’s what these asshats and hypocrites consider to be the real “censorship:” the actions of private companies these same people have long stated should not be forced to offer their services to people they don’t like.

In other words, no one should be forced to bake a “gay” cake. But on the other hand, private companies should be forced to publish the speech of people they’d rather not do business with.

Between the social media laws, the anti-drag laws, and everything in between that best soaks up the floor spittle generated by the worst of the worst of their constituents, Republicans keep writing and passing laws that openly violate the Constitution. And they just keep losing in court every time a judge has a chance to take a look at the hate-blinded op-eds these legislators are trying to pass off as legitimate acts of government work.

Here it is again: performative shitheels being told by a federal court that their new favorite law is illegal.

Arkansas is temporarily blocked from enforcing a law that would have allowed criminal charges against librarians and booksellers for providing “harmful” materials to minors, a federal judge ruled Saturday.

U.S. District Judge Timothy L. Brooks issued a preliminary injunction against the law, which also would have created a new process to challenge library materials and request that they be relocated to areas not accessible by kids. The measure, signed by Republican Gov. Sarah Huckabee Sanders earlier this year, was set to take effect Aug. 1.

That’s from the Associated Press report on the latest injunction against the latest batch of free speech violations signed into law by state officials who should at least try to employ better lawyers to give these pieces of legislative shit a better once-over before slashing their John Hancocks across a stack of papers to the applause of onlooking mouth-breathers.

[And the Associated Press should definitely start making the effort to actually post the court orders it discusses in articles, but a public document is not a limited good that can only be referenced when discussed. If the general public has access, AP has access. And — once again — it boggles the mind that in the year 2023 there are still major news agencies that refuse to embed the documents they report on.]

THAT BEING SAID… let’s move on.

The other great thing about decisions like this one [PDF] that slap down obviously unconstitutional laws is that judges appear to be as sick of this performative bullshit as the millions of Americans who actually think rights should be respected. At the very least, those rights should not be treated as (perhaps temporary) doormats just because people who are supposed to serve the greater good, along with all their constituents, have instead decided to blow money on pantomime buffoonery for the appreciation of the most dull-witted of their voting base.

It opens by explaining what the law intends to do, as well as the decades of case law it intends to upend:

Section 1 of Act 372 makes librarians and booksellers the targets of potential criminal prosecution for “[f]urnishing a harmful item to a minor.” Plaintiffs contend that if Section 1 goes into effect, public librarians and bookstore owners will face a grim choice:

Arkansas already criminalizes providing obscenity to minors. But it has long maintained a safe harbor for librarians “acting within the scope of [their] regular employment duties” if prosecuted for disseminating material “that is claimed to be obscene.” See Ark. Code Ann. § 5-68-308(c). That immunity has not been questioned since the Arkansas Supreme Court found the exemption “reasonable on its face” nearly four decades ago.

So, in an effort meant to block a very specific subset of content some parents might find objectionable for some minors, the state legislature — including the state’s governor — decided it was OK to throw out the First Amendment along with four decades of case law supporting immunity for librarians. Fuck the librarians, said Governor Sanders and the bill’s supporters, as the court notes. Something that has never been a problem for decades is suddenly a concern worth threatening librarians with jail time over. (Emphasis in the original.)

In other words, the notion that a professional librarian might actually disseminate obscene material in the course of his or her regular employment duties was inconceivable to the state’s highest court. The statutory exemption protected librarians from meritless claims. Act 372 signals a fundamental change in how librarians are treated under the law.

A government-ordained attack on public libraries is almost inconceivable. The opinion quotes founding fathers who recognized the utmost importance of having free access to publications and works of literature. Well respected philanthropists (also quoted in the opinion) have repeatedly gone on record in support of publicly-funded libraries, which democratize the spread of information — something that’s even more important now that these entities often provide free internet access to people who can’t afford or readily access this undeniable essential of everyday life.

And yet, here we are, watching (along with an incredulous federal judge) a state decide it’s fully within its rights (rights it doesn’t actually possess) to jail librarians just because there’s a slim possibility a minor might access content these legislators have unilaterally decided (without the benefit of ruling on the disputed content itself) is de facto obscene.

It is no stretch of the imagination to foresee that these same legislators would object heavily — even up to the point of hastily erected legislation — to any reform efforts that might strip cops, prosecutors, or even legislators themselves of long-held immunities. But these same people think it’s entirely fine to do the same thing to other public employees, just because they don’t like a very small percentage of any public library’s inventory.

And there’s no need to guess what kind of content is being singled out as potentially illegal. That’s already on the record:

Plaintiff Adam Webb, Garland County Library’s Executive Director, states that his library has already received a “blanket request” to remove books from the collection due to their content and/or viewpoint, namely, “all materials with LGBTQ characters”; and he expects to see challenges to “those same books, as well as others dealing with similar themes,” made “repeatedly under Act 372.” (Doc. 22-15, ¶ 21)

Back to the court’s ongoing rejection of this reprehensible law:

The vocation of a librarian requires a commitment to freedom of speech and the celebration of diverse viewpoints unlike that found in any other profession. The librarian curates the collection of reading materials for an entire community, and in doing so, he or she reinforces the bedrock principles on which this country was founded. According to the United States Supreme Court, “Public libraries pursue the worthy missions of facilitating learning and cultural enrichment.”

[…]

The librarian’s only enemy is the censor who judges contrary opinions to be dangerous, immoral, or wrong.

The public library of the 21st century is funded and overseen by state and local governments, with the assistance of taxpayer dollars. Nonetheless, the public library is not to be mistaken for simply an arm of the state. By virtue of its mission to provide the citizenry with access to a wide array of information, viewpoints, and content, the public library is decidedly not the state’s creature; it is the people’s.

The state argues it has a “paramount interest” in preventing minors from accessing “obscene materials.” This apparently includes parents buying allegedly “obscene” materials for minors in their own home — something that definitely appears to run contrary to the rest of the law, which says any parent or person — whether or not they have an affected minor (or indeed, even reside in the state) can initiate proceedings against library employees.

Any “person affected by . . . material” in a library’s collection may “challenge the appropriateness” of that material’s inclusion in the main collection. Id. at § 5(c)(1). Material subject to challenge is not limited to sexual content. There is no definition of “appropriateness,” so any expression of ideas deemed inappropriate by the challenger is fair game. Section 5 does not require a book challenger to be a patron of the library where the challenge is made, nor does it impose a residency requirement.

This is what the new law would force librarians to do, according to assertions the court says are credible enough not only to support ongoing litigation, but to demand the court step in and block the law:

Librarians will be disinclined to risk the criminal penalty that may follow from lending or selling an older minor a book that could be considered “harmful” to a younger minor, since the new law makes no distinctions based on age and lumps “minors” into one homogenous category…

Librarians and booksellers fear exposure under Section 1 to the risk of criminal prosecution merely by allowing anyone under the age of 18 to browse the collection.

Librarians maintain that a quantity of books in their collections very likely qualify as “harmful to [younger] minors” under the law. Even if any such book is successfully identified and relocated to the “adult” section, librarians will have to closely police the browsing habits of all minors to make sure they do not stray outside the marked “children’s” or “young adult” sections of the library—a task librarians maintain is physically impossible and antithetical to the mission and purpose of public libraries.

Librarians and booksellers anticipate they will have to remove all books that could possibly be considered harmful to the youngest minors from the shelves entirely.

The librarians are right. The state is in the wrong.

Plaintiffs have established this “realistic danger.” If libraries and bookstores continue to allow individuals under the age of 18 to enter, the only way librarians and booksellers could comply with the law would be to keep minors away from any material considered obscene as to the youngest minors—in other words, any material with any amount of sexual content. This would likely impose an unnecessary and unjustified burden on any older minor’s ability to access free library books appropriate to his or her age and reading level. It is also likely that adults browsing the shelves of bookstores and libraries with their minor children would be prohibited from accessing most reading material appropriate for an adult—because the children cannot be near the same material for fear of accessing it. The breadth of this legislation and its restrictions on constitutionally protected speech are therefore unjustified.

And boom goes the injunction, as the sportscasters say. Temporary for the moment, but it’s highly unlikely there’s anything the state can say to prevent this from becoming permanent. It’s a law meant to punish librarians for content in libraries that certain members of this state’s government don’t like. And, considering they’re supposed to be the adults in the room, it’s amazing they feel so comfortable slapping on ideological blinkers and wandering around like children seeking to treat long-held rights as piñatas.

Filed Under: 1st amendment, arkansas, book ban, booksellers, censorship, for the children, free speech, harmful to minors, libraries, obscenity, sarah huckabee sanders

‘Pass It, Pass It, Pass It, Pass It, Pass It,’ The President Says About A Bill The GOP Says Will Be Useful To Silence LGBTQ Voices

from the you-sure-about-that-joe? dept

Well, this is not surprising, but unfortunate. With the Kids Online Safety Act (KOSA) to be debated in a Congressional hearing on Thursday, the White House had President Joe Biden come out and give a full-throated endorsement of the horrible, dangerous bill that will damage privacy and harm children.

We’ve got to hold — we’ve got to hold these platforms accountable for the national experiment they’re conducting on — on our children for profit.

Later this week, senators will debate legislation to protect kids’ privacy online, which I’ve been calling for for two years. It matters. Pass it, pass it, pass it, pass it, pass it.

I really mean it. Think about it. Do you ever get a chance to look at what your kids are looking at online?

But that’s not even remotely close to accurate about anything. Remember, the Republicans have been quite vocal about how they support KOSA because they know they can use it to suppress LGBTQ voices. They flat out said that they believe that “keeping trans content away from children is protecting kids.”

This is why so many people are up in arms about KOSA. It’s not about “protecting” kids’ privacy at all. It’s about giving the government more control over kids. The nature of the bill will require more data collection, not less. It will create serious 1st Amendment concerns by holding companies potentially liable if kids face harm that can be (indirectly) traced back to anything they found online.

It will create systems that will put kids who are at odds with their parents in extremely dangerous positions.

This bill is not about privacy, because it will put private data at risk.

This bill is not about kids’ safety, because it will put their safety at risk.

It is not about parental oversight, because it takes those issues out of the hands of parents.

It is not about helping kids, because it’s going to shield kids from useful information that has literally saved lives.

The Republicans seem to know all this and are embracing it for these reasons. Which leaves a big question open: why are the Democrats supporting it at all?

Filed Under: age verification, for the children, joe biden, kids online safety act, kosa, privacy

Influencers Starting To Realize How The Kids Online Safety Act (KOSA) Will Do Real Damage

from the speak-up-now dept

We’ve talked a lot about just how bad the Kids Online Safety Act (KOSA) is. Yet some people (including people who, frankly, should know better) keep trying to tell me how well meaning it is. It’s not. It’s dangerous. But it has real momentum. A massive bipartisan group of Senators are co-sponsors of the bill.

And, no matter how many times we explain that KOSA (in the name of “protecting the children”) will put kids at risk, politicians still want to pretend it’s fine. Hell, the Heritage Foundation even flat out admitted that they planned to use KOSA to censor LGBTQ+ content, in an attempt to bar children from such content. It remains incredible to me that any Democrat could support a bill when Republicans admit up front how they plan to abuse it.

But, of course, because it’s called the “Kids Online Safety Act” and you have brands like Dove (yeah, I don’t get it either) running a whole campaign in support of it, even convincing Lizzo that the bill is good, it feels like the anti-KOSA voices have been muted.

Hopefully that’s changing. A friend pointed me to a TikTok influencer, pearlmania500 (aka Alex Pearlman), with about two million followers, who has posted a fun little anti-KOSA rant, pointing out just how dangerous KOSA is.

A quick transcript of his righteous rant:

40 Senators have sponsored a bill to make sure you have to upload your driver’s license before you can use your First Amendment on the internet. That’s what they want. That’s what this bill is.

This bill is designed to make sure that they have your home address before you can actually post about ANYTHING on the internet.

The bill is Senate bill 1409. 40 Senators have sponsored it. Republicans and Democrats. This isn’t a left or right issue. This is a speech issue. And they call the bill “the Kids Online Safety Act.” Or KOSA for short.

But in reality, this is some garbage to make sure they know where you live when you post. This bill, they claim is to protect kids from restricted material on the internet, but what it’s gonna do is restrict the internet for everybody and then make you prove you’re over 18 before you can look at anything.

So maybe there’s some history of America that’s a little bit dicey, right? Maybe you want to learn a little more about the Second Amendment. Maybe there’s some controversial stuff out there. Well this bill, allows every Attorney General in every state of the union to make sure that they can sue internet companies. Like TikTok. Like Facebook. Like Twitter. That they can sue them if they aren’t making sure that everybody who looks at controversial topics are over 18.

So, all these companies are going to have to keep a database of all their users, to prove that everyone looking at all these controversial topics are over 18. So how are they gonna do that?!? They’re gonna collect your driver’s license. The bill doesn’t tell them. But the bill does make it very clear that all of these companies are going to have to prove that their users are over 18. The only way to do that is to have you upload information that proves you’re an adult before you’re allowed to touch the rest of the web.

So, if you want to make sure you have unrestricted internet… I’m not just talking about… listen, I’m not even talking about the spicy stuff… I’m talking about if you wanted to go find out gun safety information. If you wanted to find out medication information. If you wanted to find out some history that they don’t want to teach you in schools, this bill will allow Attorney Generals to sue all of these companies under the guise of “protecting kids.”

Senate bill 1409.

And it is bipartisan. My Senator. Senator Bob Casey. A Democrat. Is cosponsoring this bill. With Lindsey Graham. And Marsha Blackburn. And Dick Durbin. It’s all over the place people. They’re ALL OUT TO MAKE SURE that they can trace YOUR SHITTY POSTS on Twitter and on Facebook, back down to your home address.

And they’re doing it claiming they’re protecting the kids.

So call your Senators. Does not matter where you live. Call your fucking Senator and say “I do not want Senate bill 1409 to ever be touched or passed.”

Now, you might say that this is just one random, if well followed, TikTok influencer, but he appears to have some big fans in government. In recent months he was invited to the White House and to an event put on by Pennsylvania’s governor. He’s definitely involved in politics, and it’s good to see him speaking up about this terrible bill.

I’m still perplexed at how much support it has, but if the internet starts speaking up about how dangerous this is, maybe that will finally kill its momentum.

Filed Under: age verification, alex pearlman, for the children, kosa

Leaked Document Shows Spain Is Fully On Board With The EU Commission’s Plan To Criminalize Encryption

from the if-it-ain't-broke,-let's-break-it dept

For a few years now, the EU Commission has been pushing legislation that would undermine, if not actually criminalize, end-to-end encryption. It’s “for the children,” as they say. To prevent the distribution of CSAM (child sexual abuse material), the EU wants to mandate client-side scanning by tech companies — a move that would necessitate the removal of one end of the end-to-end encryption these companies offer to their users.
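To make the security objection concrete, here is a minimal sketch of what a client-side scanning mandate means in practice. Everything in it is illustrative: the hash list, the function names, and the use of SHA-256 are assumptions made for the sketch (real proposals, such as Apple's abandoned NeuralHash system, rely on perceptual hashing of images, not exact cryptographic hashes). The point is only structural: the match has to run on the plaintext, on the user's device, before encryption ever happens.

```python
import hashlib

# Hypothetical list of hashes of known prohibited content.
# SHA-256 is used purely for illustration; deployed proposals use
# perceptual hashes so that near-duplicate images still match.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"known-bad-sample").hexdigest(),
}

def client_side_scan(plaintext: bytes) -> bool:
    """Check the plaintext against the hash list before encryption.

    This is the crux of the objection from security experts: the
    check necessarily runs on unencrypted content, on behalf of a
    third party, which removes one end of end-to-end encryption.
    """
    return hashlib.sha256(plaintext).hexdigest() in KNOWN_BAD_HASHES

def send(plaintext: bytes, encrypt, report):
    """Report matching content instead of encrypting and sending it."""
    if client_side_scan(plaintext):
        report(plaintext)  # the content escapes the E2EE envelope
        return None
    return encrypt(plaintext)
```

Whatever matching scheme is mandated, the architecture is the same: a scanner sits between the user and the encryption step, which is why critics describe such mandates as breaking, rather than merely regulating, end-to-end encryption.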

The proposal has received pushback, mainly from security experts who have repeatedly pointed out how this would make everyone’s communications less secure, not just those of the criminals the EU wants to target. It has also received pushback from the companies offering encrypted communications, all of which have informed the EU they will take their business elsewhere rather than break their encryption.

The most significant pushback (at least as far as the EU’s governing body is concerned) has come from one EU member: Germany. Germany’s government flat out told the EU that it would not be enforcing this law mandating broken encryption, if and when it goes into force.

But that’s just Germany. Most EU nations seem fine with breaking encryption for everyone, just to target a very small percentage of the population. A document [PDF] leaked to Wired shows widespread support for the proposed mandate, with one country in particular suggesting the encryption-criminalizing proposal doesn’t go far enough.

Of the 20 EU countries represented in the document leaked to WIRED, the majority said they are in favor of some form of scanning of encrypted messages, with Spain’s position emerging as the most extreme. “Ideally, in our view, it would be desirable to legislatively prevent EU-based service providers from implementing end-to-end encryption,” Spanish representatives said in the document.

[…]

“It is shocking to me to see Spain state outright that there should be legislation prohibiting EU-based service providers from implementing end-to-end encryption,” says Riana Pfefferkorn, a research scholar at Stanford University’s Internet Observatory in California who reviewed the document at WIRED’s request. “This document has many of the hallmarks of the eternal debate over encryption.”

The document dates back to April of this year. The 20 countries offering at least partial support for undermining encryption were unwilling to explain to Wired why they felt this way. Only one country supplied a comment, and that comment — along with its responses in the leaked document — suggests it, too, may at some point provide significant pushback of its own.

WIRED asked all 20 member states whose views are included in the document for comment. None denied its veracity, and Estonia confirmed that its position was compiled by experts working within related fields and at various ministries.

Estonia’s responses to the EU’s questions make it clear it thinks the proposal is, at best, half-baked. This answer in particular shows Estonia’s government calling out the EU for creating a proposal that mandates companies break other existing EU data privacy laws:

[EU]: Are you in favour of including audio communications in the scope of the CSA proposal, or would you rather exclude it as in Regulation (EU) 2021/1232?

We are a bit reserved and concerned with the potential inclusion of “audio communication”. For us the question is about what communication are we discussing – FB voice messages or direct special services or applications offering only voice communication service, including encrypted ones? Secondly the initial proposal and assessment (Interinstitutional File: 2022101 55(COD) ) focused mainly on visual material and sites and web links – indeed, this is the most pressing issue here. Audio communication was not included in that with a big attention scope.

This does not mean that Estonia doesn’t think grooming etc. criminal activities are not important. They are and we support any action fighting against this issue! We also want to remind, that EUCJ has forbidden the state regulation retention obligation of metadata by service providers. Now, we create a regulation which forces service providers to carry out mass interception of content data, which, as we want to emphasise, was the counter-argument regarding the metadata retention in the court. This is something we don’t want to do in Europe. This may also create more friction with the EU Parliament.

More directly, the Estonian Ministry of Economic Affairs and Communications says this:

Estonia does not support the possibility of creating backdoors for end-to-end encryption solutions.

That’s what happens when you actually talk to “experts working within related fields,” rather than just legislators who believe any sacrifice “for the children” is acceptable, as long as they are not expected to sacrifice anything themselves.

But the rest of the document is a mixed bag, with more countries showing support for some sort of direct regulation of E2EE. This is disappointing, but it’s to be expected when loaded language is used to create the proposal and held over the heads of EU member countries — language that suggests that if they’re for protecting encryption, they’re also for the continued sexual exploitation of children. That’s the kind of peer pressure that’s difficult to shrug off. But even if some countries (looking at you, Spain) are just looking for excuses to start breaking encryption, others are publicly demonstrating they won’t be shamed into passing a bad law that makes millions of residents’ communications less secure.

Filed Under: csam, encryption, end-to-end encryption, estonia, eu, eu commission, for the children, germany, protect the children, spain

Arkansas: No Need To Age Verify Kids Working In Meat Processing Plants, But We Must Age Verify Kids Online

from the priorities,-people dept

As we’ve been covering, there are a slew of laws across the country (and around the globe!) looking to require websites to “age verify” their visitors. And it seems to be something that has support from across the political spectrum, as “protect the children” moral panics know no political boundaries.

Just recently, Utah passed its age verification (and more) anti-social media bills (which the governor is expected to sign shortly). Ohio has a plan in the works as well. And, of course, here in California, such a bill was signed into law last year, though it is now being challenged in court. There are many states working on similar bills as well. Indeed, at this point, it’s more likely than not that a state is exploring such a bill, even as it seems likely to be unconstitutional.

Arkansas is one such state. SB66 is a bill “to create the protection of minors from distribution of harmful material” and “to establish liability for the publication or distribution of material harmful to minors on the internet” and “to require reasonable age verification.” In other words, the same old unconstitutional garbage that (1) has already been rejected by the Supreme Court and (2) is pre-empted under Section 230.

While the whole law is garbage, let’s just focus in on the age verification part. It would require that any commercial entity “shall use a reasonable age verification method before allowing access to a website that contains a substantial portion of material that is harmful to minors.” The bill has a longer definition of what “material harmful to minors” would be, and it includes “nipple of the female breast.” Also the “touching, caressing, or fondling of nipples, breasts, buttocks, the anus, or genitals.” Hmm.

Anyway.

In other news… in the very same state of Arkansas, Governor Sarah Huckabee Sanders has signed into law a different bill, HB1410, which revised Arkansas labor laws to remove age verification for those under 16. The governor claimed in a statement that the law “was an arbitrary burden on parents to get permission from the government for their child to get a job.”

This comes less than a month after meatpacking company Packers Sanitation Services, which has operations in Arkansas, was fined $1.5 million for illegally employing “at least 102 children to clean 13 meatpacking plants on overnight shifts,” some of whom were… in Arkansas. That company was found to employ kids as young as 13, who had their skin “burned and blistered.”

So, you know, seems like a good time to roll back the laws that try to make sure companies aren’t doing that sort of thing in Arkansas.

But, equally, seems like an odd time to focus on making sure those very same kids, who will no longer have to verify their ages to work such jobs… will have to verify their age to check out any website where they might encounter a female nipple.

Too young to see a nipple, but never too young to be put to labor cleaning a meatpacking plant where you can have your own skin burned and blistered.

The Arkansas way.

Somewhat incredibly, the bills share two cosponsors. Representative Wayne Long and Senator Joshua Bryant want to make sure it’s more difficult for kids to use the internet, but easier to have those kids work dangerous jobs in meatpacking plants.

Seems healthy.

Filed Under: age verification, arkansas, child labor, for the children, harmful content, internet, joshua bryant, nipples, protect the children, sarah huckabee sanders, wayne long