protect the children – Techdirt

Hillary Clinton’s Continued Confusion About Section 230 Highlights Need For Basic Tech Literacy Among Politicians

from the everything-you-said-is-exactly-backwards dept

Hillary Clinton has no clue how Section 230 works. She seems to think that repealing it will make websites more likely to remove misinformation (which is backwards). But what law do we need repealed to stop Clinton from spreading misinformation about Section 230 (and social media)?

In April, we wrote about some comments from Hillary Clinton regarding Section 230, showing that she was incredibly confused about what it does and how it works, to the point of actively spreading misinformation about the law. It appears that since then, either no one has told her she’s wrong or (worse) they have and she refuses to accept it.

She recently appeared on Michael Smerconish’s show to talk about her book and continued to push misinformation about Section 230. Indeed, this time it was even worse and more wrong than earlier this year. Smerconish was equally confused. I’d never heard of the guy until now, but someone should get him to stop pushing utter misinformation as well. If you want to watch their interaction, it’s here:

Clinton kicks off this part of the discussion by saying that she believes kids are addicted to smartphones and social media. This is not what the evidence shows, but who needs evidence when you have feelings? Yes, there are a few high profile politicians and pundits who claim this is true, but the actual evidence is much more complex and nuanced, showing that many kids (especially LGBTQ kids) get real benefit from the ability to connect in this way, and only a very small percentage struggle with it (which often appears to be a sign of trouble elsewhere in their lives).

Smerconish responds by thanking Clinton for citing Jonathan Haidt as an expert, which is laughable because he’s not. Basically all of the actual experts and all of the actual science in the space disagree with Haidt and say he is misrepresenting the evidence regarding social media and kids. Even more ridiculously, Smerconish claims “I’m shocked that no person — no Republican, no Democrat — is championing this issue,” referring to kids and social media.

Has Smerconish been living in a cave? For like two years now, there’s been an ongoing baseless moral panic on this very issue (driven by non-experts like Haidt). KOSA (a terrible bill that will lead to real harm for LGBTQ youth in particular, but which is pitched as a “save the kids online” bill) passed the Senate with only three no votes and over 70 co-sponsors.

The idea that no one is championing this issue is so out of touch with reality that it makes me realize that whoever Smerconish is, he doesn’t seem to know what’s going on. So why should anyone pay attention to what he has to say?

Smerconish then does the “out of touch old man” routine, claiming that we need to get kids to be more social like their parents and grandparents. Dude. Many kids do socialize today, and they do some of it with their phones. Yes, that’s different from when you were a kid, but that doesn’t necessarily make it worse.

There truly is nothing worse than two rich, out-of-touch people insisting that “the kids these days” are doing stuff wrong because it’s different from back in the day. How obnoxious. And wrong.

But, on to Clinton’s comments. In response to this, she pushed for getting rid of Section 230 again, but it appears that she completely misunderstands Section 230 (to the point of literally getting it backwards):

We need national action and sadly, our Congress has been dysfunctional when it comes to addressing these threats to our children. So you’re absolutely right. This should be at the top of every legislative, political agenda. There should be a lot of things done. We should be, in my view, repealing something called section 230, which gave platforms on the internet immunity because they were thought to be just pass-throughs, that they shouldn’t be judged for the content that is posted. But we now know that that was an overly simple view, that if the platforms, whether it’s Facebook or Twitter X or Instagram or TikTok, whatever they are, if they don’t moderate and monitor the content we lose total control and it’s not just the social and psychological effects it’s real harm, it’s child porn and threats of violence, things that are terribly dangerous.

So, I couldn’t agree with you more. We need to remove the immunity for liability and we need to have guardrails. We need regulation. We’ve conducted this big experiment on ourselves and particularly our kids, and I think the evidence is in that we’ve got to do more….

Okay, so first, she has Section 230 literally backwards. Somehow, she seems to have internalized Ted Cruz’s lying about Section 230 and assumed it’s accurate.

Section 230 was not passed “because [websites] were thought to be just pass-throughs.” It’s literally the opposite of that. The law was written to encourage moderation by removing liability for websites’ moderation choices. Surely Clinton knows Senator Ron Wyden, who co-wrote the law and could tell her she’s wrong?

There is still liability for content online. It’s just on whoever created the content.

If you repeal Section 230, you get less moderation, not more, because you are now creating liability for moderation choices. If you create more liability for something, you get less of it. Where Clinton is woefully confused is that she seems to think repealing Section 230 would create liability for misinformation. It would not. The First Amendment protects that. It would only create liability for moderation, meaning you’d get less of that and more misinformation.

The First Amendment (which, surely, Clinton is familiar with?) requires there to be actual knowledge of violative content for a distributor to be liable. All repealing Section 230 would do is encourage websites to look the other way to avoid liability.

Clinton is literally misinforming the public, getting the law exactly backwards, and demanding a regulatory move that would do exactly the opposite of what she claims it would, all based on a misunderstanding of misrepresented research.

It’s like an entire seven-layer cake of misinformation.

So, please, if anyone reading this has any ability to talk to Hillary Clinton, please, please, please get her to talk to some actual experts, whether on Section 230 or on the very nuanced and complex issues regarding social media and mental health. She seems to have fallen down a rabbit hole of moral panic misinformation, combined with nonsense GOP talking points, and is pushing the worst solution possible. Her current stance will do real harm to children.

Separately, I’ll just note that Clinton’s confusion is also (stupidly) leading to even more confusion and misleading reporting. Fox News took Clinton’s comments and published an article misleadingly suggesting she wants to force websites to moderate political content “or we lose control.” In the CNN interview, she was clearly talking only about child safety issues, on which the MAGA world seems aligned with her. They also want to “repeal” Section 230 and are pushing KOSA because they think it’s necessary to “stop the transgender.”

So, maybe, just maybe, Clinton would be better served by getting a clue on how all of this works, so she stops pushing nonsense that actually supports the MAGA world’s position on LGBTQ content and spewing utter misinformation?

Filed Under: 1st amendment, addiction, free speech, hillary clinton, intermediary liability, jonathan haidt, kosa, michael smerconish, protect the children, section 230, social media

KOSA Advances Out Of House Committee, But Cracks Are Showing

from the it's-a-bad-bill,-stop-it dept

This morning, the House Energy and Commerce Committee held a pretty long markup about KOSA, COPPA 2.0, and other bills. The quick summary is that both of those bills passed out of committee and could be taken to the House floor this session.

The longer version, though, is that cracks in the coalition pushing these bills are showing. It’s not clear that there’s a comprehensive vision that gets KOSA over the finish line, and that’s good for protecting kids, protecting privacy, and protecting speech. Because all of these versions of KOSA are an attack on all three of those things (while pretending not to be).

As we’ve described, the new versions discussed today are different from the version that passed the Senate earlier this year. The House leadership doesn’t much like the Senate version, and the new versions don’t seem likely to fix that. Meanwhile, any changes made to shore up the support of House leadership seem likely to lose plenty of Democrats.

And while backers of the bills complained that they were voting on a “weakened version,” they also admitted that there were concerns about “unintended consequences” creeping into the bill. This statement from Rep. Kathy Castor, one of the key backers of the bill, is the sound of someone who knows they have a shitty bill on their hands, but wants to pass it anyway:

Rep. Kathy Castor (D-Fla.), the Democratic co-lead on Bilirakis’s House version, acknowledged the version is a “weakened version” from what passed in the Senate, but urged her colleagues to advance the bill with hopes the language will be changed before going to the full House.

“We can’t allow unintended consequences to creep in, because there were politics played with KOSA here at the eleventh hour,” she said. “I think it’s important today to move it forward with the promise and acknowledgment that we…I don’t know that I could support this version if it comes to the House floor in this manner, but I trust Chair [Cathy] McMorris Rodgers [R-Wash.] and her leadership.”

Throughout the hearing, certain concerns were raised about the bills. It sounds as though many offices, both Republican and Democratic, are concerned about how the bills would give the opposing party tremendous leeway to pressure internet companies into taking down speech it dislikes.

Thus, Democrats are realizing that KOSA is a bill targeting LGBTQ and abortion info, whereas some Republicans are now calling out how it could be used to pressure companies to remove pro-life content and/or religious content. With folks on both ends realizing that at its heart, KOSA is a censorship bill and will cause problems when “the other side” is in power, hopefully the bill won’t have enough momentum to keep going.

It’s almost amusing to see the opposing sides highlighting how their opposites would abuse the bill. The left-leaning Chamber of Progress is calling out how the Heritage Foundation would use KOSA to censor abortion info:

I write to convey my concern that the MAGA think tank Heritage Foundation - sponsor of the extreme Project 2025 agenda for Donald Trump's second term - is promoting the Kids Online Safety Act (KOSA) as a means of further imperiling reproductive rights. The Heritage Foundation is circulating the attached document to congressional Republicans in support of KOSA, addressing "Responses to Concerns, Myth v. Fact, and Proposed Changes."

Meanwhile, some House Republicans are warning their colleagues of the reverse happening:

Preventing Pro-Life Groups from Maintaining Records Necessary to Provide Ongoing Support: KOSA's data minimization requirements could be used to argue that pro-life groups are collecting or retaining more personal information than necessary, making them vulnerable to lawsuits (Section 104).

Denying Ability to Use Data to Help Women Seeking Crisis Center Help: The individual control provisions could be used to demand that pro-life groups delete or refrain from using personal information of women who have sought their assistance, even if that information is crucial for providing ongoing support and resources (Section 104).

The FTC, under a Democratic administration, could prioritize enforcement actions against pro-life groups, alleging violations of KOSA's requirements related to data minimization, transparency, or individual control over personal data. This selective enforcement could place a significant burden on these organizations, even if they are acting in good faith (Section 110).

Democratic administrations can leverage KOSA's "data broker registration requirements" to collect information about pro-life groups that engage in data-related activities, using this information to target these organizations for additional scrutiny or enforcement actions (Section 106).

Democratic administration will fill the Kids Online Safety Council with pro-abortion "civil society" and bureaucratic activists to decide what content is and is not dangerous to individuals (Section 111).

If both parties are worrying about how the other side might use KOSA to censor content, perhaps everyone can meet in the middle and admit that this is an unconstitutional, First Amendment-ignoring censorship bill, and dump the whole thing in the trash?

Filed Under: 1st amendment, censorship, coppa 2.0, democrats, free speech, kathy castor, kosa, privacy, protect the children, republicans

Utah’s ‘Protect The Kids Online!’ Law Rejected By Court

from the utah-does-it-again dept

Over the last few years, politicians in Utah have been itching to pass terrible internet legislation. Some of you may forget that in the earlier part of the century, Utah became somewhat famous for passing absolutely terrible internet laws that the courts then had to clean up. In the last few years, it’s felt like other states have surpassed Utah, and maybe its lawmakers were getting a bit jealous about losing their “we pass batshit crazy unconstitutional internet laws” crown.

So, two years ago, they started pushing a new round of such laws. Even as he was slamming social media as dangerous and evil, Utah Governor Spencer Cox proudly signed the new law, streaming the signing on all the social media sites he insisted were dangerous. When Utah was sued by NetChoice, the state realized that the original law was going to get laughed out of court, so it asked for a do-over, promising to repeal and replace the law with something better. The new law changed basically nothing, though, and an updated lawsuit (again by NetChoice) was filed.

The law required social media companies to engage in “age assurance” (which is just a friendlier name for age verification, but still a privacy nightmare) and then restrict access to certain types of content and features for “minor accounts.”

Cox also somewhat famously got into a fight on ExTwitter with First Amendment lawyer Ari Cohn. When Cohn pointed out that the law clearly violates the First Amendment, Cox insisted: “Can’t wait to fight this lawsuit. You are wrong and I’m excited to prove it.” When Cohn continued to point out the law’s flaws, Cox responded “See you in court.”

[Screenshot: the Twitter exchange between Cohn and Cox described above, ending with Cox’s “See you in court.”]

In case you’re wondering how the lawsuit is going, last night Ari got to post an update:

[Screenshot: Ari Cohn quote-tweeting Cox’s “See you in court” tweet, saying “ope,” alongside the conclusion of the court’s order enjoining the law as unconstitutional.]

The law is enjoined. The court found it to likely be unconstitutional, just as Ari and plenty of other First Amendment experts expected. This case has been a bit of a roller coaster, though. A month and a half ago, the court said that Section 230 preemption did not apply to the case. The analysis on that made no sense. As we just saw, a court in Texas threw out a very similar law and said that since it tried to limit how sites could moderate content, it was preempted by Section 230. But, for a bunch of dumb reasons, the judge here, Robert Shelby, argued that the law wasn’t actually trying to impact content moderation (even though it clearly was).

But, that was only part of the case. The latest ruling found that the law almost certainly violates the First Amendment anyway:

NetChoice’s argument is persuasive. As a preliminary matter, there is no dispute the Act implicates social media companies’ First Amendment rights. The speech at issue in this case— the speech social media companies engage in when they make decisions about how to construct and operate their platforms—is protected speech. The Supreme Court has long held that “[a]n entity ‘exercis[ing] editorial discretion in the selection and presentation’ of content is ‘engage[d] in speech activity’” protected by the First Amendment. And this July, in Moody v. NetChoice, LLC, the Court affirmed these First Amendment principles “do not go on leave when social media are involved.” Indeed, the Court reasoned that in “making millions of . . . decisions each day” about “what third-party speech to display and how to display it,” social media companies “produce their own distinctive compilations of expression.”

Furthermore, following on the Supreme Court’s ruling earlier this year in Moody about whether or not the entire law can be struck down on a “facial” challenge, the court says “yes” (this issue has recently limited similar rulings in Texas and California):

NetChoice has shown it is substantially likely to succeed on its claim the Act has “no constitutionally permissible application” because it imposes content-based restrictions on social media companies’ speech, such restrictions require Defendants to show the Act satisfies strict scrutiny, and Defendants have failed to do so.

Utah tries to argue that this law is not about speech and content, but rather about conduct and “structure,” as California did in challenges to its “kids code” law. The court is not buying it:

Defendants respond that the Definition contemplates a social media service’s “structure, not subject matter.” However, Defendants’ argument emphasizes the elements of the Central Coverage Definition that relate to “registering accounts, connecting accounts, [and] displaying user-generated content” while ignoring the “interact socially” requirement. And unlike the premises-based distinction at issue in City of Austin, the social interaction-based distinction does not appear designed to inform the application of otherwise content-neutral restrictions. It is a distinction that singles out social media companies based on the “social” subject matter “of the material [they] disseminate[].” Or as Defendants put it, companies offering services “where interactive, immersive, social interaction is the whole point.”

The court notes that Utah seems to misunderstand the issue, and finds the idea that this law is content neutral to be laughable:

Defendants also respond that the Central Coverage Definition is content neutral because it does not prevent “minor account holders and other users they connect with [from] discuss[ing] any topic they wish.” But in this respect, Defendants appear to misunderstand the essential nature of NetChoice’s position. The foundation of NetChoice’s First Amendment challenge is not that the Central Coverage Definition restricts minor social media users’ ability to, for example, share political opinions. Rather, the focus of NetChoice’s challenge is that the Central Coverage Definition restricts social media companies’ abilities to collage user-generated speech into their “own distinctive compilation[s] of expression.”

Moreover, because NetChoice has shown the Central Coverage Definition facially distinguishes between “social” speech and other forms of speech, it is substantially likely the Definition is content based and the court need not consider whether NetChoice has “point[ed] to any message with which the State has expressed disagreement through enactment of the Act.”

Given all that, strict scrutiny applies, and there’s no way this law passes strict scrutiny. The first prong of the test is whether there’s a compelling state interest in passing such a law. And even though it’s about the moral panic of kids on the internet, the court says there’s a higher bar here. Because we’ve done this before, with California trying to regulate video games, which the Supreme Court struck down thirteen years ago:

To satisfy this exacting standard, Defendants must “specifically identify an ‘actual problem’ in need of solving.” In Brown v. Entertainment Merchants Association, for example, the Supreme Court held California failed to demonstrate a compelling government interest in protecting minors from violent video games because it lacked evidence showing a causal “connection between exposure to violent video games and harmful effects on children.” Reviewing psychological studies California cited in defense of its position, the Court reasoned research “show[ed] at best some correlation between exposure to violent entertainment” and “real-world effects.” This “ambiguous proof” did not establish violent videogames were such a problem that it was appropriate for California to infringe on its citizens’ First Amendment rights. Likewise, the Court rejected the notion that California had a compelling interest in “aiding parental authority.” The Court reasoned the state’s assertion ran contrary to the “rule that ‘only in relatively narrow and well-defined circumstances may government bar public dissemination of protected materials to [minors].’”

While there’s lots of screaming and yelling about how social media is bad for kids’ mental health, as we directly told Governor Cox, the evidence just doesn’t support the claim. The court seems to recognize that the claims are a lot of hot air as well. Indeed, Utah submitted the Surgeon General’s report as “proof,” which the state apparently didn’t even read. As we noted, contrary to the media reporting on that report, it contained a very nuanced analysis that does not show any causal harms to kids from social media.

The judge absolutely noticed that.

First, though the court is sensitive to the mental health challenges many young people face, Defendants have not provided evidence establishing a clear, causal relationship between minors’ social media use and negative mental health impacts. It may very well be the case, as Defendants allege, that social media use is associated with serious mental health concerns including depression, anxiety, eating disorders, poor sleep, online harassment, low self-esteem, feelings of exclusion, and attention issues. But the record before the court contains only one report to that effect, and that report—a 2023 United States Surgeon General Advisory titled Social Media and Youth Mental Health—offers a much more nuanced view of the link between social media use and negative mental health impacts than that advanced by Defendants. For example, the Advisory affirms there are “ample indicators that social media can . . . have a profound risk of harm to the mental health and well-being of children and adolescents,” while emphasizing “robust independent safety analyses of the impact of social media on youth have not yet been conducted.” Likewise, the Advisory observes there is “broad agreement among the scientific community that social media has the potential to both benefit and harm children and adolescents,” depending on “their individual strengths and vulnerabilities, and . . . cultural, historical, and socio-economic factors.” The Advisory suggests social media can benefit minors by “providing positive community and connection with others who share identities, abilities, and interest,” “provid[ing] access to important information and creat[ing] a space for self-expression,” “promoting help-seeking behaviors[,] and serving as a gateway to initiating mental health care.”

The court is also not at all impressed by a declaration Utah provided by Jean Twenge, who is Jonathan Haidt’s partner-in-crime in pushing the baseless moral panic narrative about kids and social media.

Moreover, a review of Dr. Twenge’s Declaration suggests the majority of the reports she cites show only a correlative relationship between social media use and negative mental health impacts. Insofar as those reports support a causal relationship, Dr. Twenge’s Declaration suggests the nature of that relationship is limited to certain populations, such as teen girls, or certain mental health concerns, such as body image.

Then the court points out (thank you!) that kids have First Amendment rights too:

Second, Defendants’ position that the Act serves to protect uninformed minors from the “risks involved in providing personal information to social media companies and other users” ignores the basic First Amendment principle that “minors are entitled to a significant measure of First Amendment Protection.” The personal information a minor might choose to share on a social media service—the content they generate—is fundamentally their speech. And the Defendants may not justify an intrusion on the First Amendment rights of NetChoice’s members with, what amounts to, an intrusion on the constitutional rights of its members’ users…

Furthermore, Utah fails to meet the second prong of strict scrutiny, that the law be “narrowly tailored.” Because it’s not:

To begin, Defendants have not shown the Act is the least restrictive option for the State to accomplish its goals because they have not shown existing parental controls are an inadequate alternative to the Act. While Defendants present evidence suggesting parental controls are not in widespread use, their evidence does not establish parental tools are deficient. It only demonstrates parents are unaware of parental controls, do not know how to use parental controls, or simply do not care to use parental controls. Moreover, Defendants do not indicate the State has tried, or even considered, promoting “the diverse supervisory technologies that are widely available” as an alternative to the Act. The court is not unaware of young people’s technological prowess and potential to circumvent parental controls. But parents “control[] whether their minor children have access to Internet-connected devices in the first place,” and Defendants have not shown minors are so capable of evading parental controls that they are an insufficient alternative to the State infringing on protected speech.

Also, this:

Defendants do not offer any evidence that requiring social media companies to compel minors to push “play,” hit “next,” and log in for updates will meaningfully reduce the amount of time they spend on social media platforms. Nor do Defendants offer any evidence that these specific measures will alter the status quo to such an extent that mental health outcomes will improve and personal privacy risks will decrease

The court also points out that the law targets only social media, not streaming or sports apps; if social media were truly harmful, the law would have to target all of those other apps as well. Utah tried to claim that social media is somehow special and different from those other apps, but the judge notes that the state provides no actual evidence to support this claim.

But Defendants simply do not offer any evidence to support this distinction, and they only compare social media services to “entertainment services.” They do not account for the wider universe of platforms that utilize the features they take issue with, such as news sites and search engines. Accordingly, the Act’s regulatory scope “raises serious doubts” about whether the Act actually advances the State’s purported interests.

The court also calls out that NetChoice member Dreamwidth, run by the trust & safety expert known best online as @rahaeli, proves how stupid and mistargeted this law is:

Finally, Defendants have not shown the Act is not seriously overinclusive, restricting more constitutionally protected speech than necessary to achieve the State’s goals. Specifically, Defendants have not identified why the Act’s scope is not constrained to social media platforms with significant populations of minor users, or social media platforms that use the addictive features fundamental to Defendants’ well-being and privacy concerns. NetChoice member Dreamwidth, “an open source social networking, content management, and personal publishing website,” provides a useful illustration of this disconnect. Although Dreamwidth fits the Central Coverage Definition’s concept of a “social media service,” Dreamwidth is distinguishable in form and purpose from the likes of traditional social media platforms—say, Facebook and X. Additionally, Dreamwidth does not actively promote its service to minors and does not use features such as seamless pagination and push notification.

The court then also notes that if the law went into effect, companies would face irreparable injury, given the potential fines in the law.

This harm is particularly concerning given the high cost of violating the Act—$2,500 per offense—and the State’s failure to promulgate administrative rules enabling social media companies to avail themselves of the Act’s safe harbor provision before it takes effect on October 1, 2024.

Some users also sued to block the law, but the court rejected that request: there is no clear redressable injury for those plaintiffs yet, and thus they lack standing to sue at this point. That could have changed once the law started being enforced, but thanks to the injunction in the NetChoice case, the law is not going into effect.

Utah will undoubtedly waste more taxpayer money and appeal the case. But, so far, these laws keep failing in court across the country. And that’s great to see. Kids have First Amendment rights too, and one day, our lawmakers should start to recognize that fact.

Filed Under: 1st amendment, age assurance, age verification, content moderation, kids, protect the children, robert shelby, social media, utah
Companies: netchoice

David Boies’ Baseless Lawsuit Blames Meta Because Kids Like Instagram Too Much

from the that's-not-how-any-of-this-works dept

In today’s episode of ‘Won’t Someone Think of the Children?!’, celebrity attorney David Boies is leading a baseless charge against Meta, claiming Instagram is inherently harmful to kids. Spoiler alert: it’s not. This one was filed last month and covered in the Washington Post, though without a link to the complaint, because the Washington Post hates you. In exchange, I won’t link to the Washington Post story, but directly to the complaint.

The fact that David Boies is involved should already ring alarm bells. Remember, Boies and his firm, Boies Schiller Flexner, have been involved in messes like running a surveillance operation against Harvey Weinstein’s accusers. He worked with RFK Jr. to try to silence bloggers for their criticism. He was also on Theranos’ board and was part of that company’s campaign to punish whistleblowers.

Indeed, Boies’s string of very icky connections and practices has resulted in many lawyers leaving his firm to avoid the association.

So, I’m sorry, but in general, it’s difficult to believe that a lawsuit like this is anything more than a blatant publicity and money grab when David Boies is involved. He doesn’t exactly have a track record of supporting the little guy.

And looking at the actual complaint does little to dispel that first impression. I’m not going to go through all of this again, but we’ve spent the past few years debunking the false claim that social media is inherently harmful to children. The research simply does not support this claim at all.

Yes, there is some evidence that kids are facing a mental health crisis. However, the actual research from actual experts suggests it’s a combination of factors causing it, none of which really appear to be social media. Part of it may be the lack of spaces for kids to be kids. Part of it may be more awareness of mental health issues, and new guidelines encouraging doctors to look for and report such issues. Part of it may be the fucking times we live in.

Blaming social media is not supported by the data. It’s the coward’s way out. It’s old people screaming at the clouds about the kids these days, without wanting to put in the time and resources to solve actual problems.

It’s totally understandable that parents with children in crisis are concerned. They should be! It’s horrible. But misdiagnosing the problem doesn’t help. It just placates adults without solving the real underlying issues.

But this entire lawsuit is based on this false premise, with some other misunderstandings sprinkled in along the way. While the desire to protect kids is admirable, this misguided lawsuit will only make it harder to address the real issues affecting young people’s mental health.

The lawsuit is bad. It’s laughably, sanctionably bad. It starts out with the typical nonsense moral panic comparing social media to actually addictive substances.

This country universally bans minor access to other addictive products, like tobacco and alcohol, because of the physical and psychological damage such products can inflict. Social media is no different, and Meta’s own documents prove that it knows its products harm children. Nonetheless, Meta has done nothing to improve its social media products or limit their access to young users.

First of all, no, the country does not “universally ban” minor access to all addictive products. Sugar is also somewhat addictive, and we do not ban it. But, more importantly, social media is not a substance. It’s speech. And we don’t ban access to speech. It’s an immediate tell: any time someone compares social media to actual poisons and toxins, you know they’re full of shit.

Second, the “documentation” that everyone uses to claim that Meta “knows its products harm children” consists of the studies produced by an internal research team that was trying to make the products safer and better for kids.

But because the media (and grandstanding fools like Boies) falsely portray that research as “oh, they knew about it!”, they are guaranteeing that no internet company will ever study this stuff again. The reason to study it was to try to minimize any harm. But the fact that such research leads to ridiculously misleading headlines, and now lawsuits, means that the safest thing for companies to do is to never try to fix things.

Much of the rest of this is just speculative nonsense about how features like “likes” and notifications are somehow inherently damaging to kids based on feels.

Meta is aware that the developing brains of young users are particularly vulnerable to certain forms of manipulation, and it affirmatively chooses to exploit those vulnerabilities through targeted features such as recommendation algorithms; social comparison functions such as “Likes,” “Live,” and “Reels”; audiovisual and haptic alerts (that recall young users to Instagram, even while at school and late in the night); visual filter features known to promote young users’ body dysmorphia; and content-presentation formats (such as infinite scroll) designed to discourage young users’ attempts to self-regulate and disengage from Instagram.

It amazes me how many of these discussions focus on “infinite scroll” as if it is obviously evil. I’ve yet to see any data that supports that claim. It’s just taken on faith. And, of course, the underlying issue with “infinite scroll” is not the scroll, but the content. If there were no desirable content, no amount of “infinite scroll” would keep people on any platform.
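To make that concrete, here’s a minimal, hypothetical sketch of how an infinite scroll feature is typically built. The /api/feed endpoint and the response shape are invented for illustration; they are not any platform’s actual API. The point to notice is that nothing in the mechanism knows or cares what the content is; it just fetches the next page of whatever the feed holds:

```typescript
// Hypothetical sketch: a content-agnostic infinite scroll loader.
// The endpoint (/api/feed) and page shape are invented for illustration.
type FeedPage = { items: string[]; nextCursor: string | null };

let cursor: string | null = ""; // "" = start of feed; null = no more pages
let loading = false;

async function loadNextPage(container: HTMLElement): Promise<void> {
  if (loading || cursor === null) return; // avoid double-fetches; stop at end
  loading = true;
  const res = await fetch(`/api/feed?cursor=${encodeURIComponent(cursor)}`);
  const page: FeedPage = await res.json();
  for (const item of page.items) {
    const el = document.createElement("article");
    el.textContent = item; // could be Shakespeare, algebra problems, anything
    container.appendChild(el);
  }
  cursor = page.nextCursor;
  loading = false;
}

// Fetch the next page whenever the user scrolls near the bottom of the page.
const feed = document.getElementById("feed")!;
window.addEventListener("scroll", () => {
  const nearBottom =
    window.innerHeight + window.scrollY >= document.body.offsetHeight - 200;
  if (nearBottom) void loadNextPage(feed);
});
```

The loader never inspects the items it renders; whether anyone keeps scrolling depends entirely on whether the content itself is worth scrolling for.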

So what they’re really complaining about is “this content is too desirable.”

And that’s not against the law.

Research shows that young people’s use of Meta’s products is associated with depression, anxiety, insomnia, interference with education and daily life, and other negative outcomes. Indeed, Meta’s own internal research demonstrates that use of Instagram results in such harms, and yet it has done nothing to lessen those harms and has failed to issue any meaningful warnings about its products or limit youth access. Instead, Meta has encouraged parents to allow their children to use Meta’s products, publicly contending that banning children from Instagram will cause “social ostracization.”

Again, this is false and misleading. Note that they say “associated” with those things, because no study has shown any causal connection. The closest they’ve come is showing that those who are already dealing with depression and anxiety may choose to use social media more often. And that is an issue, and one that should be dealt with. But insisting that social media is inherently harmful to kids won’t help. The actual studies show that for most kids, it’s either neutral or helpful.

Supplying harmful products to children is unlawful in every jurisdiction in this country, under both state and federal law and basic principles of products liability. And yet, that is what Meta does every hour of every day of every year

This is nonsense. It’s not the product that’s harmful. It’s that there’s a moral panic, fueled by boomers like Boies who don’t understand modern technology and want to blame Facebook for kids not liking them. Over and over again, this issue has been studied, and the research shows no inherent harm from social media. Claiming otherwise is what could do real harm to children: telling them that the thing they rely on every day to socialize with friends and find information is somehow evil and must be stopped.

Indeed, actual researchers have found that the real crisis for teens these days is the lack of social spaces where kids can be kids. Removing social media from those kids would only make that problem worse.

So, instead, we have a lawsuit backed by some of the most famous lawyers on the planet, pushing a nonsense, conspiracy-theory-laden moral panic. They argue that because kids like Instagram, Meta must be punished.

There’s a lot more in the actual lawsuit, but it only gets dumber.

If this lawsuit succeeds, it will make basically any popular app that kids like fair game. That is a recipe for disaster. We will see tons of lawsuits, and apps aggressively blocking kids from using their services, cutting off the many kids who would find those services useful and not problematic. It will also cut kids off from ways of communicating with family and friends, as well as from researching information and learning about the world.

Filed Under: addiction, class action, david boies, hazardous materials, infinite scroll, moral panic, protect the children, toxins
Companies: instagram, meta

Rand Paul Is Right: Censoring The Internet Doesn’t Protect Kids

from the a-good,-clear-explanation dept

Last month, we shared the details of a really good “Dear Colleague” letter that Senator Rand Paul sent around urging other Senators not to vote for KOSA. While the letter did not work and the Senate overwhelmingly approved KOSA (only to now have it stuck in the House), Paul has now expanded upon that letter in an article at Reason.

It’s well worth the read, though the title makes the point clear: Censoring the Internet Won’t Protect Kids.

It starts out by pointing out how much good the internet can be for families:

Today’s children live in a world far different from the one I grew up in and I’m the first in line to tell kids to go outside and “touch grass.”

With the internet, today’s children have the world at their fingertips. That can be a good thing—just about any question can be answered by finding a scholarly article or how-to video with a simple search.

While doctors’ and therapists’ offices close at night and on weekends, support groups are available 24 hours a day, 7 days a week, for people who share similar concerns or have had the same health problems. People can connect, share information, and help each other more easily than ever before. That is the beauty of technological progress.

He correctly admits that the internet can also be misused, and that not all of it is appropriate for kids, but that’s no reason to overreact:

It is perhaps understandable that those in the Senate might seek a government solution to protect children from any harms that may result from spending too much time on the internet. But before we impose a drastic, first-of-its-kind legal duty on online platforms, we should ensure that the positive aspects of the internet are preserved. That means we have to ensure that First Amendment rights are protected and that these platforms are provided with clear rules so that they can comply with the law.

He points out that the law empowers the FTC to police content that could impact the mental health of children, but does not clearly define mental health disorders, and those definitions could change drastically with no input from Congress.

What he doesn’t mention is that we’re living in a time when some are trying to classify normal behavior as a mental health disorder, and thus this law could be weaponized.

From there, he talks about the “duty of care.” That’s a key part of both KOSA and other similar bills: it says that websites have a “duty of care” to make efforts to keep their sites from causing various harms. As we’ve explained for the better part of a decade, a “duty of care” turns into a demand for censorship, as censorship is the only way for companies to avoid costly litigation over whether or not they were careful enough.

Just last week, I got into a debate with a KOSA supporter on social media. They insisted that they’re not talking about content, but just about design features like “infinite scroll.” When asked about what kind of things they’re trying to solve for, I was told “eating disorders.” I pointed out that “infinite scroll” doesn’t lead to eating disorders. They’re clearly targeting the underlying content (and even that is way more complex than KOSA supporters realize).

Senator Paul makes a similar point in the other direction. Things like “infinite scroll” aren’t harmful if the underlying content isn’t harmful:

For example, if an online service uses infinite scrolling to promote Shakespeare’s works, or algebra problems, or the history of the Roman Empire, would any lawmaker consider that harmful?

I doubt it. And that is because website design does not cause harm. It is content, not design, that this bill will regulate.

As for stopping “anxiety,” Paul makes the very important point that there are legitimate and important reasons why kids may feel some anxiety today, and KOSA shouldn’t stop that information from being shared:

Last year, Harvard Medical School’s magazine published a story entitled “Climate Anxiety: The Existential Threat Posed by Climate Change is Deeply Troubling to Many Young People.” That article mentioned that among a “cohort of more than 10,000 people between the ages of 16 and 25, 60 percent described themselves as very worried about the climate and nearly half said the anxiety affects their daily functioning.”

The world’s most well-known climate activist, Greta Thunberg, famously suffers from climate anxiety. Should platforms stop her from seeing climate-related content because of that?

Under this bill, Greta Thunberg would have been considered a minor and she could have been deprived from engaging online in the debates that made her famous.

Anxiety and eating disorders are two of the undefined harms that this bill expects internet platforms to prevent and mitigate. Are those sites going to allow discussion and debate about the climate? Are they even going to allow discussion about a person’s story overcoming an eating disorder? No. Instead, they are going to censor themselves, and users, rather than risk liability.

He also points out — as he did in his original letter — that the KOSA requirements to block certain kinds of ads make no sense in a world in which kids see those same ads elsewhere:

Those are not the only deficiencies of this bill. The bill seeks to protect minors from beer and gambling ads on certain online platforms, such as Facebook or Hulu. But if those same minors watch the Super Bowl or the PGA tour on TV, they would see those exact same ads.

Does that make any sense? Should we prevent online platforms from showing kids the same content they can and do see on TV every day? Should sports viewership be effectively relegated to the pre-internet age?

Even though I’ve quoted a bunch here, there’s way more in the article. It is, by far, one of the best explanations of the problems of KOSA and of the many other bills that use false claims of “regulating design” as an attempt to “protect the kids.” He also talks about the harms of age verification, how the bill will harm youth activism, and how the structure of the bill will create strong incentives for websites to pull down all sorts of controversial content.

There is evidence that kids face greater mental health challenges today than in the past. Some studies suggest this is more because of society’s openness to discussing and diagnosing mental health challenges. But there remains no compelling evidence that the internet and social media are causing it. Even worse, as Paul’s article makes abundantly clear, there is nothing out there suggesting that censoring the internet will magically fix those problems. Yet, that’s what KOSA and many other bills are designed to do.

Filed Under: 1st amendment, duty of care, free speech, kosa, mental health, protect the children, rand paul, teens

Court Sees Through California’s ‘Protect The Children’ Ruse, Strikes Down Kids Code

from the gee,-who-could-have-predicted dept

Friday morning gave us a nice victory for free speech in the 9th Circuit, where the appeals court panel affirmed most of the district court’s ruling finding California’s “Age Appropriate Design Code” unconstitutional as it regulated speech.

There’s a fair bit of background here that’s worth going over, so bear with me. California’s Age Appropriate Design Code advanced through the California legislature somewhat quietly, with little opposition. Many of the bigger companies, like Meta and Google, were said to support it, mainly because they knew they could easily comply with their buildings full of lawyers, whereas smaller competitors would be screwed.

Indeed, for a period of time it felt like only Professor Eric Goldman and I were screaming about the problems with the law. The law was drafted in part by a British Baroness, a Hollywood movie director who fell hard for the moral panic that the internet and mobile phones are obviously evil for kids. Despite the lack of actual evidence supporting this, she has been pushing for laws in the UK and America to suppress speech she finds harmful to kids.

In the US, some of us pointed out how this violates the First Amendment. I also pointed out that the law is literally impossible to comply with for smaller sites like Techdirt.

The Baroness and the California legislators (who seem oddly deferential to her) tried to get around the obvious First Amendment issues by insisting that the bill was about conduct and design and not about speech. But as we pointed out, that was obviously a smokescreen. The only way to truly comply with the law was to suppress speech that politicians might later deem harmful to children.

California Governor Gavin Newsom eagerly signed the bill into law, wanting to get some headlines about how he was “protecting the children.” When NetChoice challenged the law, Newsom sent them a very threatening letter, demanding they drop the lawsuit. Thankfully, they did not, and the court saw through the ruse and found the entire bill unconstitutional for the exact reasons we had warned the California government about.

The judge recognized that the bill required the removal of speech, despite California’s claim that it was about conduct and privacy. California (of course) appealed, and now the 9th Circuit has mostly (though not entirely) agreed with the district court.

The real wildcard in all of this was the Supreme Court’s decision last month in what is now called the Moody case, which also involved NetChoice challenging Florida’s and Texas’ social media laws. The Supreme Court said that the cases should be litigated differently as a “facial challenge” rather than an “as-applied challenge” to the law. And it seems that decision is shaking up a bunch of these cases.

But here, the 9th Circuit interpreted it to mean that it could send part of the case back down to the lower court to do a more thorough analysis on some parts of the AADC that weren’t as clearly discussed or considered. In a “facial challenge,” the courts are supposed to consider all aspects of the law, and whether or not they all violate the Constitution, or if some of them are salvageable.

On the key point, though, the 9th Circuit panel rightly found that the AADC violates the First Amendment. Because no matter how much California claims that it’s about conduct, design, or privacy, everyone knows it’s really about regulating speech.

Specifically, they call out the DPIA requirement. This is a major portion of the law, which requires certain online businesses to create and file a “Data Protection Impact Assessment” with the California Attorney General. Part of that DPIA is that you have to explain how you plan to “mitigate the risk” that “potentially harmful content” will reach children (defined as anyone from age 0 to 18).

And we’d have to do that for every “feature” on the website. Do I think that a high school student might read Techdirt’s comments and come across something the AG finds harmful? I need to first explain our plans to “mitigate” that risk. That sure sounds like a push for censorship.

And the Court agrees this is a problem. First, it’s a problem because of the compelled speech part of it:

We agree with NetChoice that the DPIA report requirement, codified at §§ 1798.99.31(a)(1)–(2) of the California Civil Code, triggers review under the First Amendment. First, the DPIA report requirement clearly compels speech by requiring covered businesses to opine on potential harm to children. It is well-established that the First Amendment protects “the right to refrain from speaking at all.”

California argued that because the DPIA reports are not public, it’s not compelled speech, but the Court (rightly) says that’s… not a thing:

The State makes much of the fact that the DPIA reports are not public documents and retain their confidential and privileged status even after being disclosed to the State, but the State provides no authority to explain why that fact would render the First Amendment wholly inapplicable to the requirement that businesses create them in the first place. On the contrary, the Supreme Court has recognized the First Amendment may apply even when the compelled speech need only be disclosed to the government. See Ams. for Prosperity Found. v. Bonta, 594 U.S. 595, 616 (2021). Accordingly, the district court did not err in concluding that the DPIA report requirement triggers First Amendment scrutiny because it compels protected speech.

More importantly, though, the Court recognizes that the entire underlying purpose of the DPIA system is to encourage websites to remove First Amendment-protected content:

Second, the DPIA report requirement invites First Amendment scrutiny because it deputizes covered businesses into serving as censors for the State. The Supreme Court has previously applied First Amendment scrutiny to laws that deputize private actors into determining whether material is suitable for kids. See Interstate Cir., Inc. v. City of Dallas, 390 U.S. 676, 678, 684 (1968) (recognizing that a film exhibitor’s First Amendment rights were implicated by a law requiring it to inform the government whether films were “suitable” for children). Moreover, the Supreme Court recently affirmed “that laws curtailing [] editorial choices [by online platforms] must meet the First Amendment’s requirements.” Moody, 144 S. Ct. at 2393.

The state’s argument that this analysis is unrelated to the underlying content is easily dismissed:

At oral argument, the State suggested companies could analyze the risk that children would be exposed to harmful or potentially harmful material without opining on what material is potentially harmful to children. However, a business cannot assess the likelihood that a child will be exposed to harmful or potentially harmful materials on its platform without first determining what constitutes harmful or potentially harmful material. To take the State’s own example, data profiling may cause a student who conducts research for a school project about eating disorders to see additional content about eating disorders. Unless the business assesses whether that additional content is “harmful or potentially harmful” to children (and thus opines on what sort of eating disorder content is harmful), it cannot determine whether that additional content poses a “risk of material detriment to children” under the CAADCA. Nor can a business take steps to “mitigate” the risk that children will view harmful or potentially harmful content if it has not identified what content should be blocked.

Accordingly, the district court was correct to conclude that the CAADCA’s DPIA report requirement regulates the speech of covered businesses and thus triggers review under the First Amendment.

I’ll note that this is an issue that is coming up in lots of other laws as well. For example, KOSA has defenders who insist that it is only focused on design, and not content. But at the same time, it talks about preventing harms around eating disorders, which is fundamentally a content issue, not a design issue.

The Court says that the DPIA requirement triggers strict scrutiny. The district court ruling had looked at it under intermediate scrutiny (a lower bar), found that it didn’t pass that bar, and said even if strict scrutiny is appropriate, it wouldn’t pass since it couldn’t even meet the lower bar. The Appeals court basically says we can jump straight to strict scrutiny:

Accordingly, the court assumed for the purposes of the preliminary injunction “that only the lesser standard of intermediate scrutiny for commercial speech applies” because the outcome of the analysis would be the same under both intermediate commercial speech scrutiny and strict scrutiny. Id. at 947–48. While we understand the district court’s caution against prejudicing the merits of the case at the preliminary injunction stage, there is no question that strict scrutiny, as opposed to mere commercial speech scrutiny, governs our review of the DPIA report requirement.

And, of course, the DPIA requirement fails strict scrutiny in part because it’s obviously not the least speech restrictive means of accomplishing its goals:

The State could have easily employed less restrictive means to accomplish its protective goals, such as by (1) incentivizing companies to offer voluntary content filters or application blockers, (2) educating children and parents on the importance of using such tools, and (3) relying on existing criminal laws that prohibit related unlawful conduct.

In this section, the court also responds to the overhyped fears that finding the DPIAs unconstitutional here would mean that they are similarly unconstitutional in other laws, such as California’s privacy law. But the court says “um, guys, one of these is about speech, and one is not.”

Tellingly, iLit compares the CAADCA’s DPIA report requirement with a supposedly “similar DPIA requirement” found in the CCPA, and proceeds to argue that the district court’s striking down of the DPIA report requirement in the CAADCA necessarily threatens the same requirement in the CCPA. But a plain reading of the relevant provisions of both laws reveals that they are not the same; indeed, they are vastly different in kind.

Under the CCPA, businesses that buy, receive, sell, or share the personal information of 10,000,000 or more consumers in a calendar year are required to disclose various metrics, including but not limited to the number of requests to delete, to correct, and to know consumers’ personal information, as well as the number of requests from consumers to opt out of the sale and sharing of their information. 11 Cal. Code Regs. tit. 11, § 7102(a); see Cal Civ. Code § 1798.185(a)(15)(B) (requiring businesses to conduct regular risk assessments regarding how they process “sensitive personal information”). That obligation to collect, retain, and disclose purely factual information about the number of privacy-related requests is a far cry from the CAADCA’s vague and onerous requirement that covered businesses opine on whether their services risk “material detriment to children” with a particular focus on whether they may result in children witnessing harmful or potentially harmful content online. A DPIA report requirement that compels businesses to measure and disclose to the government certain types of risks potentially created by their services might not create a problem. The problem here is that the risk that businesses must measure and disclose to the government is the risk that children will be exposed to disfavored speech online.

Then, the 9th Circuit basically gives up on the other parts of the AADC. The court effectively says that since the briefing was so focused on the DPIA part of the law, and now (thanks to the Moody ruling) a facial challenge requires a full exploration of all aspects of the law, the rest should be sent back to the lower court:

As in Moody, the record needs further development to allow the district court to determine “the full range of activities the law[] cover[s].” Moody, 144 S. Ct. at 2397. But even for the remaining provision that is likely to trigger First Amendment scrutiny in every application because the plain language of the provision compels speech by covered businesses, see Cal. Civ. Code §§ 1798.99.31(a)(7), we cannot say, on this record, that a substantial majority of its applications are likely to fail First Amendment scrutiny.

For example, the Court notes that there's a part of the law dealing with "dark patterns," but there's not enough information to know whether or not that could impact speech (spoiler alert: it absolutely can and will).

Still, the main news here is this: the law is still not going into effect. The Court recognizes that the DPIA part of the law is pretty clearly an unconstitutional violation of the First Amendment (just as some of us warned Newsom and the California legislature).

Maybe California should pay attention next time (he says sarcastically as a bunch of new bad bills are about to make their way to Newsom’s desk).

Filed Under: 9th circuit, aadc, ab 2273, age appropriate design code, california, dpia, gavin newsom, protect the children, rob bonta
Companies: netchoice

The Many Reasons Why NCMEC’s Board Is Failing Its Mission, From A NCMEC Insider

from the blowing-the-whistle dept

Yesterday we posted our latest podcast, with guest Don McGowan, former board member at NCMEC (the National Center on Missing and Exploited Children) and former general counsel or chief legal officer at Bungie and the Pokemon Company (where he would sometimes disagree with our coverage). In the podcast, he goes into great detail about why he left the NCMEC board, and why he felt the board had become rotten, captured by interests not aligned with the underlying mission of NCMEC, and more focused on making it look like they’re protecting kids than actually protecting kids.

Multiple people reached out to me last night after listening to it, noting that McGowan’s whistleblowing here is both explosive and extremely important for more people to know about. NCMEC is an important organization, and the work that it does is fundamental to actually helping to protect children. But its board has apparently been captured by extremists who support political positions and ideologies at odds with that mission.

Therefore, after receiving a few requests for a transcript, I put one together and have highlighted some of the key points.

The whole thing is incredibly damning and worth either listening to or, now, reading:

Mike Masnick:
Hello and welcome to the Techdirt Podcast. I’m Mike Masnick. A few months ago on the podcast, we had Shelby Grossman and Riana Pfefferkorn from Stanford talking about their really incredible and detailed report on the CyberTipline, its opportunities and its challenges. As we noted in talking about that, it really highlighted both some of how important the CyberTipline is, but also how there were a bunch of challenges not necessarily because of the CyberTipline itself, or NCMEC, or anyone in the process, but just the basic realities of how the CyberTipline works, how the Fourth Amendment works and laws around that. The CyberTipline, of course, is run by NCMEC, the National Center for Missing and Exploited Children, and they do some really great work with the CyberTipline being one example of a few.

But over the years, I’ve had a few moments where I’ve grown somewhat frustrated by some aspects of NCMEC, including and maybe especially around its advocacy, in particular on some bills that I think were really problematic and actually, I think, put people in danger. For example, NCMEC advocated vigorously on behalf of FOSTA, which was a very problematic bill that became law and which I think has been a complete disaster since then, putting people’s lives at risk.

There are reports suggesting that many people have died because of this law. As far as I can tell, NCMEC has never commented on what a failure FOSTA has been and how it almost certainly did real harm to some of the people that NCMEC claims to want to protect. Similarly, NCMEC has advocated on behalf of KOSA, the Kids Online Safety Act that we’ve discussed many times and how it put many kids at risk, especially LGBTQ kids, by the nature of the way that KOSA is written. I’ve long wished that NCMEC would just focus on the actual good work that it does in the world rather than pushing for dodgy legislation.

So it caught my attention recently when Don McGowan wrote a thread on Blue Sky about quitting the NCMEC board. McGowan is a well-known lawyer who was most recently the general counsel at the video game company Bungie, and before that at the Pokemon company, and has also worked at Microsoft over the years. In his thread, he wrote about leaving NCMEC’s board for a variety of reasons regarding both the advocacy that the organization does and also some of the advocacy that it refuses to do, such as its refusal to come out against Project 2025 and its plan for opening up child labor laws to enable more kids to take dangerous jobs.

He also noted that in all the media interviews he’s done since leaving Bungie, few have asked him about this. So that struck me as something of a challenge to have him come on and talk about exactly this. So Don, welcome to the podcast.

Don McGowan
Thanks, Mike.

Mike Masnick
So I wanted to start out with the baseline of making it clear that I think both of us agree that NCMEC does some really good important work that does in fact save lives. So this is not a trashing of…

Don McGowan
That’s incredibly correct. I do my best when I go off about NCMEC to try and draw a bifurcation between the organization and its staff, and the board. My off-going is against its board, which I think has been entirely captured by MAGA positions, and which uses the organization to make sure that no criticism will be drawn to those positions in the ways that it can, and to not take action. It’s not to say bad things about the CyberTipline, or any of the Code Adam work that the organization does, or any of the other great stuff that it does to help actual kids at actual risk. And if this was a video podcast, you folks listening to it would see, I am drinking from my NCMEC mug, or my NCMEC tumbler that was given to me by NCMEC for my time as a board member.

Mike Masnick
And how long were you on the board for?

Don McGowan
I was on the board for, well, I had a little hiatus in the middle because my first few years were on the board as a rep of Pokemon. And then after that, when I stopped being at Pokemon, I stopped being on the board for a few months. And then I went back on as just a regular civilian board member for about three more years. So my total years there were seven. I started my association with NCMEC during Pokémon Go.

Mike Masnick
That makes sense. I was looking at it, and it’s a fairly large board. And so how is the board constructed?

Don McGowan
In the charity world, there are two types of boards: working boards and fundraising boards. The NCMEC board is more of a fundraising board than a working board. As for how one gets on it, I’m not on actual over-the-air radio, so I can use the technical term: it’s got a lot of people that are the usual DC-area cop fuckers. And a lot of people who want to be law enforcement adjacent. And some people who are there because their organizations have a relationship with NCMEC, like I was when I was at Pokemon. Although somewhat amusingly, Pokemon didn’t want me to talk about that association publicly, which will be a story in my upcoming book.

Mike Masnick
Wow, okay, okay.

Don McGowan
Yeah, I’m writing a book about my Pokemon years,

Mike Masnick
Interesting, very interesting. That will be something to look forward to. So let’s talk about the sort of advocacy that NCMEC has done. And I think, in my experience, as I mentioned in the intro, it really came to my attention when NCMEC came up very strongly in favor of FOSTA…

Don McGowan
I want to speak to that for a second, I’m sorry to interrupt you. You mentioned FOSTA at the jump. I was involved in the NCMEC work on FOSTA. I barely remember any of it because it was long ago, but I was involved. And I say this to say, feel free to take shots at it because I wouldn’t want you to get to the end and be like, ‘oh shit, I didn’t know, and I took shots at him right to his face.’ Nope, do it.

Mike Masnick
Okay. Okay. Excellent.

Don McGowan
I will spend my life expiating the sins of what I did in my past, and that’s a big one.

Mike Masnick
Okay, so it struck me as surprising, right? I mean, I had been aware of NCMEC. And in fact, at some point, a long time ago, I’d spoken to a board member at NCMEC. In the early 2000s, I’d had a conversation with someone who was sort of explaining how NCMEC was functioning. And that person had indicated to me some dysfunction, but I hadn’t seen them really engage as much on the policy side outside of things directly related to NCMEC. Like I understood advocacy around things related to the CyberTipline. And, there was, as we mentioned, this report that Stanford did earlier this year, which recommended some legislative changes to help the CyberTipline, one of which was actually voted on and signed by the President a few weeks after that report came out.

That kind of advocacy, I totally get and make sense. It’s like, ‘how do we make the CyberTipline more effective, get around some of these problems that were discussed…’

Don McGowan
And that was a very good report. As somebody who knows how the sausage gets made, I read that report and I was like, damn, these folks, they did well.

Mike Masnick
Yeah. They put in a lot of work. I know they spent time at NCMEC for a few days and were watching over the shoulders of people working on the CyberTipline, understanding all of that, talking to people on all sides, all the different stakeholders. So that was great. And that kind of advocacy I get.

What surprised me specifically when the FOSTA situation came out was having, I think at the time it was NCMEC’s general counsel, go and testify before Congress that FOSTA was like this perfect solution and necessary and really, really helpful. And I felt personally that kind of misrepresented the law.

And I was kind of wondering why NCMEC, given the position it’s in, which is that it is a private nonprofit, but it is the only organization that is granted by law to be able to process child sexual abuse material as part of the CyberTipline, which leads to some people and some courts occasionally giving it quasi-governmental status. But it’s in this sort of unique position. I thought it was odd that it would then go out and publicly advocate for a law that seemed slightly tangential to its overall mission and what it was working on. And then that it was taking such an extreme position on it, one that went against what a whole bunch of other civil society folks were saying as they raised concerns about this law. So I know that you said that was a long time ago and you might not remember the specifics…

Don McGowan
I’ll go back in. I’m hitting the memory banks as you were talking, and I’ve got some details.

Mike Masnick
So I’m curious if there was anything that you were aware of at the time that sort of led NCMEC to decide that they were going to go public and advocate for a bill like FOSTA?

Don McGowan
I’ll come at this a little bit obliquely. So, NCMEC is, as an organization, driven by its board. And aspects of NCMEC’s board are somewhat difficult to unpack unless you know the personalities of the humans sitting in the room and or if you’ve been in the room. And that’s always a shitty thing to say because you don’t get to be like, ‘you don’t understand because you’re not there.’ But there’s a little bit of it, except I was there, so I can tell you. How this stuff came about is: go back to what FOSTA was supposed to be and pretend you don’t understand what it turned into.

Mike Masnick
Okay.

Don McGowan
Okay, now remember, at the time it was a bill to cut down on human trafficking for the sex trade. And if it was that, I mean, one, protecting children is never a vice in American politics, and two, if it was that, that would have been a great thing to support.

Mike Masnick
Yes.

Don McGowan
And so you had people going, speaking up in support of it, who were speaking from the perspective of what they thought it would be. Now you and a lot of the civil society groups that spoke to it understood the actual mechanics of the law and what it would… You had a little bit of seeing into the future that you could do. You’re a little Nostradamus sometimes, Mike. A lot of us try to be, some of us succeed, a lot of us don’t.

I remember, you know, like that was one of the things with this bill: especially at that time, and to a certain degree even today, NCMEC has no technical expertise in the building. They have a relationship with Thorn, which is Ashton Kutcher’s child protection charity. And Thorn does a lot of their technical work and carries the technical load in that space in a way that NCMEC’s just not set up to. And I chaired their tech committee for a few years, right? And so I actually co-chaired it with a guy who was a marketing manager at Charter. And he considers himself the tech brains of NCMEC, and he’s a marketing manager for an ISP.

So there was a guy in there, a guy on the board who ended up no longer being on the board, who was advocating for this geolocation app to help, you know, like you’re walking down the street and it’ll ping your phone and say, “a child was abducted here.” And he thought this was such a fantastic idea, because it’d be great for awareness. I’m like, why would anybody put this on their phone? This guy styled himself as the technical expert, right? So think about that. A guy who thinks that app is the greatest idea ever and should be the technical focus of the organization is out there trying to set the guidelines.

This was a guy who we… there were a few of us who actually had a bingo game during board meetings of at what point is Lou going to bring up porn? And then we would work the word bingo into our next thing we said, and that would be how you would win bingo. If Lou mentioned porn at a time that you were ready to talk, you’d work it in and you’d win. If you’d picked that slot in the Squares game, you had first right to claim it for your victory, right? So we’re dealing with that level of sort of, you could write a script by this guy’s focus on this issue, fairly tangential to NCMEC’s mission. And so you had people setting that as a priority.

And so obviously, FOSTA was red meat to them.

Mike Masnick
Right. I mean, this is the problem with so many bills, right? You position them in one way. And if you don’t understand the mechanisms of how they’re actually gonna work, the bill sounds good. And even people today, the same with KOSA, right? You look at it and on its face, it sounds good. People want kids to be safe online. People want to stop sex trafficking. So these bills sound good. I guess I had assumed, apparently incorrectly, that NCMEC would have more sophistication than that regarding sort of the nature of these bills.

Don McGowan
You’d hope. There were traditionally a couple of board members from Facebook. And they were fairly displeased when NCMEC took that public position because that sort of happened without a lot of us knowing what was going on.

Mike Masnick
Interesting. The other thing that I had seen and I had written about this at the time and, maybe it was a little bit conspiracy theoryish on my part. But I did notice that the person who was chair of the board at that time was a lobbyist who happened to be lobbying for all of the major motion picture studios in Hollywood.

Don McGowan
Because that was Manus’s year, right?

Mike Masnick
Yeah, it was Manus. And that was coming out of what had been revealed… I mean, all of these things connect in such weird ways, but it had been revealed through the Sony Pictures hack years earlier that there had been this Project Goliath plan by some of the major motion picture studios to focus on sex trafficking as a way to pass laws that would undermine Section 230 and thereby harm Google. And so there’s this, you know, corkboard with red strings on it, where you could pull this Hollywood lobbyist connected to NCMEC, pushing for the bill that Hollywood had been talking about a few years earlier as its plan to get back at Google. I don’t know if you…

Don McGowan
Now, I’m going to interrupt, Mike, because speaking of red meat, Mike and I have had some discussions over time. I have a slightly different attitude towards Section 230 than Mike does. I’m not going into that because it’s orthogonal to today’s conversation. But I can tell you, if there was an Always Sunny in Philadelphia murder stringboard going on anywhere, I never saw it. I don’t think that was Manus doing client work on the board. Manus was always very much a two solitudes guy, right? And the streams never crossed.

Mike Masnick
So let’s get to this thread that you wrote on Bluesky. You posted about Project 2025, which…

Don McGowan
Well, yeah, I didn’t specifically mean to be talking about Project 2025. It was more the thing that underlies that section of Project 2025, which is the “let’s let kids do labor” idea.

Mike Masnick
Right. So on the off chance that listeners are unfamiliar with Project 2025, very quickly: it’s the Heritage Foundation’s plan for a new Trump administration. It’s basically a whole bunch of ex-Trump admin people, and they have this whole plan of, these are going to be the policies, these are going to be the people that we’re going to put in place. Many, many of the policies are horrible in all sorts of ways. But specifically, the part that you called out, which is in there and is somewhat shocking, is this idea of changing child labor laws to allow kids more access to dangerous jobs.

Don McGowan
It is to facilitate kids doing work that we thought we were done with in the 1800s.

Mike Masnick
Yes. And specifically, the way it’s framed is really kind of incredible because it says “some young adults show an interest in inherently dangerous jobs,

Don McGowan
The children yearn for the mines…

Mike Masnick
… but current rules forbid many young people, even if their family is running the business.” So, it’s: can you exploit your kids in the mines doing such dangerous jobs? And so they want to say, with parental consent and proper training, young adults should be allowed to work in more dangerous occupations. So you called this out in particular. Do you want to describe what your view was on…

Don McGowan
Sure, so I’ll speak to that. I come at it from a slightly unusual perspective, which I’m not going to go deeply into except to say, as a child, my old man was a miner. He ran mining companies. And so I somewhat, legendarily to my friends at least, was airlifted into northern Canada and left by the side of a lake for two months to help my old man find places to mine.

Mike Masnick
My goodness. Wow. That is quite a background story.

Don McGowan
That’s exactly it. So I got a background story of living off the land in Northern Canada as a 12-year-old. So speaking of dangerous jobs, I feel like I got a perspective. But so I remember, because obviously this Project 2025 thing didn’t come out of nowhere. And I think we’re all aware that there’s been some amount of permissiveness coming into labor laws, especially in what I always euphemistically refer to (because it bugs the crap out of them) as Central America, otherwise known as the middle of the country, and not what we usually refer to as Central America. But so in the middle part of the country, there’s been some relaxation of labor laws. I noticed this while I was still on the NCMEC board and put forward… We had this board platform for board discussions. It’s called Boardable. Anybody who’s worked on a board may know it. And it’s basically a discussion board back and forth. And so I found one of these bills and I was like, hey, shouldn’t we be taking a stand on this?

And somebody came back and said, ‘why?’ And I said, well, it’s right there in the fucking name. National Center for Missing and Exploited Children, to which a different board member, the aforementioned comms manager at Charter Communications, came back and was like, ‘well, there’s no sexual exploitation in this bill.’ I was like, ‘Oh, I didn’t really know that the org’s name was the National Center for Sexually Missing and Sexually Exploited Children. Do we only help families after hurricanes when the hurricane raped the child?’

Mike Masnick
Oh gosh.

Don McGowan
And that got another board member to say, ‘you know, listen, like you’re going off on one of your crusades again,’ which I’ll come back to in a second. ‘But, you need to understand, I want my kids to be able to get a paper route. So it’ll teach kids responsibility.’ And I was like, ‘One, no, you don’t. You’re a corporate executive. And two, we all know this isn’t about paper routes. This is about teenagers working in meatpacking plants. And they’re gonna lose their fingers, like they’re gonna lose their arms, because it’s gonna get cut off in the meat cutting machines. And we should be taking a stand on this. I’m sorry that many of you support a political party that thinks it’s politically expedient to do this. But we should, this is exactly the kind of thing we should care about.’

If we have a policy and advocacy committee, and we did at the time, I don’t know if they do still, but if we have a policy and advocacy committee, this is exactly the type of thing on which the nation’s legislators would look to us for guidance. We should provide it. To which the answer was me getting a call from the chair of the board saying, ‘Hey dude, step off. Be a little more collegial if you would please.’ I was like, ‘all right, fine.’

So at that point, that was when I reached the conclusion we were coming up on the… that NCMEC has three board meetings a year. We’re coming up on the April board meeting of last year. And I sort of put in my head like, okay, let’s see how this board meeting goes. It’ll probably be my last one at this point.

I mentioned a few minutes ago about how people made a reference to me going on one of my crusades. I had a separate issue that I was fairly vocal on, which is, as I would describe it, I think it’s terrible that we have a political party that in this country has decided it’s politically expedient to set aside a group of children, namely trans kids, for state-sponsored political persecution. We should care about this and we should be speaking out about it. I’m sorry some of you don’t like this. I’m sorry trans kids make you feel icky. But this is the kind of thing on which we should be providing moral leadership to the legislators of the country. And we should be saying wrong is wrong.

You know, there was a fair amount of disagreement at that last board meeting, that April board meeting. NCMEC got a grant from the state of Texas, and the grant was subject to return if the money was unused or misused. And I was like, we got to find out what “misused” means. If the state of Texas has decided there is an entire population of children that should not receive any support… and if we use this money and some of it goes to their benefit, they may want their money back. And one of the board members looks over and he goes, ‘Thunk! You know what that is? That’s the sound of you beating a dead horse.’

Mike Masnick
Wow.

Don McGowan
I was like, OK, you know what? In my mind, I said, I just quit. Didn’t speak for the rest of the meeting. Let the meeting end. Said goodbye to everybody. Walked out. Never went back. Flew back to Seattle. Got home, walked in, wrote up a note of resignation to the general counsel of NCMEC. Went to that Boardable software, posted a note saying, ‘This was my last board meeting. There’s going to be a lot of you that are going to be happy to see the back of me. What you may not realize is I’m just as happy to see the back of you.’ Send. Send resignation. Peace out.

Mike Masnick
Wow.

Don McGowan
I mean, NCMEC has data in its stat banks that say some of the kids most at risk in the world are trans kids. And they ignore that data.

One other thing, a thing that underlay my feelings about the child labor issue is, I mean, who are the kids that are gonna get put in the meatpacking plants? Two categories of them. One is kids of undocumented people who’ve been brought in and have to pay off the debt to the coyote that brought them across the border. And the second one, and I hate to say it because we said at the jump, NCMEC does a lot of great work. There’s another group of people in America who does amazing things, and that’s foster parents. But there are some really shitty foster parents out there who get the kids from the foster agencies so they can use them for sex trafficking and so they can use them for labor. Give those kids what I’ll euphemistically call access to a meatpacking plant, and now they can be put to work in what we would traditionally understand to be slavery.

And that’s exactly what I expect to see happen, is those bad foster parents who are there to use these kids as a way to make money for themselves and wreck those kids’ lives. We need the state to protect them, because those kids literally have nobody else. And instead, those state legislatures have decided to make it easier for those kids to be persecuted.

You can probably even hear it in my voice, but if you can’t, I’ll say it. And that’s the kind of thing that makes me just outraged to my core that a group of people who receive 90% of their budget from the United States federal government can’t bring themselves to say slavery is bad.

Mike Masnick
Yeah. That’s astounding. We talked about the states in the center of America. Arkansas in particular, I had written something about this because it was kind of incredible: at the very same time that they were pushing a law to allow kids to work in meatpacking plants, right after there had been some scandals regarding kids getting hurt in meatpacking plants. So the governor there was pushing this law to allow kids to work in meatpacking plants. At that very same time, she was also pushing a law for kids’ internet safety. And so I couldn’t believe that they couldn’t put two and two together. They were saying we have to protect the children, we have to put in place all these laws to protect them from Facebook, but then send them into meatpacking plants to work. And the disconnect…

Don McGowan
Yeah. Protecting children takes on a very specific meaning in these circles.

It’s protect the children from the things we don’t want the children to see and have access to. Right? That was an, and I don’t even like using the word, insight. That was a thing I discovered spending time among them. I now can process how they think. And it gives me, you know how I mentioned earlier that you’re kind of a Nostradamus? It gives me the ability to see around some corners.

Mike Masnick
I sort of understood that there were activists who felt that way and definitely pushed kid safety as an excuse to keep certain information away from kids. I totally understood that. What is kind of shocking to me in this discussion is recognizing that those people are on the NCMEC board.

Don McGowan
That’s the thing, they go somewhere. It’s because a lot of them, the personality trait that goes along with it, is a lot of them are all cop fuckers.

I sit slightly on the left. I’m certainly going to tell you I was to the window to the walls yesterday. Certainly, you’ve mentioned earlier that I use a lot of Bluesky. I actually blew up my Twitter account one day picking a fight with the alt-right. And so I now spend my time on a niche left-focused microblogging site. It’s fun. But the idea of a lot of these folks… like I’m a lot more cop positive than a lot of people I know, because I’ve spent time with the cops that are doing the work at NCMEC, right? Like I have sympathies for those folks because I know what their job is and what they’re going through. I mean, these are cops who are out there trying to save kids from sex trafficking, right? There are liaison officers at NCMEC from most of the three-letter agencies. And you know, until I started this podcast, I had pretty good relationships with them. I hope I still will afterwards…

Mike Masnick
I think that’s sort of an important point that is worth underlining, and often gets lost in these discussions, which is, and we started out the podcast by talking about how there is good work that NCMEC does. And a lot of that is coordinating with law enforcement, doing the actual good things that you want law enforcement to do. This is not a universal condemnation of either law enforcement or NCMEC itself.

But within that, that opens up opportunities for people with sort of problematic viewpoints to abuse that system to their advantage. And that has always been my problem with… like going back to FOSTA. It was presented as, you know, you can present these things in a good way. Like stopping sex trafficking is obviously a good and important and virtuous goal overall.

But when you have people whose real mission, and this was true of many of the backers of FOSTA, was not to stop sex trafficking, but to stop all pornography, all adult content entirely, and as part of that, to end encryption and a bunch of other things. And they were using FOSTA and some of these other laws as a kind of stalking horse to begin that process. And they were not subtle about this. I mean, the National Center, what is it? Oh gosh, what’s NCOSE? It’s, they have a name that sounds like, kind of like NCMEC, the National Center on Sexual Exploitation, I think, which is this nonprofit that, I think they used to be called like the Moral Majority or something like that, who had a very strong belief that there should be no pornography anywhere ever. And they were huge supporters and lobbied very strongly in favor of FOSTA and were quite open.

I mean, somehow I’m on their mailing list. So they send me all their press releases in which they are very explicit that this was Step One in their goal of ending all pornography everywhere. They have a very, very strong view on things, but the fact is that they sort of wrap it around this idea of ‘we’re stopping sex trafficking’ when it’s really this other thing. But I had always sort of mentally separated out that there are groups like that, which you know where they’re coming from and you know are staffed by crazy people. And I had hoped that NCMEC just wouldn’t have been captured by that side of things.

Don McGowan
So I’m going to speak obliquely to that by going to an entirely different topic, but you’ll see why I draw the connection in a minute. So a few years ago, we were under the regime of the host of Celebrity Apprentice, and there was a government policy to put children in cages. One might wonder, where was NCMEC on that?

Mike Masnick
That’s a good question.

Don McGowan
I’m about to give you the answer to that question.

Where NCMEC was on that was there were a collection of us inside who put some fairly significant pressure on the CEO at the time, who was a guy named John Clark, who had come to NCMEC after being the head of the US Marshals, to say, ‘hey, apparently the government is having trouble finding these kids’ families. Isn’t that what we specialize in? Let’s go do that.’

And so John eventually, you know, there were a number of board members at the time. I was not as alone as I was at the end, but there were a number of board members at the time who sort of shared that opinion and asked John like what the hell’s going on? And he finally came back and said, which is true, ‘we can only go where we’re asked.’ Right. Kind of like, you know, vampires and lost boys. We can’t come in the house unless you ask us. We couldn’t go because we weren’t asked. And we were like, well, please go start sending correspondence to the heads of the agencies to tell them: ‘We would like to become involved in this and help you find the families for these kids.’

Almost immediately, there became a board schism on that. And the board schism came from people who, many of them, turned out to be my adversaries in the discussions around where we were on the child labor issue. Their stalking horse for it was, ‘listen, our budget is funded by Congress. Congress is implementing these policies. Let’s not piss them off.’ And so we punched through that. And John Clark finally reached out to DHS and all the various agencies. And their answer was, ‘Yeah, thanks. We’ll ask you if we need you. If we’re interested, we’ll call. Did we call? No. So we don’t want your help. Thanks. If we need you, we’ll reach out.’

So that’s where NCMEC was on that issue: those of us on the board who saw things a little differently than the other folks pushed to try and get involved, but, you know, obviously there was no discussion of ‘let’s take a public stance.’ Because the people who had the votes on the board would have overpowered taking a public stance.

I remember there were some very strong advocates, and some people who are my very good friends on the board stepped off after that because they were like, ‘You know what? I don’t want to be involved here anymore because my advocacy and my money can be used better elsewhere. And my time can be better used than flying into DC for these meetings.’ And so, sort of obliquely, I think that’s the answer to the ‘where were they on a lot of these issues?’ question. Why would they take such a strong stance on FOSTA? Well, because you have board members who were ideologically aligned with the people who were proposing the bills.

And so, what I said earlier about the board member who was obsessed with porn, right? Do we think that guy was ever going to vote against us taking a stance in favor of FOSTA? No. And, you know, I mentioned earlier, one of the things I’m doing now is expiating my sins. In particular, it’s the, did I not see it? Like, did I get played by these motherfuckers? And I think I did. Right? And that’s a terrible thing to realize about yourself: that I got played by people. And, you know, there were some of them that I still like as humans, and I have to sort of wrap my head around that: you know what, I got played by these motherfuckers. Because they had an ulterior motive, but they didn’t know how to achieve it. But what they did know is how to use me to achieve it.

Mike Masnick
Huh. I mean, that’s a little harsh on yourself…

Don McGowan
But it’s true. So, you know, that’s one of the reasons I took such a ‘let me start burning it down’ approach around KOSA. Because, you know what? I’m not gonna get played twice. You know, how did George W. Bush put it? Fool me once, won’t get fooled again.

Mike Masnick
Right. It’s interesting as I’m thinking about it… the argument that because so much of the funding for NCMEC comes from Congress… one sort of throughline in all of this is that NCMEC is unwilling to challenge child exploitation that is effectively blessed by the government, and is only willing to challenge it when it’s someone outside the government that is the problem. And some of that could be tied to the fact that so much of its funding comes from the government.

Don McGowan
I think that’s accurate.

And that to me is almost an argument of, then shut the fuck up about everything.

Mike Masnick
Yeah, yeah. I mean, that’s sort of how I feel…

Don McGowan
If you’re only ever gonna support government policy, shut the fuck up…

Mike Masnick
Yeah, because that leads you to bad places.

Don McGowan
Yeah, don’t try and pretend you’re taking a stance. ‘Yeah, we take stances on issues that protect kids, so long as the government likes them.’ Or more directly, so long as the Republican Party likes them. This is the point where I say out loud, I was around NCMEC for the QAnon years. Those fuckers did nothing for actual child protection. Right? The QAnon people don’t care about children.

What they care about is being able to say they care about children.

And so, you know, they never cared about actual children because the actual children who were at highest risk during those years were the children of undocumented immigrants. And they could not have wanted more to round those kids up, put them in cages, and send them back outside of the United States.

Mike Masnick
Are you suggesting that the board was sort of QAnon captured itself or just afraid to…

Don McGowan
I mean, we didn’t take any public stances against QAnon. And that was the one that made me first start thinking like, huh, what the fuck is going on here? Like, why? We all know these people aren’t actually helping actual kids. Why are we not saying anything about that?

And it’s funny, given what we all think, I’ll tell you, some of the directors most aware of the problem were the ones from Facebook. Given what the internet thinks… It’s funny, I mentioned my Bluesky habit, and you mentioned my Bluesky habit a little while ago. Jess Miers showed up on Bluesky and started to talk about tech issues and immediately got hammered by a collection of scolds who all thought they knew better than she did about the issue that she had spent her career working on, and they chased her away. And so, you know, I sort of watched that and I was like, okay, that’s what happens to the people who are willing to stand up and say, ‘hey, let’s look at this,’ because it had the same name as an issue that these people cared about. And, you know, we all use words based on our experience of them. And not everybody always uses the word in a dictionary definition, et cetera. We are all at risk of that particular problem, I guess is where I’m going.

Mike Masnick
Fair enough. So I think to sort of round out the conversation, is there anything that can be done? Again, noting the good that NCMEC has done and the importance of the CyberTipline? Is there a way to fix NCMEC?

Don McGowan
I think there is. I think the challenge is, as a private nonprofit, it’s always going to have to have a board. And its board members are going to be the people who are attracted to those kinds of boards. And especially when it’s a fundraising board, it’s going to be people with a certain political bent. Not always, but it’s likely to be, especially given how law enforcement adjacent it is. And so I would say what should happen is the organization should be properly captured by the federal government, become an agency of the federal government, and stand as an independent agency of the federal government, similar to the way so many boards, et cetera, are statutorily set up to stand separate from the federal government. That way you don’t end up with the agency worried about, for example, ‘oh crap, we shouldn’t protect kids in meatpacking plants because we might lose our congressional funding.’ Hypothecate the funding, to use the technical term, and then the funding is just there, hypothecated, not subject to annual reauthorization.

Right now, I think that’s one of the biggest issues with NCMEC: it has its annual funding resolution that has to go through. And one of the great sponsors of NCMEC was a person who isn’t always loved on the internet, but Senator John McCain could always be counted upon to sponsor the NCMEC funding resolution. And somebody would sponsor it in the House.

I’ll tell you the other thing. When I was at Pokemon and I was handling government affairs for that company, you’d think the Pokemon name would do it, but there was not a member of Congress on Earth that wouldn’t take a meeting with a member of the board of NCMEC. Right? That organization carries a lot of weight in DC. Democrats right through Republicans, nobody wouldn’t take my meeting. The entire organization is under a halo. I said earlier, I’ve got people that I still like there, even though I have to remember that they weren’t always aligned with the way I think. That organization sits under the halo of John Walsh.

John Walsh, who is, by the way, one, one of the greatest people I know, and two, exactly like that in real life. If you’ve seen him on America’s Most Wanted or any of his other shows, he is exactly like that. My wife and I sometimes look at each other and go, ‘get those dirt bags!’ Because that was the great John Walsh expression: ‘dirt bags!’ That man in my mind is one of the good men on earth. His wife is one of the great Americans. I think segregating out the organization and its mission from the organization and its public persona is the way to rescue and save it.

Mike Masnick
We could go down a rabbit hole, which is not worth it. I just want to note that like, the challenge of making it actually an independent government agency is then, especially for the CyberTipline, you start to run into Fourth Amendment issues over…

Don McGowan
The Fourth Amendment issues are already there. There’s already case law on NCMEC and the Fourth Amendment.

Mike Masnick
Yes, but not in every circuit, so there’s a possibility that it will… but yes, you’re right that they are already dealing with some of that because it has been determined by Neil Gorsuch, in fact, that they’re a wing of the US government…

Don McGowan
And so I’ll say two things to that. One, if you think Supreme Court Justice Neil Gorsuch would say something different, and two, as is the issue for so many private organizations, the fact that it’s not in every circuit doesn’t leave anybody thinking that it’s not going to be. Any circuit court decision, take it from a guy who was general counsel slash chief legal officer for 20 years: any circuit court decision is the law of America.

Because you don’t pray for circuit splits. Because circuit splits are expensive.

Mike Masnick
That is absolutely true.

All right. Well, I will let you go. But this has been a…

Don McGowan
Thanks for giving me the chance to come on, Mike. I really appreciate…

Mike Masnick
Yeah, it’s a really fascinating discussion, a really interesting look into NCMEC. And I hope that this gets sorted out, because I would like the organization to continue to do the good work that it does. And it worries me when they start promoting this nonsense or not protesting against other nonsense.

Don McGowan
This stuff needs to have somebody who is on the inside who’s willing to talk.

Mike Masnick
Yeah. Yeah. And so thank you so much for being willing to step up and to explain it.

Don McGowan
Can I take one last minute of this to do a thing that I would appreciate being able to do?

To the folks out there who were hurt by FOSTA, I don’t have another way to put it, I’m sorry. I will spend my life trying to fix that sin.

Mike Masnick
Well, thank you. Thank you for saying that. Thanks again for doing this and for speaking out and hopefully making more people aware of all this.

Don McGowan
Thanks to you, Mike.

Mike Masnick
And thanks to everyone for listening as well. And we will be back next week with another podcast.

Filed Under: don mcgowan, fosta, kosa, protect the children
Companies: ncmec

Schumer Advances KOSA: Congress’s Latest ‘But Think Of The Children’ Crusade

from the think-of-the-children...-eventually-electing-better-politicians dept

Apparently, the only thing Congress can get together and agree on is giving whoever is President the power to censor speech online. That’s the only conclusion I can come to regarding the widespread support for KOSA (the Kids Online Safety Act), which Senator Chuck Schumer has announced will be coming to the floor for a vote.

Our elected officials have been told again and again why KOSA is a dangerous bill that will enable targeted censorship of protected speech. They continue to push it forward and insist that it would never be abused. And, yes, the “updated” version of KOSA from earlier this year is better than earlier versions of KOSA, but it’s still a censorship bill.

The bill still retains its “duty of care” section, which the FTC can enforce. It requires websites to “exercise reasonable care” in the design of features to avoid harm. But harm remains a risk, often through no fault of any particular platform. We constantly see websites blamed for problematic decisions made by users. But users are always going to make problematic decisions, and under KOSA, whoever is in charge of the FTC can rake a company over the coals, claiming a failure to meet that duty of care.

It seems strange that Republicans, who seem to hate Lina Khan, now want to give her the power to go after Elon Musk’s ExTwitter for failing to properly protect users. But that’s what they’ll do.

On the flip side, why are Democrats giving a potential future Trump FTC the power to go after any website that is too “woke” by enabling LGBTQ content and thus failing its “duty of care” to protect the children?

Like so many powerful would-be censors, they only think about how exciting that censorship power will be in their own hands, and not in the hands of their political opponents.

Schumer is also bringing COPPA 2.0 to the floor. As we explained earlier this year, COPPA 2.0 basically takes the already problematic COPPA and makes it much worse. It might not be as inherently harmful as KOSA, but it’s still pretty harmful.

For one, this is just going to lead to more sites trying to ban teenagers from using their apps entirely, since it raises the age of restrictions from 13 to 16… and that will just mean more teens being taught to lie about their age.

Second, it effectively mandates privacy-destroying age verification by banning targeted ads to kids. But how do you know they’re kids unless you verify their ages? This idea is so short-sighted. The only way to ban “targeted” ads based on collected data is to first… collect all the same data. That seems like a real issue.

In addition, it will change the important “actual knowledge” standard for covered platforms (which is kinda necessary to keep it constitutional) to a “reasonably likely to be used” standard, meaning that even if websites make every effort to keep kids off their platform, all an enforcer needs to do is argue that they haven’t done enough because the platform was “reasonably likely to be used by” kids.

Both of these are “do something” bills. “Here’s a problem, we should do something, this is something.” They are something. They won’t help solve the problems, and are quite likely to make them worse.

But, politicians want the headlines about how they’re “protecting the children,” which is exactly what the big news orgs will falsely repeat. What they should be noting is that these bills are about politicians cynically using children as props to pretend to do something.

Senators Marsha Blackburn (who said quite clearly that she wrote KOSA to “protect children from the transgender”) and Richard Blumenthal (who has made it clear that he’d just as soon kill the internet if it got him headlines) put out an obnoxious, exploitative statement about how this will save the children, when it will actually do tremendous harm to them.

Some questions remain about what will happen on the House side, as Speaker Mike Johnson has said they’ll look over whatever the Senate sends. But the existing House version of KOSA, while somewhat different than the Senate version, is equally problematic.

If you’d like to reach out to your elected officials in Congress about these bills, Fight for the Future has the StopKOSA website that includes a way to send emails. And EFF also has their own action center to contact your elected officials regarding KOSA.

Filed Under: censorship, chuck schumer, congress, coppa 2.0, duty of care, ftc, kosa, marsha blackburn, protect the children, richard blumenthal

Court To Indiana: Age Verification Laws Don’t Override The First Amendment

from the strict-scrutiny dept

We keep pointing out that, contrary to the uninformed opinion of lawmakers across both major parties, laws that require age verification are clearly unconstitutional*.

* Offer not valid in the 5th Circuit.

Such laws have been tossed out everywhere as unconstitutional, except in Texas (and even then, the district court got it right, and only the 5th Circuit is confused). And yet, we hear about another state passing an age verification law basically every week. And this isn’t a partisan/culture war thing, either. Red states, blue states, purple states: doesn’t matter. All seem to be exploring unconstitutional age verification laws.

Indiana came up with one last year, which targeted adult content sites specifically. And, yes, there are perfectly good arguments that kids should not have access to pornographic content. However, the Constitution does not allow for any such restriction to be done in a sloppy manner that is both ineffective at stopping kids and likely to block protected speech. And yet, that’s what every age-gating law does. The key point is that there are other ways to restrict kids’ access to porn, rather than age-gating everything. But they often involve this thing called parenting.

Thus, it’s little surprise that, following a legal challenge by the Free Speech Coalition, Indiana’s law has been put on hold by a court that recognizes the law is very likely unconstitutional.

The court starts out by highlighting that geolocating is an extraordinarily inexact science, which is a problem, given that the law requires adult content sites to determine when visitors are from Indiana and to age verify them.

But there is a problem: a computer’s IP address is not like a return address on an envelope because an IP address is not inherently tied to any location in the real world but consists of a unique string of numbers written by the Internet Service Provider for a large geographic area. (See id. ¶¶ 12–13). This means that when a user connects to a website, the website will only know the user is in a circle with a radius of 60 miles. (Id. ¶ 14). Thus, if a user near Springfield, Massachusetts, were to connect to a website, the user might be appearing to connect from neighboring New York, Connecticut, Rhode Island, New Hampshire, or Vermont. (Id.). And a user from Evansville, Indiana, may appear to be connecting from Illinois or Kentucky. The ability to determine where a user is connecting from is even weaker when using a phone with a large phone carrier such as Verizon with error margins up to 1,420 miles. (Id. ¶¶ 16, 19). Companies specializing in IP address geolocation explain the accuracy of determining someone’s state from their IP address is between 55% and 80%. (Id. ¶ 17). Internet Service Providers also continually change a user’s IP address over the course of the day, which can make a user appear from different states at random.
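To make the court’s point concrete, here’s a minimal sketch in Python of why a radius-based lookup can’t reliably tell you a user’s state. The haversine distance math is standard, but the specific coordinates, the lookup result, and the towns are illustrative assumptions on my part; only the 60-mile error radius comes from the ruling:

from math import radians, sin, cos, asin, sqrt

def miles_between(lat1, lon1, lat2, lon2):
    # Great-circle distance in miles via the haversine formula.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    h = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 3959 * asin(sqrt(h))  # 3959 miles is roughly Earth's radius

# Hypothetical lookup: a user near Evansville, Indiana, reported as a
# point estimate plus the 60-mile error radius the ruling describes.
est_lat, est_lon = 37.97, -87.57
error_radius = 60  # miles

# Out-of-state towns that sit inside that error circle.
for place, (lat, lon) in {
    "Henderson, KY": (37.84, -87.59),
    "Mount Carmel, IL": (38.41, -87.76),
}.items():
    d = miles_between(est_lat, est_lon, lat, lon)
    if d <= error_radius:
        print(f"{place}: {d:.0f} miles away, inside the error circle")

Both towns print, meaning a site seeing that lookup result has no way to know whether its “Indiana” visitor is actually browsing from Kentucky or Illinois.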

Also, users can hide their real IP address in various ways:

Even when the tracking of an IP address is accurate, however, internet users have myriad ways to disguise their IP address to appear as if they are located in another state. (Id. ¶ B (“Website users can appear to be anywhere in the world they would like to be.”)). For example, when a user connects to a proxy server, they can use the proxy server’s IP address instead of their own (somewhat like having a PO box in another state). (Id. ¶ 22). ProxyScrape, a free service, allows users to pretend to be in 129 different countries for no charge. (Id.). Virtual Private Network (“VPN”) technology allows something similar by hiding the user’s IP address to replace it with a fake one from somewhere else.

All these methods are free or cheap and easy to use. (Id. ¶¶ 21–28). Some even allow users to access the dark web with just a download. (Id. ¶ 21). One program, TOR, is specifically designed to be as easy to use as possible to ensure as many people can be as anonymous as possible. (Id.). It is so powerful that it can circumvent Chinese censors.

The reference to “Chinese censors” is a bit weird, but okay, point made: if people don’t want to appear as if they’re from Indiana, they can do so.

The court also realizes that just blocking adult content websites won’t block access to other sources of porn. The ruling probably violates a bunch of proposed laws against content that is “harmful to minors” by telling kids how to find porn:

Other workarounds include torrents, where someone can connect directly to another computer—rather than interacting with a website—to download pornography. (Id. ¶ 29). As before, this is free. (Id.). Minors could also just search terms like “hot sex” on search engines like Bing or Google without verifying their age. (Id. ¶ 32–33). While these engines automatically blur content to start, (Glogoza Decl. ¶¶ 5–6), users can simply click a button turning off “safe search” to reveal pornographic images, (Sonnier Decl. ¶ 32). Or a minor could make use of mixed content websites below the 1/3 mark like Reddit and Facebook

And thus, problem number one with age verification: it’s not going to be even remotely effective for achieving the policy goals being sought here.

With this background, it is easy to see why age verification requirements are ineffective at preventing minors from viewing obscene content. (See id. ¶¶ 14–34 (discussing all the ways minors could bypass age verification requirements)). The Attorney General submits no evidence suggesting that age verification is effective at preventing minors from accessing obscene content; one source submitted by the Attorney General suggests there must be an “investigation” into the effectiveness of preventive methods, “such as age verification tools.

And that matters. Again, even if you agree with the policy goals, you should recognize that putting in place an ineffective regulatory regime that is easily bypassed is not at all helpful, especially given that it might also restrict speech for non-minors.

Unlike the 5th Circuit, this district court in Indiana understands the precedents related to this issue and knows that Ashcroft v. ACLU already dealt with the main issue at play in this case:

In the case most like the one here, the Supreme Court affirmed the preliminary enjoinment of the Child Online Protection Act. See Ashcroft II, 542 U.S. at 660–61. That statute imposed penalties on websites that posted content that was “harmful to minors” for “commercial purposes” unless those websites “requir[ed the] use of a credit card” or “any other reasonable measures that are feasible under available technology” to restrict the prohibited materials to adults. 47 U.S.C. § 231(a)(1). The Supreme Court noted that such a scheme failed to clear the applicable strict scrutiny bar. Ashcroft II, 542 U.S. at 665–66 (applying strict scrutiny test). That was because the regulations were not particularly effective as it was easy for minors to get around the requirements, id. at 667– 68, and failed to consider less restrictive alternatives that would have been equally effective such as filtering and blocking software, id. at 668–69 (discussing filtering and blocking software). All of that is equally true here, which is sufficient to resolve this case against the Attorney General.

Indiana’s Attorney General points to the 5th Circuit ruling that tries to ignore Ashcroft, but the judge here is too smart for that. He knows he’s bound by the Supreme Court, not whatever version of Calvinball the 5th Circuit is playing:

Instead of applying strict scrutiny as directed by the Supreme Court, the Fifth Circuit applied rational basis scrutiny under Ginsberg v. New York, 390 U.S. 629 (1968), even though the Supreme Court explained how Ginsberg was inapplicable to these types of cases in Reno, 521 U.S. at 865–66. The Attorney General argues this court should follow that analysis and apply rational basis scrutiny under Ginsberg.

However, this court is bound by Ashcroft II. See Agostini v. Felton, 521 U.S. 203, 237–38 (1997) (explaining lower courts “should follow the case which directly controls”). To be sure, Ashcroft II involved using credit cards, and Indiana’s statute requires using a driver’s license or third-party identification software.10 But as discussed below, this is not sufficient to take the Act beyond the strictures of strict scrutiny, nor enough to materially advance Indiana’s compelling interest, nor adequate to tailor the Act to the least restrictive means.

And thus, strict scrutiny must apply, unlike in the 5th Circuit, and this law can’t pass that bar.

Among other things, the age verification in this law doesn’t just apply to material that is obscene to minors:

The age verification requirements do not just apply to obscene content and also burden a significant amount of protected speech for two reasons. First, Indiana’s statute slips from the constitutional definition of obscenity and covers more material than considered by the Miller test. This issue occurs with the third prong of Indiana’s “material harmful to minors” definition, where it describes the harmful material as “patently offensive” based on “what is suitable matter for . . . minors.” Ind. Code § 35-49-2-2. It is well established that what may be acceptable for adults may still be deleterious (and subject to restriction) to minors. Ginsberg, 390 U.S. at 637 (holding that minors “have a more restricted right than that assured to adults to judge and determine for themselves what sex material they may read or see”); cf. ACLU v. Ashcroft, 322 F.3d 240, 268 (3d Cir. 2003) (explaining the offensiveness of materials to minors changes based on their age such that “sex education materials may have ‘serious value’ for . . . sixteen-year-olds” but be “without ‘serious value’ for children aged, say, ten to thirteen”), aff’d sub nom. in relevant part, 542 U.S. 656 (2004). Put differently, materials unsuitable for minors may not be obscene under the strictures of Miller, meaning the statute places burdens on speech that is constitutionally protected but not appropriate for children.

Also, even if the government has a compelling interest in protecting kids from adult content, this law doesn’t actually do a good job of that:

To be sure, protecting minors from viewing obscene material is a compelling interest; the Act just fails to further that interest in the constitutionally required way because it is wildly underinclusive when judged against that interest. “[A] law cannot be regarded as protecting an interest ‘of the highest order’ . . . when it leaves appreciable damage to that supposedly vital interest unprohibited.” …

The court makes it clear how feeble this law is:

To Indiana’s legislature, the materials harmful to minors are not so rugged that the State believes they should be unavailable to adults, nor so mentally debilitating to a child’s mind that they should be completely inaccessible to children. The Act does not function as a blanket ban of these materials, nor ban minors from accessing these materials, nor impose identification requirements on everybody displaying obscene content. Instead, it only circumscribes the conduct of websites who have a critical mass of adult material, whether they are currently displaying that content to a minor or not. Indeed, minors can freely access obscene material simply by searching that material in a search engine and turning off the blur feature. (Id. ¶¶ 31–33). Indiana’s legislature is perfectly willing “to leave this dangerous, mind-altering material in the hands of children” so long as the children receive that content from Google, Bing, any newspaper, Facebook, Reddit, or the multitude of other websites not covered.

The court also points out how silly it is that the law only applies to sites that cross a threshold (33%) of adult content. If the goal is to block kids’ access to porn, that’s a stupid way to go about it. Indeed, the court effectively notes that a website could get around the trigger just by padding itself with enough non-adult images to dilute the percentage.

The Attorney General has not even attempted to meet its burden to explain why this speaker discrimination is necessary to or supportive of its compelling interest; why is it that a website that contains 32% pornographic material is not as deleterious to a minor as a website that contains 33% pornographic material? And why does publishing news allow a website to display as many adult images as it desires without needing to verify the user is an adult? Indeed, the Attorney General has not submitted any evidence suggesting age verification would prohibit a single minor from viewing harmful materials, even though he bears the burden of demonstrating the effectiveness of the statute. Ultimately, the Act favors certain speakers over others by selectively imposing the age verification burdens. “This the State cannot do.” Sorrell v. IMS Health Inc., 564 U.S. 552, 580 (2011). The Act is likely unconstitutional.

In a footnote, the judge highlights an even dumber part of the law: the 33% threshold is calculated as a percentage of a site’s images alone, ignoring text entirely. He gives a hypothetical of a site that would be required to age-gate:

Consider a blog that discusses new legislation the author would like to see passed. It contains hundreds of posts discussing these proposals. The blog does not include images save one exception: attached to a proposal suggesting the legislature should provide better sexual health resources to adult-entertainment performers is a picture of an adult-entertainer striking a raunchy pose. Even though 99% of the blog is core political speech, adults would be unable to access the website unless they provide identification because the age verification provisions do not trigger based on the amount of total adult content on the website, but rather based on the percentage of images (no matter how much text content there is) that contain material harmful to minors.
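To make the court’s arithmetic concrete, here’s a minimal sketch of the trigger as the opinion describes it (my illustration, not the statute’s actual text; the function name and parameters are hypothetical):

```python
# Hypothetical sketch of the Act's trigger, as the court describes it.
# The real statutory definitions are more involved; names are illustrative.

def must_age_gate(adult_images: int, total_images: int, text_posts: int) -> bool:
    """True if a site trips the 1/3 threshold.

    Note what plays no role in the outcome: text_posts. The trigger looks
    only at the share of *images* that are "harmful to minors."
    """
    if total_images == 0:
        return False  # no images, nothing to measure
    return adult_images / total_images >= 1 / 3

# The court's hypothetical blog: hundreds of text posts, one image total,
# and that one image is adult content. 100% of images trip the threshold,
# so the whole site is age-gated despite being ~99% core political speech.
print(must_age_gate(adult_images=1, total_images=1, text_posts=500))       # True

# Meanwhile, a site sitting at 32% adult images never triggers, no matter
# how much adult material that percentage represents in absolute terms.
print(must_age_gate(adult_images=3200, total_images=10001, text_posts=0))  # False
```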

The court suggests some alternatives to this law, from requiring age verification for accessing any adult content (though it notes that’s also probably unconstitutional, even if it’s less restrictive) to having the state offer free filtering and blocking tech that parents can use for their kids:

Indiana could make freely available and/or require the use of filtering and blocking technology on minors’ devices. This is a superior alternative. (Sonnier Decl. ¶ 47 (“Internet content filtering is a superior alternative to Internet age verification.”); see also Allen Decl. ¶¶ 38–39 (not disputing that content filtering is superior to age verification as “[t]he Plaintiff’s claim makes a number of correct positive assertions about content filtering technology” but noting “[t]here is no reason why both content filtering and age verification could not be deployed either consecutively or concurrently”)). That is true for the reasons discussed in the background section: filtering and blocking software is more accurate in identifying and blocking adult content, more difficult to circumvent, allows parents a place to participate in the rearing of their children, and imposes fewer costs on third-party websites.
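For readers wondering what “filtering and blocking technology” looks like mechanically: it’s software that sits on the child’s device and checks requests against parent-managed rules. A toy sketch (my illustration; the blocklist domain is made up, and real filters combine domain lists, content classifiers, and OS-level hooks):

```python
# Toy sketch of device-level content filtering. Illustrative only.
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"example-adult-site.com"}  # hypothetical parent-managed list

def is_blocked(url: str) -> bool:
    host = urlparse(url).hostname or ""
    # Block the domain itself and any subdomain of it.
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

print(is_blocked("https://example-adult-site.com/video"))      # True
print(is_blocked("https://cdn.example-adult-site.com/x.jpg"))  # True
print(is_blocked("https://news.example.com/article"))          # False
```

Because the check happens on the device itself, switching websites or search engines doesn’t route around it, which is part of why the court considers it harder to circumvent than website-side age gates.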

And thus, because the law is pretty obviously unconstitutional, the judge grants the injunction, blocking it from going into effect. Indiana will almost certainly appeal, and we’ll have to keep going through this nonsense over and over again.

Thankfully, Indiana is in the 7th Circuit, not the 5th, so there’s at least somewhat less of a chance for pure nuttery on appeal.

Filed Under: 1st amendment, age gating, age verification, filters, indiana, protect the children, strict scrutiny, todd rokita
Companies: free speech coalition

‘Today We Save Our Children’ Says Governor Hochul, Signing Bill That Will Not Save Anyone

from the legislating-by-grandstanding-moral-panics dept

New York Governor Kathy Hochul’s response to the horrifying mass shooting in Buffalo in 2022 was not to look for ways to limit access to guns or improve mental health care. It was not to look into why law enforcement ignored the shooter’s threats, which it knew about. It was not to figure out why the 911 dispatcher who answered the first call about the shooting hung up on the caller after getting mad at them for whispering.

No, it was to blame the internet.

The internet is a very convenient scapegoat for politicians who are in over their heads on societal-level problems.

On Thursday, Hochul became the living embodiment of the “won’t someone please think of the children” meme. She gleefully signed a plainly unconstitutional bill that will not protect children and will likely do real harm: the SAFE For Kids Act, which bans algorithmic feeds for kids. In signing the bill, she literally said:

“Today, we save our children.”

There are just a few problems with this, all of which Hochul’s office (and the sponsors of this bill) have been told about, only to be dismissed as “talking points from big tech.”

Problem 1: There remains no study showing that algorithmic feeds are somehow “addictive” or even a problem. It’s all based on vibes (and adults who seem unable to put down their own phones).

Problem 2: What actual studies show is that if you force chronological feeds on people, a few things happen, none of which “save our children.” First, users get annoyed because they see less of the stuff they go to social media for. This doesn’t make them use social media less; it just makes them switch to other social media. It also exposes those on the chronological feed to more untrustworthy content and disinformation. I’m not sure why Kathy Hochul thinks that exposing kids to more disinformation is “saving our children,” but someone should ask her. (The sketch after this list makes the difference concrete.)

Problem 3: This bill requires age verification, which multiple courts have already ruled unconstitutional. It is also a privacy nightmare, as has been described many times. Creating a world that puts kids’ private data at risk is not “saving our children.”

Problem 4: The requirement about how websites can order content is just a blatantly obvious First Amendment infringement. I mean, just imagine if the NY legislature told a newspaper that it could no longer prioritize some headlines over others and had to lay out the paper in the order the stories were written. Everyone would immediately recognize the First Amendment problems with such a law. But this is no different.

Problem 5: Algorithms are a hugely important tool in keeping kids safe online, by minimizing or hiding more harmful or problematic content. And Hochul and the NY legislature are telling social media companies that such tools must be removed from their arsenal (see the sketch below).
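To make Problems 2 and 5 concrete, here’s a minimal sketch (mine, not any platform’s actual ranking code; every field, score, and weight is invented for illustration) of the difference between the mandated chronological feed and a ranked feed with a safety pass:

```python
# Illustrative contrast between a mandated chronological feed and a ranked
# feed with a safety pass. All fields, scores, and weights are hypothetical.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    timestamp: int     # higher = newer
    relevance: float   # predicted interest for this user, 0-1
    trust: float       # estimated source reliability, 0-1
    harm: float        # safety classifier's harm estimate, 0-1

def chronological_feed(posts: list[Post]) -> list[Post]:
    # What the SAFE For Kids Act mandates for minors: newest first.
    # Relevance, trustworthiness, and harm play no role in ordering.
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def ranked_feed(posts: list[Post]) -> list[Post]:
    # One hypothetical ranking: hide the clearly harmful outright, then
    # weight relevance by trust and penalize borderline-harmful content
    # so it sinks even when it's brand new.
    visible = [p for p in posts if p.harm < 0.9]
    return sorted(visible,
                  key=lambda p: p.relevance * p.trust * (1 - p.harm),
                  reverse=True)

demo = [
    Post("breaking rumor", timestamp=100, relevance=0.9, trust=0.1, harm=0.3),
    Post("news report",    timestamp=90,  relevance=0.8, trust=0.9, harm=0.0),
]
print([p.text for p in chronological_feed(demo)])  # ['breaking rumor', 'news report']
print([p.text for p in ranked_feed(demo)])         # ['news report', 'breaking rumor']
```

Note that the chronological version isn’t “neutral”: it’s also a ranking rule, just one that rewards whoever posts most recently and most often. And while outright removal of harmful content arguably remains possible under a chronological mandate, the downranking lever disappears entirely, because time order dictates position.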

Hochul told a reporter, “we’ve checked to make sure, we believe it’s constitutional.” And that’s just laughable. Checked with whom? Every attempt I saw to call out these concerns was brushed off as “just spewing big tech’s talking points.”

The Constitution is not a “big tech talking point.” What the actual research shows is not a “big tech talking point.”

I’m not against chronological feeds as a general concept. They’re great for those who want them. Lots of services already offer them as an option. But mandating them, and especially mandating them for certain ages (necessitating dangerous age verification), doesn’t solve any legitimate problem and makes it harder for trust & safety teams to actually protect kids.

I recognize that this signing happened the same day that Hochul’s approval and favorability ratings hit all-time lows. So it’s no surprise that she’s reaching for populist nonsense and embracing moral panics. But perhaps she should try doing things that actually help, rather than things already shown to be harmful.

Filed Under: algorithmic feed, algorithms, kathy hochul, new york, protect the children, safe for kids act