
NetChoice Sues California Once Again To Block Its Misguided ‘Social Media Addiction’ Bill

from the slow-down-california,-and-read-the-constitution dept

Earlier this year, California passed SB 976, yet another terrible and obviously unconstitutional bill with the moral panicky title “Protecting Our Kids from Social Media Addiction Act.” The law restricts minors’ access to social media and imposes burdensome requirements on platforms. It is the latest in a string of misguided attempts by California lawmakers to regulate online speech “for the children.” And like its predecessors, it is destined to fail a court challenge on First Amendment grounds.

The bill’s sponsor, Senator Nancy Skinner, has a history of relying on junk science and misrepresenting research to justify her moral panic over social media. Last year, in pushing for a similar bill, Skinner made blatantly false claims based on her misreading of already misleading studies. It seems facts take a backseat when there’s a “think of the children!” narrative to push.

The law builds on the Age Appropriate Design Code, without acknowledging that much of that law was deemed unconstitutional by an appeals court earlier this year (after being found similarly unconstitutional by the district court last year). This bill, like a similar one in New York, assumes (falsely and without any evidence) that “algorithms” are addictive.

As we just recently explained, if you understand the history of the internet, algorithms have long played an important role in making the internet usable. The idea that they’re “addictive” has no basis in reality. But the law insists otherwise. It bans these “addictive algorithms” if a website knows a user is a minor. It also restricts when notifications can be sent to a “known” minor (basically, no notifications during school hours or late at night).

There’s more, but those are the basics.

NetChoice stepped up and sued to block this law from going into effect.

California is again attempting to unconstitutionally regulate minors’ access to protected online speech—impairing adults’ access along the way. The restrictions imposed by California Senate Bill 976 (“Act” or “SB976”) violate bedrock principles of constitutional law and precedent from across the nation. As the United States Supreme Court has repeatedly held, “minors are entitled to a significant measure of First Amendment protection.” Brown v. Ent. Merchs. Ass’n, 564 U.S. 786, 794 (2011) (cleaned up) (quoting Erznoznik v. Jacksonville, 422 U.S. 205, 212-13 (1975)). And the government may not impede adults’ access to speech in its efforts to regulate what it deems acceptable for minors. Ashcroft v. ACLU, 542 U.S. 656, 667 (2004); Reno v. ACLU, 521 U.S. 844, 882 (1997). These principles apply with equal force online: Governments cannot “regulate [‘social media’] free of the First Amendment’s restraints.” Moody v. NetChoice, LLC, 144 S. Ct. 2383, 2399 (2024).

That is why courts across the country have enjoined similar state laws restricting minors’ access to online speech. NetChoice, LLC v. Reyes, 2024 WL 4135626 (D. Utah Sept. 10, 2024) (enjoining age-assurance, parental-consent, and notifications-limiting law); Comput. & Commc’n Indus. Ass’n v. Paxton, 2024 WL 4051786 (W.D. Tex. Aug. 30, 2024) (“CCIA”) (enjoining law requiring filtering and monitoring of certain content-based categories of speech on minors’ accounts); NetChoice, LLC v. Fitch, 2024 WL 3276409 (S.D. Miss. July 1, 2024) (enjoining age-verification and parental-consent law); NetChoice, LLC v. Yost, 716 F. Supp. 3d 539 (S.D. Ohio 2024) (enjoining parental-consent law); NetChoice, LLC v. Griffin, 2023 WL 5660155 (W.D. Ark. Aug. 31, 2023) (enjoining age-verification and parental-consent law).

This Court should similarly enjoin Defendant’s enforcement of SB976 against NetChoice members.

As we’ve discussed, the politics behind challenging these laws make it a complex and somewhat fraught process. So I’m glad that NetChoice continues to step up and challenge many of these laws.

The complaint lays out that the parental consent requirements in the bill violate the First Amendment:

The Act’s parental-consent provisions violate the First Amendment. The Act requires that covered websites secure parental consent before allowing minor users to (1) access “feed[s]” of content personalized to individual users, § 27001(a); (2) access personalized feeds for more than one hour per day, § 27002(b)(2); and (3) receive notifications during certain times of day, § 27002(a). Each of these provisions restricts minors’ ability to access protected speech and websites’ ability to engage in protected speech. Accordingly, each violates the First Amendment. The Supreme Court has held that a website’s display of curated, personalized feeds is protected by the First Amendment. Moody, 144 S. Ct. at 2393. And it has also held that governments may not require minors to secure parental consent before accessing or engaging in protected speech. Brown, 564 U.S. at 799;

So too do the age assurance requirements:

The Act’s requirements that websites conduct age assurance to “reasonably determine” whether a user is a minor, §§ 27001(a)(1)(B), 27002(a)(2), 27006(b)-(c), also violate the First Amendment. Reyes, 2024 WL 4135626, at *16 n.169 (enjoining age-assurance requirement); Fitch, 2024 WL 3276409, at *11-12 (enjoining age-verification requirement); Griffin, 2023 WL 5660155, at *17 (same). All individuals, minors and adults alike, must comply with this age-assurance requirement—which would force them to hand over personal information or identification that many are unwilling or unable to provide—as a precondition to accessing and engaging in protected speech. Such requirements chill speech, in violation of the First Amendment. See, e.g., Ashcroft, 542 U.S. at 673; Reno, 521 U.S. at 882.

It also calls out that there’s an exemption for consumer review sites (good work, Yelp lobbyists!), which highlights how the law targets specific types of content, something that is not allowed under the First Amendment.

California Attorney General Rob Bonta insisted in a statement to GovTech that there are no First Amendment problems with the law:

“SB976 does not regulate speech,” Bonta’s office said in an emailed statement. “The same companies that have committed tremendous resources to design, deploy, and market social media platforms custom-made to keep our kids’ eyes glued to the screen are now attempting to halt California’s efforts to make social media safer for children,” the statement added, saying the attorney general’s office would respond in court.

Except he said that about the Age Appropriate Design Code and lost in court. He said that about the Social Media Transparency bill and lost in court. He said that about the recent AI Deepfake law… and lost in court.

See a pattern?

It would be nice if Rob Bonta finally sat down with actual First Amendment lawyers and learned how the First Amendment works. Perhaps he and Governor Newsom could take that class together, so Newsom stops signing these bills into law?

Wouldn’t that be nice?

Filed Under: 1st amendment, addictive feeds, age assurance, algorithms, moral panic, nancy skinner, parental controls, rob bonta, social media, social media addiction
Companies: netchoice

California’s SB 680: Social Media ‘Addiction’ Bill Heading For A First Amendment Collision

from the the-1st-amendment-still-applies-in-california dept

Similar to the “Age Appropriate Design Code” (AADC) legislation that became law last year, California’s latest effort to regulate online speech comes in the form of SB 680, a bill by Sen. Nancy Skinner targeting the designs, algorithms, and features of online services that host user-created content, with a specific focus on preventing harm or addiction risks to children.

SB 680 prohibits social media platforms from using a design, algorithm, or feature that causes a child user, 16 years or younger, to inflict harm on themselves or others, develop an eating disorder, or experience addiction to the social media platform. Proponents of SB 680 claim that the bill does not seek to restrict speech but rather addresses the conduct of the Internet services within its scope.

However, as Federal Judge Beth Labson Freeman pointed out during a recent court hearing challenging last year’s age-appropriate design law, if content analysis is required to determine the applicability of certain restrictions, it becomes content-based regulation. SB 680 faces a similar problem.

Designs, Algorithms, and Features are Protected Expression

To address the formidable obstacle presented by the First Amendment, policymakers often resort to “content neutrality” arguments to support their policing of expression. California’s stance in favor of AADC hinges on the very premise that AADC regulates conduct over content. Sen. Skinner asserted the same about SB 680, emphasizing that the bill is solely focused on conduct and not content.

“We used our best legal minds available […] to craft this in a way that did not run afoul of those other either constitutional or other legal jurisdictional areas. [T]hat is why [SB 680] is around the design features and the algorithms and such.”

However, the courts have consistently held otherwise, and precedent reveals that these bills are inextricably intertwined with content despite such claims.

The Supreme Court has long held that private entities such as bookstores (Bantam Books, Inc. v. Sullivan (1963)), cable companies (Manhattan Community Access Corporation v. Halleck (2019)), newspapers (Miami Herald Publishing Co. v. Tornillo (1974)), video game distributors (Brown v. Entertainment Merchants Association (2011)), parade organizers (Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston (1995)), pharmaceutical companies (Sorrell v. IMS Health, Inc. (2011)), and even gas & electric companies (Pacific Gas and Electric Co. v. Public Utilities Commission (1986)) have a First Amendment right to choose how they curate, display, and deliver preferred messages. This principle extends to online publishers as well, as the Court affirmed in Reno v. ACLU in 1997, emphasizing the First Amendment protection for online expression.

Moreover, courts have explicitly recognized that algorithms themselves constitute speech and thus deserve full protection under the First Amendment. In cases like Search King, Inc. v. Google Technology, Inc. and Sorrell, the courts held that search engine results and data processing are expressive activities, and algorithms used to generate them are entitled to constitutional safeguards.

In a more recent case, NetChoice v. Moody (2022), the U.S. Court of Appeals for the Eleventh Circuit declared certain provisions of Florida’s social media anti-bias law unconstitutional, affirming that social media services’ editorial decisions — even via algorithm — constitute expressive activity.

Further, the Supreme Court’s stance in Twitter, Inc. v. Taamneh (2023) supports the idea that algorithms are merely one aspect of an overall publication infrastructure, warranting protection under the First Amendment.

This precedent underscores a general reluctance of the courts to differentiate between the methods of publication and the underlying messages conveyed. In essence, the courts have consistently acknowledged that the medium of publication is intricately linked to its content. Laws like SB 680 and the AADC are unlikely to persuade the courts to draw any lines.

SB 680’s Not-So-Safe Harbor Provision is Prior Restraint

Sen. Skinner also suggested at a legislative hearing that SB 680 is not overly burdensome for tech companies due to the inclusion of a “safe harbor” provision. This provision offers protection to companies conducting quarterly audits of their designs, algorithms, and features that may potentially harm users under 16. Companies that “correct” any problematic practices within 60 days of the audit are granted the safe harbor.

However, the safe harbor provision is yet another violation of the First Amendment. In practice, this provision acts as a prior restraint, compelling tech companies to avoid publication decisions that could be seen as violations for users under 16. The requirement to “correct” practices before publication restricts their freedom to operate.

Recall that the AADC also includes a similar requirement for mandatory data privacy impact assessments (DPIAs). Although the State of California defended this provision by arguing that it doesn’t mandate companies to alter the content they host, Judge Freeman disagreed, noting that the DPIA provision in the AADC forces social media services to create a “timed-plan” to “mitigate” their editorial practices.

In reality, both the “safe harbor” provisions of the AADC and SB 680 lead to services refraining from implementing certain designs, algorithms, or features that could potentially pose risks to individuals under 16. This cautious approach even extends to features that may enhance the online environment for parents and children, such as kid-friendly alternatives to products and services offered to the general public.

The online world, like the offline world, carries inherent risks, and services continually strive to assume and mitigate those risks. However, laws like the AADC and SB 680 make it too risky for services to make meaningful efforts in creating a safer online environment, ultimately hindering progress towards a safer web.

SB 680 is a Solution in Search of a Lawsuit

In a manner akin to newspapers making decisions about the content they display above the fold, letters to the editor they choose to publish, or the stories and speakers they feature, social media services also make choices regarding the dissemination of user-created content. While newspapers rely on human editors to diligently apply their editorial guidelines, social media companies use algorithms to achieve a similar objective.

However, it is puzzling that newspapers rarely face the kind of political scrutiny experienced by their online counterparts today. The idea of the government telling the New York Times how to arrange their stories in print editions seems inconceivable. But for some reason, we don’t react with similar concern when the government attempts to dictate how websites should display user content.

Despite an abundance of legal precedents upholding First Amendment protections for the publication tools that enable the delivery of protected expression, California lawmakers persist with SB 680. The federal courts’ skepticism toward the AADC law should be a warning light: If SB 680 becomes law this Fall, California will once again find itself embroiled in an expensive legal battle over online expression.

Jess Miers is Legal Advocacy Counsel at Chamber of Progress. This article was originally published on Medium and republished here with permission.

Filed Under: 1st amendment, aadc, ab 2273, addiction, nancy skinner, prior restraint, sb 680, social media

California Lawmakers Say It’s Time To Regulate The Internet The Same Way China Does

from the elect-better-people dept

Here’s Part Two of my two-parter about the Satanic Panic-level moral panic that has befallen the disconnected-from-reality California legislature (in a bipartisan way) as they seek to destroy the internet “to protect the children.”

In my previous post, I covered the opening remarks by California Senator Nancy Skinner in support of her bill, SB 680, which is based on her own complete nonsense misreading of an already sketchy study by an advocacy group. She made a bunch of blatantly false claims, such as that you could order fentanyl online faster than you could get an Uber. Or that kids, without even interacting with social media algorithms, saw eating disorder content every 39 seconds (the actual study showed that the fastest any of their eight “sample” accounts saw any eating disorder content was after 8 minutes, and they were only testing TikTok). You can read the full analysis of why Skinner ought to retract her statement and pull the bill.

But, incredibly, the hearing went even further off the rails. Two speakers were given a chance to testify about the many problems with SB 680 (some of which I discussed in the first post): Leah Nitake from TechNet and Jess Miers from Chamber of Progress. Both made clear, straightforward statements about how they (obviously) support the goal of better protecting children online (who doesn’t?), while highlighting the many problems with this bill, and how it will lead to harmful rather than helpful outcomes.

As Nitake noted:

SB680 prohibits any designs, features or algorithms that could cause a child user to take certain actions. But the bill is incredibly unclear about how that causation happens. One of the actions subject to liability is if a child develops an eating disorder. But it’s fair to ask, under this bill, would viewing a workout video be considered to trigger an eating disorder? Does cause mean that the platform is the only factor that caused the eating disorder, a contributing factor, a substantial factor? And would that apply to all child users, any child user or the average child user?

What about content related to recovering from an eating disorder that’s meant to be empowering but may actually be triggering for some child users?

These seem like pretty important questions. We’ll get to the lawmakers’ responses in a moment, but I’ll give you a bit of a spoiler: they don’t address them at all.

Miers was next up, and raised similar and related concerns:

The bill’s vague definition of addiction will leave platforms to make impossible decisions about what kids should see online. Given the bill’s liability provisions, platforms may choose to restrict access to California youth disproportionately impacting marginalized teens, including LGBTQ+ youth seeking support and teens in need of reproductive and sexual health information. Indeed, a recent Pew study found that the majority of teens consider social media a social lifeline.

SB680’s restrictions on designs, algorithms and features related to self-harm will inadvertently limit access to crucial self-help resources. This includes information on recognizing warning signs for suicide and algorithms that redirect at risk users to immediate help like the suicide hotline.

The bill similarly hampers platform’s ability to guide child users towards positive content when searching for information on disordered eating.

Lastly, SB680 will likely result in the removal of substance abuse resources from libraries, pharmacies, and other organizations. This includes information on identifying signs of drug use and obtaining lifesaving products like Narcan. Consequently, individuals may resort to unreliable sources, increasing the risks of associated with substance abuse.

Did the Assemblymembers present address any of these issues? No, they did not.

Did the Assemblymembers ask either Miers or Nitake about these issues, in order to look for ways to address them? I don’t know Nitake, but Miers is an expert on this stuff, has worked in trust & safety before, was right there in front of them, and was ready to answer their questions on making sure the bill actually protected children rather than harming them.

Instead, one by one, they made fools of themselves, and raised serious questions about the competence of the California legislature.

First up, Josh Lowenthal, son of two former California legislators, who seems to have the grandstanding nonsense moral panic performance art down pat:

I’m living this right now. As a dad of three adolescent girls, I’m in this and with respect to the opposition, the words I have for you are shame on you.

Shame on you? Shame on you? For pointing out that this terrible bill will harm children, not help them? Shame on you? For pointing out that the broad language of the bill will literally require companies to remove life-saving information, or to help guide people to useful resources around mental health and body image issues?

No, Assemblymember, Lowenthal, shame on you, for grandstanding in support of this dangerous nonsense.

For what it’s worth, Lowenthal then literally claims that all of the problems raised by Nitake and Miers can be solved with AI, and he’s sure of that because he’s worked in tech.

I come from a career in tech. Some of the greatest minds we have in the state of California are in tech, and we do not have to put at risk these unintended consequences. The levels of sophistication that can be drawn in this artificial intelligence preclude the very things that you’re talking about, have complete and total faith in our tech community for doing the right thing and protecting our youth and measuring the right thing.

I mean, half of that is word salad, but he really seems to be arguing that if tech just could “nerd harder” a bit, they’d magically figure out how to stop bad stuff online, while keeping the good stuff. This is disconnected from reality, Q-Anon level nonsense. It’s not how any of this works.

And does Lowenthal really think that if Meta, Google, TikTok and others could wipe out harmful content with AI they wouldn’t have already done that?

His claim that he spent his career in tech made me wonder what his actual experience is. It looks like he spent years as an exec at FreeConferenceCall.com (which, amusingly, Techdirt used to use for our conference calls), and which initially existed as a kinda sketchy arbitrage play around local telephone exchanges. More recently he’s been at Plum, a company that helps MVNOs get set up, though it looks like T-Mobile just bought Plum at the same time it bought a few of the remaining MVNOs off the market.

Neither of these jobs would likely give him anywhere near the requisite knowledge on either AI or trust & safety challenges of handling dangerous content for kids. And it shows.

He goes on to complain that his own kids are questioning their self-worth and body image because of social media. Which raises the question of why he lets them use it. But, really, this is the same moral panic we’ve seen before. Remember, in 1878, it was Edison’s phonograph and aerophone that (the NY Times told us) would lead to a “complete disorganization of society” where “men and women will flee from civilization.” Even worse, “business, marriage, and all social amusements will be thrown aside.”

There’s also some random nonsense from Lowenthal complaining about the kids these days, and how they apparently don’t want to “become an astronaut” or learn “how to break ceilings that you may not know are in front of you.” And, I mean, sure. But silencing any controversial content on social media isn’t going to do anything about that.

Josh Lowenthal’s attempt to shame people for highlighting legitimate dangers of this bill that he supports, while suggesting that magic AI will make it all work is embarrassing for the state of California. Long Beach: next time elect someone who has a clue.

Incredibly, the next guy up, Assemblymember Bill Essayli, a lawyer and former Assistant US Attorney, is even worse. He asks the two speakers if they’d seen the Social Dilemma. As you’ll recall, our review of the Social Dilemma talks about how everything it accuses “big tech” of doing, it does itself. The movie is chock full of misinformation and Hollywood “dark patterns” to try to manipulate gullible people into believing false things. Apparently it worked on Essayli:

Question for the opposition. Have you guys seen the Social Dilemma on Netflix?

Have you? You haven’t seen it? Well, I highly encourage you to watch it because it’s very interesting when you watch what the experts in tech who built the systems talk about how they use the systems to manipulate human behavior and they rely on brain reward mechanisms like dopamine to attract and addict people to their platforms. And I think it’s been very, very successful. And you could see our kids today, they’re glued to their phones, they’re highly addicted.

I mean, the people in the Social Dilemma are hardly “experts.” The two most prominent voices are both selling stuff. One is selling fear, because he’s built an extraordinarily lucrative career out of scaring people about new technologies. Another, at the time the Social Dilemma was made, was literally running a company trying to sell software to help “protect” your kids from social media’s ills.

Think maybe they had a reason to play up the “evils” of the technology and the “power” of social media to give you dopamine hits? Maybe?

Well, Essayli has bought into it and says he hasn’t seen any benefits to social media:

So I haven’t seen a lot of benefits from kids being on social media to be honest. I think it’s been … on the balance, it’s been a lot more harmful. And so I do think it’s a public health issue. Whether this is the right solution, I don’t know. I plan to support it. I have a feeling you’ll gum it up in the court system. And so I just encourage that we as policy makers continue to really take this seriously and figure out a solution.

What you think, Assemblymember Essayli, and what reality (and lots and lots of research) show, are two very different things. Elsewhere, Essayli quotes the Surgeon General’s report, but it’s clear he didn’t read it. Perhaps he only read the one half sentence he quoted in his remarks. Because that same report actually details many of the benefits that Essayli insists he’s never seen.

Here, I’ll help you out, Assemblymember. This is from the Surgeon General’s report you pretended to read:

Social media can provide benefits for some youth by providing positive community and connection with others who share identities, abilities, and interests. It can provide access to important information and create a space for self-expression. The ability to form and maintain friendships online and develop social connections are among the positive effects of social media use for youth. These relationships can afford opportunities to have positive interactions with more diverse peer groups than are available to them offline and can provide important social support to youth. The buffering effects against stress that online social support from peers may provide can be especially important for youth who are often marginalized, including racial, ethnic, and sexual and gender minorities.

For example, studies have shown that social media may support the mental health and well-being of lesbian, gay, bisexual, asexual, transgender, queer, intersex and other youths by enabling peer connection, identity development and management, and social support. Seven out of ten adolescent girls of color report encountering positive or identity-affirming content related to race across social media platforms. A majority of adolescents report that social media helps them feel more accepted (58%), like they have people who can support them through tough times (67%), like they have a place to show their creative side (71%), and more connected to what’s going on in their friends’ lives (80%). In addition, research suggests that social media-based and other digitally-based mental health interventions may also be helpful for some children and adolescents by promoting help-seeking behaviors and serving as a gateway to initiating mental health care.

But, no, we should ban all that (which this bill will do), because Essayli personally is unaware of any benefits, even though these are all detailed in the report he claimed to have read.

Elect better fucking people.

Essayli’s answer is to go the ridiculous, blatantly unconstitutional, Utah route.

My personal views, I’ve made it clear is that I don’t think kids should be on social media. I think Utah’s done that. Some other states are doing that. I think social media is more harmful than tobacco, so we should treat it the way we treat tobacco. You cannot … try to find a kid who can buy tobacco in California. It’s almost impossible. So we can stop kids from getting tobacco. I think we can stop them from getting on social media.

Did Chapman University School of Law, where you got your law degree, not teach you the difference between constitutionally protected speech… and tobacco? Because, shit, that’s embarrassing.

Tobacco is a product. Speech is speech. One of those is protected by the Constitution. One of those is not. You’d think that somewhere during the years you spent in law school, or as an actual attorney in the Justice Department maybe someone, somewhere, would have taught you that?

Believe it or not, it gets worse.

There’s first a brief statement from one person who admits that the language is probably too broad and “problematic,” but says she’ll support it anyway because “it’s a starting point.” Which is not how anyone should be making laws.

Then up is Assemblymember Ash Kalra. I’ve heard great things about Kalra in the past from some smart people, so his statements here were incredibly disappointing. He literally suggests that if China can regulate the internet, so can California.

And as someone that’s on many platforms, including TikTok, the reality is if you look at in China, they regulate TikTok heavily and only allow for educational content for young people. And yet here we just presume that we don’t have control over it and it’s just completely not true. We do have some degree of control, and to our colleagues’ point, because of a lack of federal action we can’t just sit on our hands, especially given the fact that there’s technology, much of it is being created here in our own backyard, in my district and near my district. And so I do think that we have an extra obligation to protect our youth and to ensure that these very valuable … otherwise very valuable social media companies and experiences that they can provide are done so in a way that creates the least amount of harm possible to our youth. So I want to thank the senator and would also like to be added on as a co-author.

Um, Assemblymember Kalra, I’m not going to tell you how to do your job, but I might humbly suggest that when you’re suggesting we literally take a regulatory page from China’s giant authoritarian internet censorship regime, colloquially known as the Great Firewall of China… you’re already losing.

China can regulate TikTok heavily because it’s an authoritarian country with no freedom of speech.

I would hope that an elected official in the US would understand that?

And, with that, Senator Skinner closed out the hearing, again showing how completely and ridiculously out of touch and confused she is. She claims that the bill was “carefully constructed,” which no one who has read it would possibly believe.

But she closes with the only attempt of any of the speakers to respond to the specific, delineated harms raised by Miers and Nitake. Except she does a terrible job of it.

And so I did not want to, as I think somebody said to somebody, throw the baby out with the bath water. I wanted really to get at those aspects of it that create harm. And so that’s really what we attempted to do. Now on the broadness, if this bill were a private right of action, then I can certainly see that point. But given that if the only ability to enforce it is through our public prosecutors, I think most of us know that prosecutors don’t tend to take things to court and act on something unless they have good evidence that it is violated more than just the spirit of that. So that’s another reason it was designed that way. And I would guess it will be put in court and we will see how all of that goes.

Senator, have you met local prosecutors or state AGs? They’re often political animals, with political ambitions, happy to take on crazy cases for the headlines. The idea that they’ll only take on cases with “good evidence” that “violated more than just the spirit of that,” is ludicrous and completely ahistorical. We know how this goes. We’ve seen how it goes. And it does not go well. I could point to dozens of cases brought by state AGs that had no chance of succeeding, but were brought to get headlines, especially ones around how those AGs (who are usually trying to next get elected governor or senator) are “protecting the children.”

But, really, any law where people point out how broad it is and your best answer is “well, I’m sure no one will abuse it,” followed by a shrug that you “guess it will be put in court and we will see how all of that goes,” should be a law that is pulled from consideration, shredded, and then burned in a fire pit.

Laws are abused all the time. Including by local prosecutors. But this one is even worse: EVEN IF prosecutors never “go to court,” the very nature of the bill and the broadness of its language mean that sites will feel compelled, to avoid any risk of liability, to remove all sorts of content, including the content that Miers and Nitake detailed: content that is tremendously helpful for marginalized groups and those at risk.

It is stunning, if not surprising, and depressing, that this is how the California legislature functions today. It will (1) push bills based on misrepresented junk science, then (2) shame people for pointing out the very real dangers of those bills, while (3) suggesting we go even further in emulating Chinese censorship and ban children (who have rights too, you know) entirely from social media, all while insisting that nothing good has ever come to kids from social media (despite tons of evidence to the contrary, including in reports its members pretend to have read). The entire California legislature is not fit for purpose.

Elect better people, California. What we have now is a joke.

Filed Under: addiction, ash kalra, bill essayli, california, jess meiers, josh lowenthal, leah nitake, moral panic, nancy skinner, protect the children, sb 680, social media, social media addiction

California Senator Nancy Skinner Falls For Junk Science Moral Panic; Makes Blatantly False Claims In Support Of Social Media Addiction Bill

from the legislative-nonsense dept

What you see below is part one of a two parter about a terrible bill in California. It started out as a single post, but there was so much nonsense, I decided to break it up into two parts. Stay tuned for part two.

You may recall that last year California, in addition to passing the obviously unconstitutional Age Appropriate Design Code, also tried to pass a “social media addiction” bill. Thankfully, at the last minute, that bill was killed. But this year a version of it is back, it has tremendous momentum, and it is likely to pass. And it’s embarrassing. California legislators are addicted to believing utter nonsense and debunked moral panic stories, making themselves into complete laughingstocks.

The bill, SB 680, builds on other problematic legislation from California and basically makes a mess of, well, everything. The short explanation of the bill is as follows:

This bill would prohibit a social media platform, as defined, from using a design, algorithm, or feature that the platform knows, or by the exercise of reasonable care should have known, causes child users, as defined, to do any of certain things, including experience addiction to the social media platform.

What the bill will actually do is make it so that social media companies can be fined if any kid who uses them gets an eating disorder, inflicts harm (on themselves or others), or spends too much time on social media. That’s basically the law.

Now, the framers of the law will say that’s not true, and that the law will only fine companies who “should have known” that their service “caused” a child to do one of those three things, but no one here was born yesterday. We’ve seen how these things are blamed on social media all the time, often by very angry parents who need to blame someone for things that (tragically) many kids have dealt with before social media ever existed.

Social media is the convenient scapegoat.

It’s a convenient scapegoat for parents. For teachers. For school administrators. For the media. And especially for grandstanding politicians who want headlines about how they’re saving the children, but don’t want to put in the hard work to understand what they’re actually doing.

Remember, multiple recent studies, including from the American Psychological Association and the (widely misrepresented) Surgeon General of the US, have said there is no causal evidence yet linking social media to harmful activity. What the reports have shown is that there is a small number of children dealing with serious issues that lead them to harmful behavior. For those children, it is possible that social media might exacerbate their issues, and everyone from medical professionals to teachers to parents should be looking for ways to help that small number of children.

That’s not what any of these laws do, however.

Instead, they assume that this small group of children, who are facing some very real problems (which, again, have not been shown to have been caused by social media in any study) represents all kids.

Instead, the actual research shows much more clearly that social media is beneficial to a much larger group of children, allowing them to communicate and socialize, and giving them a “third space” where they can interact with their peers and explore their interests. The vast majority of teens find social media way more helpful than harmful. In some cases, it’s literally life-saving.

But, parents, teachers, principals, politicians and the media insist that someone must be to blame whenever a child has an eating disorder (which pre-existed social media) or dies by suicide (ditto). And social media must be the problem, because they refuse to explore their own failings or society’s larger failings.

Look no further than the absolutely ridiculous hearing the California Assembly recently held about the bill. It’s a hearing that should be cause for Californians to question who they have elected. A hearing where one Assemblymember literally claimed that we should follow China’s lead in regulating social media (we’ll get to that in part II).

The hearing kicked off with the Senate Sponsor, Nancy Skinner, making up nonsense about kids and social media that has no basis in fact:

I think many of you are aware that we are facing an unprecedented and urgent crisis amongst our kids where there’s high levels of social media addiction. The numbers of hours per day that many of our young people spend on average on social media is beyond, at least my comprehension, but the data is there. There’s high levels of teen suicides and those that increase in teen suicides, while some people think about the pandemic, have been steadily increasing over the past 10 to 12 years. And in effect, began with the onset, that increase with onset of much of the social media. We also have evidence of the very easy ability for anyone, which includes our youth, to purchase fentanyl and other illegal substances on via social media sites as well as illegal firearms. And in fact, on the illegal substances like fentanyl laced drugs, it is quicker to procure such a substance on social media than it is to use your app and get your Lyft or Uber driver.

So, look, someone needs to call bullshit on literally every single point there. Regarding suicide data, we’ve highlighted that today’s suicide rates are still noticeably below the highs of the 1990s. Yes, they’ve gone up over the last few years, but they remain below those highs, and why isn’t anyone looking at what caused suicide rates to drop so low in the late ’90s and early 2000s? Perhaps it was because we weren’t living in a constant hellscape in which grandstanding politicians scream every day about how horrible everything is?

But, really, I need to absolutely call bullshit on the idea that you can order fentanyl faster than you can get a Lyft or an Uber driver. Because that’s not true. There is no world in which that is true. There is no reasonable human being on this planet who believes that it’s quicker to get fentanyl online than to get an Uber. That’s just Senator Nancy Skinner making up things to scare people. Shameful.

It’s reminiscent of the similar bullshit scare tactics used by supporters of FOSTA, who claimed that you could order a sex trafficking victim online faster than you could order a pizza. That was made up out of whole cloth, but it was effective in getting the law passed. Apparently Skinner is using the same playbook.

Skinner continues to lie:

if we look at teenage girls in particular or adolescent girls, that researchers posing as teen girls on social media platforms were directed to content on eating disorders every 39 seconds, regardless of any request or content request by the teen. So in other words, just the algorithm, the feature or design of the platform directed that teen girl to eating disorder content every 39 seconds and to suicide-oriented content less frequently, but still with high frequency.

So, again, this isn’t true. It’s a moral panic misreading of an already questionable study. The study was done by the Center for Countering Digital Hate, an organization that is very effective at getting headlines, generating moral panics, and getting quoted in the news (and at getting donations). What it’s not good at is competent research. You can read the “report” here; it is not “research,” as Senator Skinner implies. And even this highly questionable report does not come close to saying what Skinner claims.

CCDH’s study was far from scientific to start with. They set up JUST EIGHT accounts on TikTok (not other sites) pretending to be 13-year-olds (two each in four different countries) and gave half of them usernames that included the phrase “loseweight.” This is not scientific. The sample size is ridiculously small. There are no real controls, unless you count the fact that half the accounts didn’t have “loseweight” in their names. And there is no real explanation for “loseweight” other than the claim that it’s typical for those with eating disorders to make a statement about the disorder in their usernames.

Then, they had the researchers CLICK ON AND LIKE videos that the researchers themselves decided were “_body image or mental health_” related (which is not just eating disorder or suicide related content). In other words, THE RESEARCHERS TRAINED THE ALGORITHM THAT THEY LIKED THIS CONTENT. Then acted surprised when the accounts that clicked on and liked “body image” or “mental health” videos… got more “mental health” and “body image” videos.

As for the 39-second number, that is NOT (as Skinner claimed) how often kids see eating disorder content. Not even close. 39 seconds is how often the test accounts came across content that CCDH itself defined as “body image” or “mental health” related, NOT “suicide” or “eating disorder” content. In fact, the report says the fastest any of their test accounts saw (again, self-classified) “eating disorder” content was after eight minutes. They don’t say how long it took for the other accounts.

Not every 39 seconds.

Nancy Skinner is lying.

And, again, CCDH themselves decided how to classify the content here. They include just a few screenshots of TikTok content that they classified as problematic (allowing them to cherry-pick the worst), and even then, they seem to take a VERY broad definition of problematic content. Many of the screenshots seem like… general teen insecurities? I mean, this is one of the examples they show of “eating disorder” content:

Others just seem like typical teen angst and/or dark humor. These politicians are so disconnected from teens and how they communicate, it’s ridiculous. I’ve mentioned it before, though I don’t talk about it much or in detail, but a friend died by suicide when I was in high school. It was horrible and traumatic. But also, if any of us had actually known that he was suffering, we would have tried to get him help. Some of the TikTok videos in question may be calls for help, where people can actually help.

But this bill would tell kids they need to suffer in silence. Bringing up suicidal ideation, or insecurities, or just talking about mental health would effectively be banned under this bill. It would literally do the exact opposite of what grandstanding, disconnected, lying politicians like Nancy Skinner claim it will do.

Back to the CCDH report. Incredibly, the report claims that PHOTOS OF GUM are eating disorder content, because gum “is used as a substitute for food by some with eating disorders.”

Have no fear, Senator Skinner: if this bill becomes law, you’ll have saved kids across the state from… seeing gum? Or adding a hashtag that says #mentalhealthmatters.

This is a joke.

Senator Skinner should issue a retraction of her statement. And pull the bill from consideration.

Of course, the context in which Senator Skinner presents all of this is that social media companies are doing “nothing” about the problem. But, again, this study was only about TikTok, one social media company. And the report that she misread and misquoted makes it pretty clear that TikTok is actively trying to moderate such content, and that kids are continually getting around those moderation efforts. The report discusses how eating disorder hashtags often host “healthy” discussions (Skinner ignores this), and then says (falsely) that TikTok “does not appear to… moderate” this content.

But, literally two paragraphs later, the very same report says that kids are constantly evading moderation attempts to keep talking about eating disorders:

Users evade moderation by altering hashtags, for example by modifying #edtok to #edtøk. Another popular approach for avoiding content moderation is to co-opt singer Ed Sheeran’s name, for instance #EdSheeranDisorder.

So, if TikTok is not moderating this content… why are kids getting around this non-existent moderation?

Indeed, other reports actually showed that TikTok appeared to be dealing with eating disorder content better than earlier platforms, in that it was inserting healthy content into such discussions, about how to eat and exercise in a healthy way. Of course, under CCDH’s definition, this is all evil “body image” content, which Nancy Skinner would prefer be silenced across the internet. How dare kids teach each other how to be healthy. Again, let them suffer in silence.

Meanwhile, as we’ve discussed, actual research from actual experts has said that forcing social media to hide ALL discussion of eating disorders actually puts children at much greater risk. Those with eating disorders still have them, and they tend to move to deeper, darker parts of the web. When those discussions happened on mainstream social media, they also allowed for the promotion of content to help guide those with eating disorders to recovery, including content from those who had recovered and sought to help others. But under this bill, such content HELPING those with eating disorders would effectively be barred from social media.

Going back to what I said above about my friend in high school, if only he had spoken up. If only he had told friends that he was suffering. Instead, we only found out when he was dead. This bill will lead to more of that.

SB 680 takes none of that nuance into account. Its authors don’t understand how important it is for kids to be able to talk and connect.

All based on one Senator misreading what is already junk science.

Senator Skinner’s statement is almost entirely false. What little is accurate is presented in a misleading way. And the underlying setup of the bill completely misunderstands children, mental health, body image issues, and social media. All in one.

It’s horrifying.

Skinner’s star witness, incredibly, is Nancy Magee, the superintendent of San Mateo schools. If you recognize that name, it’s because we’ve written about her before. She’s the superintendent who filed the most ridiculous, laughable, embarrassing lawsuit against social media companies accusing them of RICO violations, because some kids had trouble getting back into regular school routines immediately after they came back from COVID lockdowns. RICO violations!

Of all the superintendents in all of California, couldn’t you at least pick one who hasn’t filed a laughably ridiculous joke of a lawsuit against social media companies, one that similarly misread a long list of studies to try to paper over her own district’s failures to help kids deal with the stress of the pandemic?

I guess if you’re going to misread and lie about the impact of social media, you might as well team up with someone who has a track record of doing the same. Magee’s statement, thankfully, isn’t as chock-full of lies and fake stats; it’s mostly just general fear mongering, noting that teenagers use social media a lot. I mean, duh. In my day, teens used the phone a lot. Kids communicate. Just like adults do.

There was also Anthony Liu, from the California Attorney General’s office. You’d hope that he would bring a sense of reality to the proceedings, but he did not. It was just more fear mongering and nonsense pretending to be about protecting the children. Liu had a colleague with him, bouncing a child on her lap as a prop, and Skinner chimed in, literally saying that she was an example of “the child we are trying to protect,” leading an Assemblymember to say “how can we say no?” to, apparently, whichever side brings in more cute kids.

And, that’s where we’re going to end part I. Things went totally off the rails after that, when two speakers spoke out against the bill, and a bunch of Assemblymembers on the Committee completely lost their minds attacking the speakers, social media, children, and more.

Still, we’ll close with this. If Senator Nancy Skinner had any integrity, she’d retract her statement, admit she’d been too hasty, admit that the evidence does not, in fact, support any of her claims, and suggest that this bill needs a lot more thought and a lot more input from experts, not grandstanding and moral panics.

I’m not holding my breath. You might not be able to order fentanyl as quickly as you can order an Uber, but you sure as hell can expect a California state elected official to cook you up a grandstanding, moral panic-driven monstrosity with about as much effort as it takes to order one.

Filed Under: addiction, body image, california, content moderation, eating disorder, junk science, mental health, nancy magee, nancy skinner, sb 680, social media, suicide
Companies: ccdh, tiktok

California Court Says New Records Law Covers Past Police Misconduct Records

from the your-tax-dollars-hard-at-work-hiding-stuff-from-taxpayers dept

The battle over public records in California continues. A new law made records of police misconduct releasable to the public, kicking off predictable legal challenges from law enforcement agencies not accustomed to accountability.

These agencies believe the law isn’t retroactive. In essence, they think the passage of the law allows them to whitewash their pasts by only providing records going forward from the law’s enactment. None other than the law’s author, Senator Nancy Skinner, has gone on record — with a letter to the Senate Rules Committee and the state Attorney General’s office — stating the law applies retroactively.

This has been ignored by the state AG, who has stated in records request denials that he believes the law can’t touch pre-2019 misconduct files. This is exactly what agencies challenging the law want to hear. Unfortunately for them, they’ve just been handed a loss by a California court.

A Contra Costa County judge on Friday refused to block public access to records of police misconduct that occurred before California’s new transparency law took effect, the first ruling in a string of police-backed lawsuits filed across the state.

Judge Charles Treat said it seemed unlikely the suing law enforcement unions would prevail on the merits as he denied the unions’ requested injunction. He pointed out that the new law has no impact on past misconduct itself. All it does is make those records available to the public.

“If it was illegal in 2018, it’s illegal in 2019,” Treat said. “It doesn’t change the legal principles applicable to anyone’s conduct.”

This was said in response to the unions’ argument that the release of old records would introduce new liabilities for officers. The availability of records may make it easier to sue officers, but it doesn’t change the fact they were always potentially liable for misconduct. It just used to be a lot easier to hide this misbehavior from the public.

The impact of this bench ruling is muted by Treat blocking his own unblocking for another ten days to allow the union to appeal his decision.

There’s a chance this ruling will be overturned, despite Sen. Skinner’s clarification. And it’s not the only legal battle being waged over the new transparency law. Multiple agencies are suing in multiple counties, and it’s probably going to take a trip to the state Supreme Court to resolve the issue.

These agencies may state publicly they believe the law isn’t retroactive, but their actions say something different. The Inglewood PD went so far as to get permission from the city council to shred all pre-2019 misconduct records prior to the law’s enactment date. As a local attorney points out, why would they have bothered if they felt the law would only affect records generated after January 1, 2019?

First Amendment and police misconduct attorney Matthew Strugar predicted the unions’ challenge will ultimately fail.

[…]

Cities, too, had expected SB 1421 to disclose existing records, Strugar said. “Why was Inglewood running its shredding machines 24/7 before the New Year?” he asked.

It’s a question no cop or city legislator in Inglewood wants to answer. Thanks to their cooperative effort, the likelihood of the PD being sued over unreleased misconduct is almost nil… easy to do when there’s nothing left to sue over.

Filed Under: california, foia, nancy skinner, police discipline, public records, retroactive, transparency

California AG Steps Up To Help Cops Pretend New Public Records Law Doesn't Apply To Past Misconduct Docs

from the bros-before-accountability,-as-they-say dept

The bullshit debate over California police misconduct records continues. A new law granting the public access to police misconduct records for the first time in decades has resulted in a slew of public records requests. It’s also resulted in a slew of refusals and legal challenges.

Some law enforcement agencies (and their unions) have chosen to believe the law erases their past misdeeds. Although the law contains no language limiting access to records created prior to January 1, 2019, some agencies have decided that this lack of specific language allows them to draw such an inference from the missing words. Multiple lawsuits have hit the California court system, which may soon force the state’s Supreme Court to deal with this mess, even if it took a hard pass on one law enforcement union’s attempt to get a preemptive declaration that past misconduct records are off-limits.

If these law enforcement agencies were truly seeking clarity, they were given a crystal clear explanation of the legislative intent from none other than the law’s author, Senator Nancy Skinner.

[I]t is my understanding in enacting SB 1421 that the change in the law applies to all disclosable records whether or not they existed prior to the date the statute went into effect…

This isn’t the answer cops wanted. They wanted someone to tell them they could whitewash the past and stonewall the future. Instead, the law’s author told them the law applies retroactively. If they missed their opportunity to destroy these records prior to the law’s enactment, that’s on them.

But they’re getting a little help from the state’s top cop. State attorney general Xavier Becerra has decided retroactivity is still an open question, despite Sen. Skinner’s statement on the issue.

The attorney general’s response to a public records request seeking that information references some superior court challenges to the law’s application to past records brought by police unions.

“We will not disclose any records that pre-date January 1, 2019 at this time,” Mark Beckington, supervising deputy attorney general, said in a response last Friday to a request from freelance reporter Darwin BondGraham.

This sentence follows a very dubious assumption by the attorney general’s office.

[U]ntil the legal question of retroactive application of the statute is resolved by the courts, the public interest in accessing these records is clearly outweighed by the public’s interest in protecting privacy rights.

Oh, really? But whose privacy rights? The public may want to protect their own privacy rights, but I doubt they’re more concerned about protecting the “privacy” of public servants who committed misconduct on the public’s dime.

AG Becerra is deliberately confused by the retroactivity non-question. Sen. Skinner, the law’s author, is honestly confused.

“I find the AG’s interpretation puzzling considering that we have law enforcement agencies up and down the state, including our California Highway Patrol, releasing records…”

Also confusing: the AG was sent a copy of the same letter Skinner sent to the Senate Rules Committee clarifying the law’s retroactive powers.

Cops have a friend in high places. With this action, he’s the best friend a bad cop could have. But he’s only delaying the inevitable. These records will be in the public’s hands. If the courts somehow find in favor of law enforcement agencies, this only keeps the past a secret. Unless police misconduct is somehow also only a thing of the past, California cop shops will still be generating a whole lot of publicly-accessible documents.

Filed Under: california, foia, nancy skinner, police, police discipline, retroactive, transparency, xavier becerra

Author Of California's Public Records Law: The Law Covers Old Police Misconduct Files, Not Just The New Ones

from the stick-that-in-your-deliberate-obtuseness,-PD-officials dept

For the first time in years, California police misconduct records are accessible by the public. There’s a huge asterisk on that sentence because, so far, law enforcement agencies have been unwilling to hand them over.

One police department decided to purge all of its old records before the law went into effect, mooting the question with a questionable memory-holing. Other agencies have told requesters the law isn’t retroactive, pretending the law says something it doesn’t. A sheriff’s union tried to force the question by petitioning the state’s supreme court, but the court declined the opportunity to clarify the law’s ability to open up records of past misconduct.

At this point it’s clear PDs aren’t interested in complying with the new law. They’ll sit on records until they’re forced out of their hands by lawsuits. This isn’t how transparency is supposed to work. The law wasn’t a History Eraser button for old files and it certainly isn’t there to assist PDs in withholding documents they’re definitely obligated to turn over to the public.

Most law enforcement agencies appear to believe the law hit the reset button on misconduct records, obligating them to release only records created after the law went into effect (January 1, 2019). Again, the law says nothing about only affecting records going forward, but since it doesn’t say anything specifically about past misconduct records, law enforcement agencies will continue to pretend it doesn’t affect those until courts tell them otherwise.

Whenever the courts take up the question, they’ll have to examine the bill-making process and the law itself to determine its legislative intent. The law doesn’t have to specifically order the release of pre-2019 documents if it’s clear legislators intended the law to be retroactive. Fortunately for those suing PDs over withheld documents, the legislation’s author has decided to clear the air on the law police departments are conveniently and deliberately misunderstanding.

In a one-page letter to the state Senate Rules Committee, Sen. Nancy Skinner (D-Berkeley), sought to clarify the intent of the law, which opens up records of shootings by officers, severe uses of force and confirmed cases of sexual assault and lying by officers.

In the letter obtained by The Times, Skinner said any relevant discipline records kept by a government agency should be disclosed under the new law, which was approved last year.

“Therefore, it is my understanding in enacting SB 1421 that the change in the law applies to all disclosable records whether or not they existed prior to the date the statute went into effect,” Skinner wrote. “This is the standard practice for public records legislation in California.”

We’ll see how quickly this letter results in the lifting of the temporary restraining order secured by Contra Costa law enforcement agencies, which are being sued by California newspapers for refusing to turn over historical misconduct files. There doesn’t seem to be any room for misunderstanding in Skinner’s letter. But if anyone’s incapable of understanding crystal clear laws, it’s law enforcement agencies.

Filed Under: california, foia, nancy skinner, police, police misconduct, public records, retroactive