
California Lawmakers Say It’s Time To Regulate The Internet The Same Way China Does

from the elect-better-people dept

Here’s Part Two of my two-parter about the Satanic Panic-level moral panic that has befallen the disconnected-from-reality California legislature (in a bipartisan way) as it seeks to destroy the internet “to protect the children.”

In my previous post, I covered the opening remarks by California Senator Nancy Skinner in support of her bill, SB 680, which is based on her own nonsensical misreading of an already sketchy study by an advocacy group. She made a bunch of blatantly false claims, such as that you could order fentanyl online faster than you could get an Uber, or that kids saw eating disorder content every 39 seconds without interacting with social media algorithms at all (the actual study showed that the fastest any of its eight “sample” accounts saw any eating disorder content was after 8 minutes, and it only tested TikTok). You can read that post for the full analysis of why Skinner ought to retract her statements and pull the bill.

But, incredibly, the hearing went even further off the rails. Two speakers were given a chance to testify about the many problems with SB 680 (some of which I discussed in the first post): Leah Nitake from TechNet and Jess Miers from Chamber of Progress. Both made clear, straightforward statements about how they (obviously) support the goal of better protecting children online (who doesn’t?), while highlighting the many problems with this bill and how it will lead to harmful rather than helpful outcomes.

As Nitake noted:

SB680 prohibits any designs, features or algorithms that could cause a child user to take certain actions. But the bill is incredibly unclear about how that causation happens. One of the actions subject to liability is if a child develops an eating disorder. But it’s fair to ask, under this bill, would viewing a workout video be considered to trigger an eating disorder? Does cause mean that the platform is the only factor that caused the eating disorder, a contributing factor, a substantial factor? And would that apply to all child users, any child user or the average child user?

What about content related to recovering from an eating disorder that’s meant to be empowering but may actually be triggering for some child users?

These seem like pretty important questions. We’ll get to the lawmakers’ responses in a moment, but I’ll give you a bit of a spoiler: they don’t address them at all.

Miers was next up, and raised similar and related concerns:

The bill’s vague definition of addiction will leave platforms to make impossible decisions about what kids should see online. Given the bill’s liability provisions, platforms may choose to restrict access to California youth disproportionately impacting marginalized teens, including LGBTQ+ youth seeking support and teens in need of reproductive and sexual health information. Indeed, a recent Pew study found that the majority of teens consider social media a social lifeline.

SB680’s restrictions on designs, algorithms and features related to self-harm will inadvertently limit access to crucial self-help resources. This includes information on recognizing warning signs for suicide and algorithms that redirect at risk users to immediate help like the suicide hotline.

The bill similarly hampers platforms’ ability to guide child users towards positive content when searching for information on disordered eating.

Lastly, SB680 will likely result in the removal of substance abuse resources from libraries, pharmacies, and other organizations. This includes information on identifying signs of drug use and obtaining lifesaving products like Narcan. Consequently, individuals may resort to unreliable sources, increasing the risks associated with substance abuse.

Did the Assemblymembers present address any of these issues? No, they did not.

Did the Assemblymembers ask either Miers or Nitake about these issues in order to look for ways to address them? I don’t know Nitake, but Miers is an expert on this stuff, has worked in trust & safety before, and was right there in front of them, ready to answer their questions on making sure the bill actually protected children rather than harmed them.

Instead, one by one, they made fools of themselves, and raised serious questions about the competence of the California legislature.

First up: Josh Lowenthal, son of two former California legislators, who seems to have the grandstanding moral panic performance art down pat:

I’m living this right now. As a dad of three adolescent girls, I’m in this and with respect to the opposition, the words I have for you are shame on you.

Shame on you? Shame on you? For pointing out that this terrible bill will harm children, not help them? Shame on you? For pointing out that the broad language of the bill will literally require companies to remove life-saving information, or to help guide people to useful resources around mental health and body image issues?

No, Assemblymember Lowenthal, shame on you, for grandstanding in support of this dangerous nonsense.

For what it’s worth, Lowenthal then literally claims that all of the problems raised by Nitake and Miers can be solved with AI, and he’s sure of that because he’s worked in tech.

I come from a career in tech. Some of the greatest minds we have in the state of California are in tech, and we do not have to put at risk these unintended consequences. The levels of sophistication that can be drawn in this artificial intelligence preclude the very things that you’re talking about, have complete and total faith in our tech community for doing the right thing and protecting our youth and measuring the right thing.

I mean, half of that is word salad, but he really seems to be arguing that if tech would just “nerd harder” a bit, they’d magically figure out how to stop bad stuff online while keeping the good stuff. This is disconnected-from-reality, QAnon-level nonsense. It’s not how any of this works.

And does Lowenthal really think that if Meta, Google, TikTok and others could wipe out harmful content with AI they wouldn’t have already done that?

His claim that he spent his career in tech made me wonder what his actual experience is. It looks like he spent years as an exec at FreeConferenceCall.com (which, amusingly, Techdirt used to use for our conference calls), and which initially existed as a kinda sketchy arbitrage play around local telephone exchanges. More recently he’s been at Plum, a company that helps MVNOs get set up, though it looks like T-Mobile just bought Plum at the same time it bought a few of the remaining MVNOs off the market.

Neither of these jobs would likely give him anywhere near the requisite knowledge on either AI or trust & safety challenges of handling dangerous content for kids. And it shows.

He goes on to complain that his own kids are questioning their self-worth and body image because of social media. Which raises the question of why he lets them use it. But, really, this is the same moral panic we’ve seen before. Remember, in 1878, the NY Times told us that Edison’s phonograph and aerophone would lead to a “complete disorganization of society” where “men and women will flee from civilization.” Even worse, “business, marriage, and all social amusements will be thrown aside.”

There’s also some random nonsense from Lowenthal complaining about the kids these days and how they apparently don’t want to “become an astronaut” or “how to break ceilings that you may not know are in front of you.” And, I mean, sure. But silencing any controversial content on social media isn’t going to do anything about that.

Josh Lowenthal’s attempt to shame people for highlighting the legitimate dangers of this bill he supports, while suggesting that magic AI will make it all work, is embarrassing for the state of California. Long Beach: next time, elect someone who has a clue.

Incredibly, the next guy up, Assemblymember Bill Essayli, a lawyer and former Assistant US Attorney, is even worse. He asks the two speakers if they’d seen the Social Dilemma. As you’ll recall, our review of the Social Dilemma talks about how everything it accuses “big tech” of doing, it does itself. The movie is chock full of misinformation and Hollywood “dark patterns” to try to manipulate gullible people into believing false things. Apparently it worked on Essayli:

Question for the opposition. Have you guys seen the Social Dilemma on Netflix?

Have you? You haven’t seen it? Well, I highly encourage you to watch it because it’s very interesting when you watch what the experts in tech who built the systems talk about how they use the systems to manipulate human behavior and they rely on brain reward mechanisms like dopamine to attract and addict people to their platforms. And I think it’s been very, very successful. And you could see our kids today, they’re glued to their phones, they’re highly addicted.

I mean, the people in the Social Dilemma are hardly “experts.” The two most prominent voices are both selling stuff. One is selling fear, because he’s built an extraordinarily lucrative career out of scaring people about new technologies. Another, at the time the Social Dilemma was made, was literally running a company trying to sell software to help “protect” your kids from social media’s ills.

Think maybe they had a reason to play up the “evils” of the technology and the “power” of social media to give you dopamine hits? Maybe?

Well, Essayli has bought into it and says he hasn’t seen any benefits to social media:

So I haven’t seen a lot of benefits from kids being on social media to be honest. I think it’s been … on the balance, it’s been a lot more harmful. And so I do think it’s a public health issue. Whether this is the right solution, I don’t know. I plan to support it. I have a feeling you’ll gum it up in the court system. And so I just encourage that we as policy makers continue to really take this seriously and figure out a solution.

What you think, Assemblymember Essayli, and what reality (and lots and lots of research) shows, are two very different things. Elsewhere, Essayli quotes the Surgeon General’s report, but it’s clear he didn’t read it. Perhaps he only read the one half-sentence he quoted in his remarks. Because that same report actually details many of the benefits that Essayli insists he’s never seen.

Here, I’ll help you out, Assemblymember. This is from the Surgeon General’s report you pretended to read:

Social media can provide benefits for some youth by providing positive community and connection with others who share identities, abilities, and interests. It can provide access to important information and create a space for self-expression. The ability to form and maintain friendships online and develop social connections are among the positive effects of social media use for youth. These relationships can afford opportunities to have positive interactions with more diverse peer groups than are available to them offline and can provide important social support to youth. The buffering effects against stress that online social support from peers may provide can be especially important for youth who are often marginalized, including racial, ethnic, and sexual and gender minorities.

For example, studies have shown that social media may support the mental health and well-being of lesbian, gay, bisexual, asexual, transgender, queer, intersex and other youths by enabling peer connection, identity development and management, and social support. Seven out of ten adolescent girls of color report encountering positive or identity-affirming content related to race across social media platforms. A majority of adolescents report that social media helps them feel more accepted (58%), like they have people who can support them through tough times (67%), like they have a place to show their creative side (71%), and more connected to what’s going on in their friends’ lives (80%). In addition, research suggests that social media-based and other digitally-based mental health interventions may also be helpful for some children and adolescents by promoting help-seeking behaviors and serving as a gateway to initiating mental health care.

But, no, we should ban all that (which this bill will do), because Essayli personally is unaware of any benefits, even though these are all detailed in the report he claimed to have read.

Elect better fucking people.

Essayli’s answer is to go the ridiculous, blatantly unconstitutional Utah route.

My personal views, I’ve made it clear is that I don’t think kids should be on social media. I think Utah’s done that. Some other states are doing that. I think social media is more harmful than tobacco, so we should treat it the way we treat tobacco. You cannot … try to find a kid who can buy tobacco in California. It’s almost impossible. So we can stop kids from getting tobacco. I think we can stop them from getting on social media.

Did Chapman University School of Law, where you got your law degree, not teach you the difference between constitutionally protected speech… and tobacco? Because, shit, that’s embarrassing.

Tobacco is a product. Speech is speech. One of those is protected by the Constitution. One of those is not. You’d think that somewhere during the years you spent in law school, or as an actual attorney in the Justice Department, someone, somewhere, would have taught you that?

Believe it or not, it gets worse.

There’s first a brief statement from one person who admits that the language is probably too broad and “problematic,” but says she’ll support it anyway because “it’s a starting point.” Which is not how anyone should be making laws.

Then up is Assemblymember Ash Kalra. I’ve heard great things about Kalra in the past from some smart people, so his statements here were incredibly disappointing. He literally suggests that if China can regulate the internet, so can California.

And as someone that’s on many platforms, including TikTok, the reality is if you look at in China, they regulate TikTok heavily and only allow for educational content for young people. And yet here we just presume that we don’t have control over it and it’s just completely not true. We do have some degree of control, and to our colleagues’ point, because of a lack of federal action we can’t just sit on our hands, especially given the fact that there’s technology, much of it is being created here in our own backyard, in my district and near my district. And so I do think that we have an extra obligation to protect our youth and to ensure that these very valuable … otherwise very valuable social media companies and experiences that they can provide are done so in a way that creates the least amount of harm possible to our youth. So I want to thank the senator and would also like to be added on as a co-author.

Um, Assemblymember Kalra, I’m not going to tell you how to do your job, but I might humbly suggest that when you’re suggesting we literally take a regulatory page from China’s giant authoritarian internet censorship regime, colloquially known as the Great Firewall of China… you’re already losing.

China can regulate TikTok heavily because it’s an authoritarian country with no freedom of speech.

I would hope that an elected official in the US would understand that?

And, with that, Senator Skinner closed out the hearing, again showing how completely and ridiculously out of touch and confused she is. She claims that the bill was “carefully constructed,” which no one who has read it could possibly believe.

But she closes with the only attempt by any of the speakers to respond to the specific, delineated harms raised by Miers and Nitake. Except she does a terrible job of it.

And so I did not want to, as I think somebody said to somebody, throw the baby out with the bath water. I wanted really to get at those aspects of it that create harm. And so that’s really what we attempted to do. Now on the broadness, if this bill were a private right of action, then I can certainly see that point. But given that if the only ability to enforce it is through our public prosecutors, I think most of us know that prosecutors don’t tend to take things to court and act on something unless they have good evidence that it is violated more than just the spirit of that. So that’s another reason it was designed that way. And I would guess it will be put in court and we will see how all of that goes.

Senator, have you met local prosecutors or state AGs? They’re often political animals, with political ambitions, happy to take on crazy cases for the headlines. The idea that they’ll only take on cases where they have “good evidence” that something “violated more than just the spirit” of the law is ludicrous and completely ahistorical. We know how this goes. We’ve seen how it goes. And it does not go well. I could point to dozens of cases brought by state AGs that had no chance of succeeding, but were brought to get headlines, especially ones about how those AGs (who are usually trying to get elected governor or senator next) are “protecting the children.”

But, really, any law where people point out how broad it is, and your best answer is “well, I’m sure no one will abuse it,” followed by a guess that “it will be put in court and we will see how all of that goes,” should be a law that is pulled from consideration, shredded, and then burned in a fire pit.

Laws are abused all the time. Including by local prosecutors. But this one is even worse: EVEN IF prosecutors never “go to court,” the very nature of the bill and the broadness of its language mean that sites will feel compelled, to avoid any risk of liability, to remove all sorts of content, including the content that Miers and Nitake detailed: content that is tremendously helpful for marginalized groups and those at risk.

It is stunning, if not surprising, and depressing, that this is how the California legislature functions today. That it would (1) push bills based on misrepresentations of junk science, then (2) shame people for pointing out the very real dangers of those bills, while (3) suggesting we go further in emulating Chinese censorship and banning children (who have rights too, you know) from social media entirely, all while insisting that nothing good has ever come to kids from social media (despite tons of evidence to the contrary, including in reports its members pretend to have read), suggests that the entire California legislature is not fit for purpose.

Elect better people, California. What we have now is a joke.

Filed Under: addiction, ash kalra, bill essayli, california, jess miers, josh lowenthal, leah nitake, moral panic, nancy skinner, protect the children, sb 680, social media, social media addiction