child safety – Techdirt
Ctrl-Alt-Speech: Locate Your Nearest X-it
from the ctrl-alt-speech dept
Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.
Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.
In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:
- Spanish newspaper La Vanguardia follows The Guardian in quitting Elon Musk’s X due to disinformation and ‘disturbing content’ (Fortune)
- Bluesky attracts millions as users leave Musk’s X after Trump win (Reuters)
- Advertisers set to return to X as they seek favour with Elon Musk and Donald Trump (Financial Times)
- The plan to ban children under 16 from social media (The Times)
- Should smartphones be banned in schools? (Financial Times)
- Facebook and Instagram to Offer Subscription for No Ads in Europe (Facebook)
- An update on political advertising in the European Union (Google)
- Facebook’s Algorithms Think a Small English Community Is Up to No Good (Gizmodo)
- Phony X accounts are meddling in Ghana’s election (Rest of World)
- Sockpuppet network impersonating Americans and Canadians amplifies pro-Israel narratives on X (DFR Lab)
This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.
Filed Under: child safety, content moderation, donald trump, smartphones, social media
Companies: bluesky, facebook, meta, the guardian, twitter, x
Ctrl-Alt-Speech: An Appeal A Day Keeps The Censor Away
from the ctrl-alt-speech dept
Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.
Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.
In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:
- States sue TikTok over app’s effects on kids’ mental health (CNBC)
- Risks vs. Harms: Youth & Social Media (danah boyd)
- Instagram and Threads moderation is out of control (The Verge)
- TikTok lays off hundreds in Malaysia in move toward AI moderation (Asia Nikkei)
- Meta ‘Supreme Court’ expands with European center to handle TikTok, YouTube cases (Washington Post)
- The DSA article you didn’t know about, but should (T&S Insider)
- Streaming platform Kick bans Jack Doherty after he crashed his car on a livestream (Polygon)
- Truth Social Users Are Losing Ridiculous Sums of Money to Scams (Gizmodo)
- Hacked ‘AI Girlfriend’ Data Shows Prompts Describing Child Sexual Abuse (404 Media)
This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.
Filed Under: ai, artificial intelligence, child safety, content moderation, dsa, scams
Companies: facebook, instagram, kick, meta, tiktok, truth social
A Whole Bunch Of States File Garbage Grandstanding Lawsuits Against TikTok With The Main Complaint Being ‘Kids Like It’
from the what-a-waste-of-everyone's-time dept
You may have seen the news yesterday about 14 attorneys general filing lawsuits against TikTok. It was covered in a bunch of places, including Reuters, CNBC, the NY Times, the Washington Post, NPR, CNN and more. And, bizarrely, none of them seemed to include links to any actual complaint. It took me a little while to realize why. Partially it’s because the mainstream media often doesn’t link to actual complaints (though a lot of the sources named here are normally better about it). But more likely the issue was that this wasn’t “14 attorneys general team up to file a lawsuit” like we’re used to. It was “14 AGs coordinated to file individual lawsuits in their home state courts, alleging the same thing.”
We’ll get into a bit more detail shortly about what’s in the lawsuit, but the summary is “kids use TikTok too much, so your features that kids like must be illegal.” It’s really that simplistically stupid.
And it’s actually fifteen AGs, because on Friday, Ken Paxton also filed a case against TikTok that has some similarities. This suggests that the organizers of all these cases approached Paxton to join in. Then, like the total asshole he is, he decided to file a few days early to try to steal the thunder.
So, anyway, here are thirteen of these complaints. I am providing them here, despite the fact that Techdirt’s entire budget probably doesn’t cover the cost of coffee in the newsrooms of all of the publications listed above who refused to do the work that just took me quite some time.
Just something to think about when you consider which kinds of news orgs you want to support. It was originally thirteen, now fourteen, instead of fifteen because (1) Oregon hasn’t actually filed its complaint yet, and says it will sometime today (a day after the rest), so it wasn’t available as I was writing this, and (2) Kentucky doesn’t seem to have put out a press release about its filing (every other state did) (update: thanks to a reader for getting me Kentucky’s lawsuit, which has now been added).
Anyway… it’s not worth going through all the complaints other than to note that most of them are quite similar and can be summed up as “TikTok made a product kids like to use, and we’re sure that violates consumer protection laws somehow.”
I’ll pick on New York’s filing out of the batch because it lays out the contents of the argument in a way that’s easy to see and recognize that they’re literally saying “oh, providing features users like is getting kids to use the site more.”
![Table of contents from New York’s complaint against TikTok](https://i0.wp.com/lex-img-p.s3.us-west-2.amazonaws.com/img/f6020727-9fa5-4ac3-8b3f-5801dc749c63-RackMultipart20241009-195-i5v1yo.png?ssl=1)

I. TikTok’s Business Model is to Maximize Young Users’ Time on the Platform
II. TikTok is Designed to Be Addictive
  A. Conscious Exploitation of Dopamine in Young Users
  B. TikTok Uses Multiple Features to Manipulate Users into Compulsive and Excessive Use
    - “For You” Feed
    - Autoplay
    - Endless Scroll
    - Ephemeral Content: TikTok Stories and TikTok LIVE
    - Push Notifications
    - Likes, Comments, and Other Interactions
  C. TikTok Designs and Provides Beauty Filters That It Knows Harm Young Users
  D. TikTok Challenges Have Caused Deaths and Illegal Behavior
III. Minors Are Especially Susceptible to Compulsive Use of TikTok
  A. The United States Surgeon General’s Warning
  B. Teen Mental Health in New York has Declined for Years
  C. Social Media Addiction Compared to Substance Addiction
This goes on for another couple pages, but it doesn’t get much better. There’s a heading claiming that “journalists have reported on the harms on TikTok for years.” That does not mean TikTok is liable for kids doing stupid shit on TikTok. You would think these high-powered lawyers would also know that journalists aren’t always entirely accurate.
But just looking at the parts above, this lawsuit is laughable. First of all, any business’s focus is to try to maximize customers’ use. That’s… capitalism. Are all these states saying that restaurants that serve good food are violating consumer protection laws by getting people to want to come back frequently?
Again, features that people like are not illegal. We don’t let government decide what features go in software for good reason, and you can’t do that just because you claim it’s a consumer protection issue with no actual evidence beyond whims.
As for the TikTok challenges, that’s not TikTok doing it. It’s TikTok users, and such challenges long predate TikTok; many of the reports of viral TikTok challenges are cases of the media falling for myths and nonsense. Blaming TikTok for challenges is not just weird, it’s legally incomprehensible. Years ago such challenges would get sent around via email or Usenet forums or whatever. Did we sue email providers? Of course not.
That last section is also scientific nonsense. The Surgeon General’s report makes it quite clear that the scientific evidence does not say social media is inherently harmful to mental health. And, no, “social media addiction” is nothing like substance addiction, which is literally a chemical addiction.
These lawsuits are embarrassing nonsense.
If you file a lawsuit, you have to explain your cause of action. You don’t get to just say “infinite scroll is bad, therefore it violates consumer protection laws.” Notably, the NY complaint is 64 pages of screaming about how evil TikTok is, and it only gets to the actual claims at the very end, with basically no explanation. It just vaguely states that all of the stuff people are mad about regarding TikTok violates laws against “fraudulent” and “deceptive” business conduct.
Honestly, these cases are some of the weakest lawsuits I’ve ever seen filed by a state AG.
In many ways, they’re quite similar to the many, many lawsuits filed over the last couple of years against social media companies by school districts. Those were embarrassing enough, but at least I could understand that those were filed by greedy class action plaintiffs’ lawyers hoping to get a massive payday and not caring about the actual evidence.
Elected officials file these cases using taxpayer money. For what? Well, obviously for election season. Every single one of these AGs is at least a good enough lawyer to know that the lawsuits are absolute fucking garbage and are embarrassing.
But golly, it’s one month from election day. So why not get that press release out there claiming that you’re “protecting kids from the evils of TikTok”?
It’s cynical fucking nonsense. All of the Attorneys General involved should be ashamed of wasting taxpayer money, as well as valuable court time and resources, on such junk. There are plenty of legitimate consumer protection issues to take up. But we’re wasting taxpayer money because TikTok has a “for you” feed that tries to recommend more interesting content?
Come on.
Filed Under: addictive feeds, california, child safety, consumer protection, dc, for you, ken paxton, letitia james, moral panic, new york, rob bonta, state attorneys general, texas
Companies: tiktok
Ctrl-Alt-Speech: Smells Like Teen Safety
from the ctrl-alt-speech dept
Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.
Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.
In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:
- Breton’s resignation could mark a new chapter for EU digital policy (Euractiv)
- Finnish horse enthusiast is an EU tech front-runner (Politico)
- Instagram, Facing Pressure Over Child Safety Online, Unveils Sweeping Changes (New York Times)
- Instagram to make teenagers’ profiles private by default (Financial Times)
- AI chatbots might be better at swaying conspiracy theorists than humans (Arstechnica)
- SocialAI offers a Twitter-like diary where AI bots respond to your posts (TechCrunch)
- Meta bans Russian state media for ‘foreign interference’ (Reuters)
This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.
Filed Under: ai, artificial intelligence, chatbots, child safety, content moderation, teen safety, thierry breton
Companies: instagram, meta, socialai
Ctrl-Alt-Speech: Blunder From Down Under
from the ctrl-alt-speech dept
Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.
Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.
In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike is joined by guest host Riana Pfefferkorn, a Policy Fellow at the Stanford Institute for Human Centered AI. They cover:
- Australia threatens fines for social media giants enabling misinformation (Reuters)
- Social media ban for children to be introduced this year, but age limit undetermined (ABC)
- ASIO director-general Mike Burgess issues warning to big tech companies they may soon be forced to unlock encrypted chats (ABC)
- Utah Social Media Restrictions Likely Violate First Amendment, Judge Rules (Media Post)
- Nearly 40 states back surgeon general’s social media warning labels (The Verge)
- How TikTokers think about misinformation (Washington Post)
- Meta, TikTok, and Snap pledge to participate in program to combat suicide and self-harm content (TechCrunch)
This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.
Filed Under: asio, australia, child safety, content moderation, first amendment, social media, utah
Companies: snap, tiktok
Ctrl-Alt-Speech: I Bet You Think This Block Is About You
from the ctrl-alt-speech dept
Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.
Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.
In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:
- Jim Jordan Demands Advertisers Explain Why They Don’t Advertise On MAGA Media Sites (Techdirt)
- TikTok Has a Nazi Problem (Wired)
- NazTok: An organized neo-Nazi TikTok network is getting millions of views (Institute for Strategic Dialogue)
- How TikTok bots and AI have powered a resurgence in UK far-right violence (The Guardian)
- Senate Passes Child Online Safety Bill, Sending It to an Uncertain House Fate (New York Times)
- The teens lobbying against the Kids Online Safety Act (The Verge)
- Social Media Mishaps Aren’t Always Billionaire Election Stealing Plots (Techdirt)
- X suspends ‘White Dudes for Harris’ account after massive fundraiser (Washington Post)
- Why Won’t Google Auto-complete ‘Trump Assassination Attempt’? (Intelligencer)
- ‘Technical glitch’ is no longer an excuse (Everything in Moderation from 2020)
- A message to our Black community (TikTok from 2020)
This episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and by our sponsor Discord. In our Bonus Chat at the end of the episode, Mike speaks to Juliet Shen and Camille Francois about the Trust & Safety Tooling Consortium at Columbia School of International and Public Affairs, and the importance of open source tools for trust and safety.
Filed Under: child safety, content moderation, coppa, jim jordan, kosa, social media
Companies: google, tiktok, twitter, x
Senate To Kids: We’ll Listen To You When You Agree With Us On KOSA
from the listen-to-the-children...-not-those-kids dept
Apparently, Congress only “listens to the children” when they agree with what the kids are saying. As soon as some kids oppose something like KOSA, their views no longer count.
It’s no surprise given the way things were going, but the Senate today overwhelmingly passed KOSA by a 91 to 3 vote. The three no votes were from Senators Ron Wyden, Rand Paul, and Mike Lee.
There are still big questions about whether the House will follow suit, and, if so, how different their bill would be, and how the bills from the two chambers would be reconciled, but this is a step closer to KOSA becoming law, and creating all of the many problems people have been highlighting about it for years.
One thing I wanted to note, though, is how cynical the politicians supporting this have been. It’s become pretty typical for senators to roll out “example kids” as a kind of prop for why they have to pass these bills. They will have stories about horrible things that happened, but with no clear explanation for how this bill would actually prevent those bad things, while totally ignoring the many other bad things the bill would cause.
In the case of KOSA, we’ve already highlighted how it would do harm to all sorts of information and tools that are used to help and protect kids. The most obvious example is LGBTQ+ kids, who often use the internet to help find their identity or to communicate with others who might feel isolated in their physical communities. Indeed, GOP support for KOSA was conditioned on the idea that the law would be used to suppress LGBTQ+ related content.
But I did find it notable how little attention was given last week, after all of the pro-KOSA team’s use of kids as props to pass the bill, to the ACLU sending hundreds of students to Congress to tell lawmakers how much KOSA would harm them.
Last week, the American Civil Liberties Union sent 300 high school students to Capitol Hill to lobby against the Kids Online Safety Act, a bill meant to protect children online.
The teenagers told the staffs of 85 lawmakers that the legislation could censor important conversations, particularly among marginalized groups like L.G.B.T.Q. communities.
“We live on the internet, and we are afraid that important information we’ve accessed all our lives will no longer be available,” said Anjali Verma, a 17-year-old rising high school senior from Bucks County, Pa., who was part of the student lobbying campaign. “Regardless of your political perspective, this looks like a censorship bill.”
But somehow, that perspective gets mostly ignored in all of this.
It would have been nice to have had an actual discussion on the policy challenges here, but from the beginning, KOSA co-sponsors Richard Blumenthal and Marsha Blackburn refused to take any of the concerns about the bill seriously. They frequently insisted that any criticism of the bill was just “big tech” talking points.
And, while they made cosmetic changes to try to appease some, the bill does not (and cannot) fix its fundamental problems. The bill is, fundamentally at its heart, a bill about censorship. And, while it does not directly demand censorship, the easiest and safest way to comply with the law will be to take down whatever culture war hot topic politicians don’t like.
It’s kind of incredible that many of those who voted for the bill today were big supporters of the Missouri case against the administration (including Missouri’s Attorney General who brought that suit, Eric Schmitt, who voted in favor of KOSA today). So, apparently, according to Schmitt, governments should never try to influence how social media companies decide to take down content, but the government should also have the power to take enforcement action against companies that don’t take down content the FTC decides is harmful.
There is a tremendous amount of hypocrisy here. And it would be nice if someone asked the senators voting in favor of this law why they were going against the wishes of all the kids who visited the Hill last week. After all, that’s what the senators who trotted out kids on the other side tried to do to those few senators who pointed out the flaws in this terrible law.
Filed Under: child safety, kids, kosa, mike lee, rand paul, ron wyden, senate, think of the children
Ctrl-Alt-Speech: A Lack Of (Under)Standing
from the ctrl-alt-speech dept
Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.
Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.
In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:
- Supreme Court Sees Through The Nonsense, Rejects Lower Courts’ Rulings Regarding Social Media Moderation (Techdirt)
- How Mark Zuckerberg’s Meta Failed Children on Safety, States Say (New York Times)
- New Features to Help Protect Our Community (Snap)
- Trends in Financial Sextortion (Thorn/NCMEC)
- Teens lean on AI for mental health support (Mercury News)
- I Paid $365.63 to Replace 404 Media With AI (404 Media)
- Companies see backlog in DSA transparency database (Euronews)
This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.
Filed Under: ai, artificial intelligence, child safety, dsa, first amendment, free speech, jawboning, mark zuckerberg, murthy v. missouri, supreme court
Companies: meta, snap
Techdirt Podcast Episode 396: Raising Kids In A Digital World
from the hat-trick dept
We weren’t planning to do a series, but after our last two episodes with Alice Marwick and then Candice Odgers, things have lined up nicely for a trifecta of episodes about the current moral panic around kids and social media. This week, we’re joined by Dr. Devorah Heitner, an expert on kids and technology and author of the recent book Growing Up In Public, as well as a Substack about mentoring kids in a connected world, to discuss what parents really need to know about kids, social media, and the internet.
Follow the Techdirt Podcast on Soundcloud, subscribe via Apple Podcasts or Spotify, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
Filed Under: child safety, devorah heitner, podcast, social media
Techdirt Podcast Episode 395: What An Actual Expert Thinks About Kids & Social Media
from the genuine-insight dept
In the conversation about keeping kids safe online, the actual experts with the most to offer are all too often treated as outsiders and interlopers. One such expert is Candice Odgers, Professor of Psychological Science and Informatics at the University of California Irvine, who has recently been involved in a lot of debates against people who are very confident despite having far less information and expertise. This week, she joins us for something of a follow-up to our previous episode, to have a more productive discussion about the real challenges with kids and social media, and the real efforts to address them.
Follow the Techdirt Podcast on Soundcloud, subscribe via Apple Podcasts or Spotify, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
Filed Under: candice odgers, child safety, podcast, social media