coppa – Techdirt
Ctrl-Alt-Speech: I Bet You Think This Block Is About You
from the ctrl-alt-speech dept
Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.
Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.
In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:
- Jim Jordan Demands Advertisers Explain Why They Don’t Advertise On MAGA Media Sites (Techdirt)
- TikTok Has a Nazi Problem (Wired)
- NazTok: An organized neo-Nazi TikTok network is getting millions of views (Institute for Strategic Dialogue)
- How TikTok bots and AI have powered a resurgence in UK far-right violence (The Guardian)
- Senate Passes Child Online Safety Bill, Sending It to an Uncertain House Fate (New York Times)
- The teens lobbying against the Kids Online Safety Act (The Verge)
- Social Media Mishaps Aren’t Always Billionaire Election Stealing Plots (Techdirt)
- X suspends ‘White Dudes for Harris’ account after massive fundraiser (Washington Post)
- Why Won’t Google Auto-complete ‘Trump Assassination Attempt’? (Intelligencer)
- ‘Technical glitch’ is no longer an excuse (Everything in Moderation from 2020)
- A message to our Black community (TikTok from 2020)
This episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and by our sponsor Discord. In our Bonus Chat at the end of the episode, Mike speaks to Juliet Shen and Camille Francois about the Trust & Safety Tooling Consortium at Columbia School of International and Public Affairs, and the importance of open source tools for trust and safety.
Filed Under: child safety, content moderation, coppa, jim jordan, kosa, social media
Companies: google, tiktok, twitter, x
FTC Oversteps Authority, Demands Unconstitutional Age Verification & Moderation Rules
from the is-that-allowed? dept
Call me crazy, but I don’t think it’s okay to go beyond what the law allows, even in pursuit of “good” intentions. It is consistently frustrating how this FTC continues to push the boundaries of its own authority, even when the underlying intentions may be good. The latest example: in its order against a sketchy messaging app, the agency has demanded things it’s not clear it can order.
This has been a frustrating trend with this FTC. Making sure the market is competitive is a good thing, but bringing weak and misguided cases makes a mockery of its antitrust power. Getting rid of non-competes is a good thing, but the FTC doesn’t have the authority to do so.
Smacking down sketchy anonymous messaging apps preying on kids is also a good thing, but once again, the FTC seems to go too far. A few weeks ago, the FTC announced an order against NGL Labs, a very sketchy anonymous messaging app that was targeting kids and leading to bullying.
It certainly appears that the app was violating some COPPA rules on data collection for sites targeting kids. And it also appears that the app’s founders were publicly misrepresenting aspects of the app, as well as hiding that, when users were charged, they were actually being signed up for a weekly subscription. So I have no issues with the FTC going after the company for those things. Those are the kinds of actions the FTC should be taking.
The FTC’s description highlights at least some of the sketchiness behind the app:
After consumers downloaded the NGL app, they could share a link on their social media accounts urging their social media followers to respond to prompts such as “If you could change anything about me, what would it be?” Followers who clicked on this link were then taken to the NGL app, where they could write an anonymous message that would be sent to the consumer.
After failing to generate much interest in its app, NGL in 2022 began automatically sending consumers fake computer-generated messages that appeared to be from real people. When a consumer posted a prompt inviting anonymous messages, they would receive computer-generated fake messages such as “are you straight?” or “I know what you did.” NGL used fake, computer-generated messages like these or others—such as messages regarding stalking—in an effort to trick consumers into believing that their friends and social media contacts were engaging with them through the NGL App.
When a user would receive a reply to a prompt—whether it was from a real consumer or a fake message—consumers saw advertising encouraging them to buy the NGL Pro service to find out the identity of the sender. The complaint alleges, however, that consumers who signed up for the service, which cost as much as $9.99 a week, did not receive the name of the sender. Instead, paying users only received useless “hints” such as the time the message was sent, whether the sender had an Android or iPhone device, and the sender’s general location. NGL’s bait-and-switch tactic prompted many consumers to complain, which NGL executives laughed off, dismissing such users as “suckers.”
In addition, the complaint alleges that NGL violated the Restore Online Shoppers’ Confidence Act by failing to adequately disclose and obtain consumers’ consent for such recurring charges. Many users who signed up for NGL Pro were unaware that it was a recurring weekly charge, according to the complaint.
But just because the app was awful, the founders behind it were awful, and it seems clear they violated some laws, does not mean any and all remedies are open and appropriate.
And here, the FTC is pushing for some remedies that are likely unconstitutional. First off, the order requires age verification and the blocking of all kids under the age of 18.
- Required to implement a neutral age gate that prevents new and current users from accessing the app if they indicate that they are under 18 and to delete all personal information that is associated with the user of any messaging app unless the user indicates they are over 13 or NGL’s operators obtain parental consent to retain such data;
But, again, most courts have repeatedly made clear that government-mandated age verification or age-gating is unconstitutional on the internet. The Supreme Court just agreed to hear yet another case on this point, but it’s still a weird choice for the FTC to demand this here, knowing that the issue could end up before a hostile Supreme Court.
On top of that, as Elizabeth Nolan Brown points out at Reason, it appears that part of what the FTC is mad about regarding NGL is simply the idea that offering anonymous communications tools to kids is inherently harmful behavior that shouldn’t be allowed:
“The anonymity provided by the app can facilitate rampant cyberbullying among teens, causing untold harm to our young people,” Los Angeles District Attorney George Gascón said in a statement.
“NGL and its operators aggressively marketed its service to children and teens even though they were aware of the dangers of cyberbullying on anonymous messaging apps,” the FTC said.
Of course, plenty of apps allow for anonymity. That this has the potential to lead to bullying can’t be grounds for government action.
So, yes, I think the FTC can call out violating COPPA and take action based on that, but I don’t see how they can legitimately force the app to age gate at a time when multiple courts have already said the government cannot mandate such a thing. And they shouldn’t be able to claim that anonymity itself is somehow obviously problematic, especially at a time when studies often suggest the opposite for some kids who need their privacy.
The other problematic bit is that the FTC is mad that NGL may have overstated their content moderation abilities. The FTC seems to think that it can legally punish the company for not living up to the FTC’s interpretation of NGL’s moderation promises. From the complaint itself:
Defendants represent to the public the NGL App is safe for children and teens to use because Defendants utilize “world class AI content moderation” including “deep learning and rule-based character pattern-matching algorithms” in order to “filter out harmful language and bullying.” Defendants further represent that they “can detect the semantic meaning of emojis, and [] pull[] specific examples of contextual emoji use” allowing them to “stay on trend, [] understand lingo, and [] know how to filter out harmful messages.”
In reality however, Defendants’ representations are not true. Harmful language and bullying, including through the use of emojis, are commonplace in the NGL App—a fact of which Defendants have been made aware through numerous complaints from users and their parents. Media outlets have reported on these issues as well. For example, one media outlet found in its testing of the NGL App that the App’s “language filters allowed messages with more routine bullying terms . . . including the phrases ‘You’re fat,’ ‘Everyone hates you,’ ‘You’re a loser’ and ‘You’re ugly.’” Another media outlet reported that it had found that “[t]hreatening messages with emojis that could be considered harmful like the knife and dagger icon were not blocked.” Defendants reviewed several of these media articles, yet have continued to represent that the NGL App is “safe” for children and teens to use given the “world class AI content moderation” that they allegedly employ.
I recognize that some people may be sympathetic to the FTC here. It definitely looks like NGL misrepresented the power of their moderation efforts. But there have been many efforts by governments or angry users to sue companies whenever they feel that they have not fully lived up to public marketing statements regarding their moderation.
People have sued companies like Facebook and Twitter for being banned, arguing that public statements about “free speech” by those companies meant that they shouldn’t have been banned. How is this any different from that?
And the FTC’s claim here (that if you promise your app is “safe” and someone can then find “harmful language and bullying” on the platform, then you’ve violated the law) just flies in the face of everything we just heard from the Supreme Court in the Moody case.
The FTC doesn’t get to be the final arbiter of whether or not a company successfully moderates away unsafe content. If it did, that power would be subject to widespread abuse. Just think of whichever presidential candidate you dislike the most, and what would happen if they could have their FTC investigate any platform they dislike for not fully living up to their public promises on moderation.
It would be a dangerous, free speech-attacking mess.
Yes, NGL seems like a horrible company, run by horrible people. But go after them on the basics: the data collection in violation of COPPA and the sneaky subscription charging. Not things like age verification and content moderation.
Filed Under: age gating, anonymity, content moderation, coppa, ftc
Companies: ngl
Bipartisan Group Of Senators Introduce New Terrible ‘Protect The Kids Online’ Bill
from the not-another-one dept
Apparently, the world needs even more terrible bills that let ignorant senators grandstand to the media about how they’re “protecting the kids online.” There’s nothing more serious to work on than that. The latest bill comes from Senators Brian Schatz and Ted Cruz (with assists from Senators Chris Murphy, Katie Britt, Peter Welch, Ted Budd, John Fetterman, Angus King, and Mark Warner). This one is called the “Kids Off Social Media Act” (KOSMA), and it’s an unconstitutional mess built on a long list of debunked and faulty premises.
It’s especially disappointing to see this from Schatz. A few years back, I know his staffers would regularly reach out to smart people on tech policy issues to try to understand the potential pitfalls of the regulations he was pushing. Either he’s no longer doing this, or he is deliberately ignoring their expert advice. I don’t know which would be worse.
The crux of the bill is pretty straightforward: it would be an outright ban on social media accounts for anyone under the age of 13. As many people will recognize, we kinda already have a “soft” version of that because of COPPA, which puts much stricter rules on sites directed at those under 13. Because most sites don’t want to deal with those stricter rules, they officially limit account creation to those over the age of 13.
In practice, this has been a giant mess. Years and years ago, Danah Boyd pointed this out, talking about how the “age 13” bit is a disaster for kids, parents, and educators. Her research showed that all this generally did was lead parents to teach their kids that “it’s okay to lie,” since parents wanted their kids to use social media tools to communicate with grandparents. Making that “soft” ban a hard ban is going to create a much bigger mess and prevent all sorts of useful and important communications (which, yeah, is a 1st Amendment issue).
The reasons Schatz puts forth for the bill are just… wrong.
No age demographic is more affected by the ongoing mental health crisis in the United States than kids, especially young girls. The Centers for Disease Control and Prevention’s Youth Risk Behavior Survey found that 57 percent of high school girls and 29 percent of high school boys felt persistently sad or hopeless in 2021, with 22 percent of all high school students—and nearly a third of high school girls—reporting they had seriously considered attempting suicide in the preceding year.
Gosh. What was happening in 2021 with kids that might have made them feel hopeless? Did Schatz and crew simply forget that most kids were under lockdown and physically isolated from friends for much of 2021? And that there were plenty of other stresses, including the deaths of millions of people, family members among them? Noooooo. Must be social media!
Studies have shown a strong relationship between social media use and poor mental health, especially among children.
Note the careful word choice here: “strong relationship.” They won’t say a causal relationship because studies have not shown that. Indeed, as the leading researcher in the space has noted, there continues to be no real evidence of any causal relationship. The relationship appears to work the other way: kids who are dealing with poor mental health and who are desperate for help turn to the internet and social media because they’re not getting help elsewhere.
Maybe offer a bill that gives kids access to more resources to help them with their mental health, rather than taking away the one place they feel comfortable going? Maybe?
From 2019 to 2021, overall screen use among teens and tweens (ages 8 to 12) increased by 17 percent, with tweens using screens for five hours and 33 minutes per day and teens using screens for eight hours and 39 minutes.
I mean, come on Schatz. Are you trolling everyone? Again, look at those dates. WHY DO YOU THINK that screen time might have increased 17% for kids from 2019 to 2021? COULD IT POSSIBLY BE that most kids had to do school via computers and devices at home, because there was a deadly pandemic making the rounds?
Maybe?
Did Schatz forget that? I recognize that lots of folks would like to forget the pandemic lockdowns, but this seems like a weird way to manifest that.
I mean, what a weird choice of dates. I’m honestly kind of shocked that the increase was only 17%.
Also, note that the data presented here isn’t about an increase in social media use. It could very well be that the 17% increase was Zoom classes.
Based on the clear and growing evidence, the U.S. Surgeon General issued an advisory last year, calling for new policies to set and enforce age minimums and highlighting the importance of limiting the use of features, like algorithms, that attempt to maximize time, attention, and engagement.
Wait. You mean the same Surgeon General’s report that denied any causal link between social media and mental health (which you falsely claim has been proved) and noted just how useful and important social media is to many young people?
From that report, which Schatz misrepresents:
Social media can provide benefits for some youth by providing positive community and connection with others who share identities, abilities, and interests. It can provide access to important information and create a space for self-expression. The ability to form and maintain friendships online and develop social connections are among the positive effects of social media use for youth. These relationships can afford opportunities to have positive interactions with more diverse peer groups than are available to them offline and can provide important social support to youth. The buffering effects against stress that online social support from peers may provide can be especially important for youth who are often marginalized, including racial, ethnic, and sexual and gender minorities. For example, studies have shown that social media may support the mental health and well-being of lesbian, gay, bisexual, asexual, transgender, queer, intersex and other youths by enabling peer connection, identity development and management, and social support. Seven out of ten adolescent girls of color report encountering positive or identity-affirming content related to race across social media platforms. A majority of adolescents report that social media helps them feel more accepted (58%), like they have people who can support them through tough times (67%), like they have a place to show their creative side (71%), and more connected to what’s going on in their friends’ lives (80%). In addition, research suggests that social media-based and other digitally-based mental health interventions may also be helpful for some children and adolescents by promoting help-seeking behaviors and serving as a gateway to initiating mental health care.
Did Schatz’s staffers just, you know, skip over that part of the report or nah?
The bill also says that companies need to not allow algorithmic targeting of content to anyone under 17. This is also based on a widely believed myth that algorithmic content is somehow problematic. No studies have legitimately shown that of current algorithms. Indeed, a recent study showed that removing algorithmic targeting leads to people being exposed to more disinformation.
Is this bill designed to force more disinformation on kids? Why would that be a good idea?
Yes, some algorithms can be problematic! About a decade ago, algorithms that tried to optimize solely for “engagement” definitely created some bad outcomes. But it’s been a decade since most such algorithms have been designed that way. On most social media platforms, the algorithms are designed in other ways, taking into account a variety of different factors, because they know that optimizing just on engagement leads to bad outcomes.
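To make that design difference concrete, here is a minimal, purely hypothetical sketch in Python. Every signal name and weight below is made up for illustration and does not correspond to any actual platform’s system; the point is only the shape of the tradeoff between a pure engagement-maximizing ranker and one that blends in other factors:

```python
# Hypothetical ranking sketch -- illustrative only, not any platform's real algorithm.

def engagement_only_score(post):
    # The old approach: rank purely on predicted engagement
    # (clicks, comments, shares), which tends to reward outrage bait.
    return post["predicted_engagement"]

def blended_score(post, weights=None):
    # A sketch of the multi-signal approach: engagement is just one input,
    # balanced against quality, diversity, and integrity signals.
    weights = weights or {
        "predicted_engagement": 0.4,
        "predicted_quality": 0.3,    # e.g. "was this worth your time?" survey predictions
        "source_diversity": 0.2,     # avoid showing the same account/topic repeatedly
        "integrity_penalty": -0.5,   # downrank content flagged by trust & safety classifiers
    }
    return sum(weights[k] * post.get(k, 0.0) for k in weights)

# Example: a high-engagement but low-quality, flagged post loses to a
# moderately engaging, higher-quality one under the blended score.
posts = [
    {"id": "rage_bait", "predicted_engagement": 0.9, "predicted_quality": 0.1,
     "source_diversity": 0.2, "integrity_penalty": 0.8},
    {"id": "useful_post", "predicted_engagement": 0.5, "predicted_quality": 0.8,
     "source_diversity": 0.7, "integrity_penalty": 0.0},
]
print(sorted(posts, key=engagement_only_score, reverse=True)[0]["id"])  # rage_bait
print(sorted(posts, key=blended_score, reverse=True)[0]["id"])          # useful_post
```

Once the score is a weighted blend like this, a high-engagement but flagged or low-quality post no longer automatically wins the feed, which is the design shift the paragraph above describes.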
Then the bill tacks on Cruz’s bill to require schools to block social media. There’s an amusing bit when reading the text of that part of the law. It says that you have to block social media on “federally funded networks and devices” but also notes that it does not prohibit “a teacher from using a social media platform in the classroom for educational purposes.”
But… how are they going to access those if the school is required by law to block access to such sites? Most schools are going to do a blanket ban, and teachers are going to be left to do what? Show kids useful YouTube science videos on their phones? Or maybe some schools will implement a special teacher code that lets them bypass the block. And by the end of the first week of school, half the kids in the school will likely know that code.
What are we even doing here?
Schatz has a separate page hyping up the bill, and it’s even dumber than the first one above. It repeats some of the points above, though this time linking to Jonathan Haidt, whose work has been trashed left, right, and center by actual experts in this field. And then it gets even dumber:
Big Tech knows it’s complicit – but refuses to do anything about it…. Moreover, the platforms know about their central role in turbocharging the youth mental health crisis. According to Meta’s own internal study, “thirty-two percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse.” It concluded, “teens blame Instagram for increases in the rate of anxiety and depression.”
This is not just misleading, it’s practically fraudulent misrepresentation. The study Schatz is citing is one that was revealed by Frances Haugen. As we’ve discussed, it was done because Meta was trying to understand how to do better. Indeed, the whole point of that study was to see how teens felt about using social media in 12 different categories. Meta found that most boys felt neutral or better about themselves in all 12 categories. For girls, it was 11 out of 12. It was only in one category, body image, where the split was more pronounced. 32% of girls said that it made them feel worse. Basically the same percentage said it had no impact, or that it made them feel better.
Also, look at that slide’s title. The whole point of this study was to figure out if they were making kids feel worse in order to look into how to stop doing that. And now, because grandstanders like Schatz are falsely claiming that this proves they were “complicit” and “refuse to do anything about it,” no social media company will ever do this kind of research again.
Because, rather than proactively looking to see if they’re creating any problems that they need to try to fix, Schatz and crew are saying “simply researching this is proof that you’re complicit and refuse to act.”
Statements like this basically ensure that social media companies stick their heads in the sand, rather than try to figure out where harm might be caused and take steps to stop that harm.
Why would Schatz want to do that?
That page then also falsely claims that the bill does not require age verification. This is a silly two-step that lying politicians trot out every time they do this. Does the bill directly mandate age verification? No. But by making the penalties super serious and costly for failing to stop kids from accessing social media, it will obviously drive companies to introduce stronger age verification measures, which are inherently dangerous and an attack on privacy.
Perhaps Schatz doesn’t understand this, but it’s been widely discussed by many of the experts his staff used to talk to. So, really, he has no excuse.
The FAQ also claims that the bill will pass constitutional muster, while at the same time admitting that they know there will be lawsuits challenging it:
Yes. As, for example, First Amendment expert Neil Richards explains, “[i]nstead of censoring the protected expression present on these platforms, the act takes aim at the procedures and permissions that determine the time, place and manner of speech for underage consumers.” The Supreme Court has long held that the government has the right to regulate products to protect children, including by, for instance, restricting the sale of obscene content to minors. As Richards explains: “[i]n the same way a crowded bar or nightclub is no place for a child on their own”—or in the way every state in the country requires parental consent if it allows a minor to get a tattoo—“this rule would set a reasonable minimum age and maturity limitation for social media customers.”
While we expect legal challenges to any bill aimed at regulating social media companies, we are confident that this content-neutral bill will pass constitutional muster given the government interests at play.
There are many reasons why this is garbage under the law, but rather than breaking them all down (we’ll wait for judges to explain it in detail), I’ll just point out that the major tell is in the law itself. In the definition of what a “social media platform” is in the law, there is a long list of exceptions of what the law does not cover. It includes a few “moral panics of yesteryear” that gullible politicians tried to ban, only to be found to have violated the First Amendment in the process.
It explicitly carves out video games and content that is professionally produced, rather than user-generated.
Remember the moral panics about video games and TV destroying kids’ minds? Yeah. So this child protection bill is quick to say “but we’re not banning that kind of content!” Because whoever drafted the bill recognized that the Supreme Court has already made it clear that politicians can’t do that for video games or TV.
So, instead, they have to pretend that social media content is somehow on a whole different level.
But it’s not. It’s still the government restricting access to content. They’re going to pretend that there’s something unique and different about social media, and that they’re not banning the “content” but rather the “place” and “manner” of accessing that content. Except that’s laughable on its face.
You can see that in the quote above, where Schatz does the fun dance of first saying “it’s okay to ban obscene content to minors” and then pretending that’s the same as restrictions on access to a bar (it’s not). One is about the content, and one is about a physical place. Social media is all about the content, and it’s not obscene content (which is already an exception to the First Amendment).
And the “parental consent” for tattoos… I mean, what the fuck? Literally four questions above in the FAQ where that appears, Schatz insists that his bill has nothing about parental consent. And then he tries to defend it by claiming it’s no different than parental consent laws?
The FAQ also claims this:
This bill does not prevent LGBTQ+ youth from accessing relevant resources online and we have worked closely with LGBTQ+ groups while crafting this legislation to ensure that this bill will not negatively impact that community.
I mean, it’s good you talked to some experts, but I note that most of the LGBTQ+ groups I’m aware of are not on your list of “groups supporting the bill” on the very same page. That absence stands out.
And, again, the Surgeon General’s report that you misleadingly cited elsewhere highlights how helpful social media can be to many LGBTQ+ youth. You can’t just say “nah, it won’t harm them” without explaining why all those benefits that have been shown in multiple studies, including the Surgeon General’s report, somehow don’t get impacted.
There’s a lot more, but this is just a terrible bill that would create a mess. And, I’m already hearing from folks in DC that Schatz is trying to get this bill added to the latest Christmas tree of a bill to reauthorize the FAA.
It would be nice if we had politicians looking to deal with the actual challenges facing kids these days, including the lack of mental health support for those who really need it. Instead, we get unconstitutional grandstanding nonsense bills like this.
Everyone associated with this bill should feel ashamed.
Filed Under: 1st amendment, age verification, brian schatz, chris murphy, coppa, john fetterman, jonathan haidt, katie britt, kids, kids off social media act, kosma, mark warner, peter welch, social media, ted cruz, teens
Congress Pretends It’s Fixed All The Problems With KOSA; It Hasn’t
from the you-can't-fix-what's-fundamentally-broken dept
On Wednesday, the Senate revealed an amended version of the Kids Online Safety Act (KOSA) for today’s hearing over the bill. One would hope that, with so much public pushback over the bill, they might do something crazy like actually trying to fix it.
That is, apparently, way too much to ask.
Earlier today, the Senate held its markup on the bill, in which numerous Senators from both parties pretended they had listened to the concerns people had about KOSA and “fixed” them. They did not. There was misleading talk about “protecting the children” (the bill will put them at greater risk) and a bunch of other nonsense. Lots of Senators then tried to amend KOSA further to add their own pet (usually unconstitutional) “save the children” bills to it.
Then they moved forward with the bill so that it might come to a floor vote if Chuck Schumer decides that it’s really time to destroy the internet.
TechFreedom’s Ari Cohn has an incredibly thorough breakdown of the current version of the bill and its myriad problems. For example, in response to concerns that KOSA would require age verification, the amended KOSA basically adds a “nuh uh, we have no such requirement” into the bill. But it then creates an impossibly (and unconstitutionally) vague standard:
As we wrote last year, KOSA’s original language would have effectively required covered platforms to verify the age and thus the identity of every user. KOSA’s revised text attempts to avoid this First Amendment problem by requiring covered platforms to protect only those users that it has “actual knowledge or knowledge fairly implied on the basis of objective circumstances” are minors. Furthermore, new rules of construction say that the bill does not require platforms to collect age-related data or perform age verification. While doubtless well-intentioned, these changes merely trade a clear, explicit mandate for a vague, implicit one; the unconstitutional effect on anonymous expression will be the same.
It is entirely unclear what constitutes “knowledge fairly implied” that a particular user is a minor. In an enforcement action, the Federal Trade Commission must consider the “totality of the circumstances,” which includes, but is not limited to, “whether the operator, using available technology, exercised reasonable care.” Vague as this provision is, it apparently does not apply to civil suits brought by state attorneys general, which could give them even more unpredictable discretion.
Basically, the new KOSA softens the language that previously appeared to effectively force companies into adopting age verification, and instead says that the FTC gets to set the standards for what “knowledge fairly implied” means regarding whether there are children on the site. That hands a massive, and easily abused, power to the FTC.
As Cohn notes, this provision might also be a sort of ticking time-bomb for encryption, allowing the FTC to say the use of encryption is an evasion tool, and also to extend its authority way beyond what’s reasonable:
Thus, one can only speculate as to how this key term would be interpreted. This uncertainty alone makes age verification the most risk-averse, “reasonable” course of action for platforms—especially with respect to end-to-end-encrypted services. Both the FTC and state attorneys general will likely draw interpretive cues from COPPA, which requires parental consent for the “collection, use, or disclosure of personal information from children” when a service has actual knowledge that a user is under 13 years old or when the service is “directed to” children under 13—effectively, the service has constructive knowledge that its users are highly likely to be minors. To date, COPPA has had negligible effects on adults because services directed to children under 13 are unlikely to be used by anyone other than children due to their limited functionality, effectively mandated by COPPA. But extending COPPA’s framework to sites “directed to” older teens would significantly burden the speech of adults because the social media services and games that older teens use are largely the same ones used by adults.
The FTC recently began to effectively extend COPPA to cover teens. Whether or not the FTC Act gives the Commission such authority, this example illustrates what the FTC—and state attorneys general—might do with the broad language of KOSA. In a 2022 enforcement action, the FTC alleged that Epic Games had committed an unfair trade practice by allowing users of its popular Fortnite game to chat with other users, despite knowing that “a third of Fortnite players, based on social media data, are teens aged 13-17.” While the complaint focused on Epic Games’ use of its audience composition for marketing purposes, its logic could establish “knowledge fairly implied” under KOSA. This complaint was remarkable not only for extending COPPA to teens but also because the FTC had effectively declared a threshold above which it would consider a site “directed to” them—something the FTC had never done for sites “directed to” minors under COPPA.
The revised bill also fails to fix KOSA’s biggest, worst, and central problem: the “duty of care” concept. As we’ve been explaining for years, the “duty of care” concept is dangerous. It is, effectively, the way the Chinese Great Firewall worked originally. Rather than telling websites what couldn’t be posted, they were basically told that if they allowed anything later deemed to be bad, then they would get fined. The end result? Massive overblocking.
The “duty of care” concept is effectively the same mechanism. Because, if anyone using your site somehow gets harmed (loosely defined), then blame can flow back to the site for not preventing it, even if the website had no visibility or understanding or direct connection to that harm. Instead, it will be argued that the site had a magical “duty of care” to see into the future to figure out how some random usage might have resulted in harm, and to have magically gone back in time to prevent it.
This is nothing but scapegoat bait. Anyone does anything bad and talks about it on the internet? You get to sue the internet companies! The end result, of course, is that internet companies are going to sterilize the shit out of the internet, taking down any conversation about anything controversial, or that might later be tied to some sort of random harm.
As Cohn explains:
This duty of care directly requires platforms to protect against the harmful effects of speech, the overwhelming majority of which is constitutionally protected. As we explained in last year’s letter, courts have held that imposing a duty to protect against harms caused by speech violates the First Amendment.
The unconstitutionality of KOSA’s duty of care is highlighted by its vague and unmeetable nature. Platforms cannot “prevent and mitigate” the complex psychological issues that arise from circumstances across an individual’s entire life, which may manifest in their online activity. These circumstances mean that material harmful to one minor may be helpful or even lifesaving to another, particularly when it concerns eating disorders, self-harm, drug use, and bullying. Minors are individuals, with differing needs, emotions, and predispositions. Yet KOSA would require platforms to undertake an unworkable one-size-fits-all approach to deeply personal issues, thus ultimately serving the best interests of no minors.
This is such a key point. Studies have shown, repeatedly, that content around eating disorders and mental health impacts people in very different ways. For some people, some kinds of content can be helpful. But the very same kind of content can be harmful for others. How do you prevent that? The answer is that you prevent all “controversial” content, which does significant harm to the people who were previously getting help through that content.
For example, within the eating disorder world, it’s common for services to intersperse content about healthy eating practices, or guiding people towards sources of information intended to help them find help. But some people argue that even seeing that content could trigger some to engage in dangerous activity. So, now even something as simple as a PSA about eating properly could fail the test under KOSA.
And, of course, the whole thing is wildly unconstitutional:
The Supreme Court has upheld the FCC’s authority to regulate indecency on broadcast media, reasoning that children have easy access to broadcasts, and the nature of the medium makes it impossible to “completely protect . . . from unexpected program content.” But even so, the courts have consistently held that imposing a duty of care on broadcasters to protect minors would violate the First Amendment. There can be no doubt that imposing a duty of care against online platforms, over which the government has far less regulatory authority, is still more obviously unconstitutional.
I know that the Senators insist this amended version of KOSA magically “fixed” all the problems, but the problems seem fundamental to the bill — and to the grandstanding politicians pushing it.
Filed Under: 1st amendment, coppa, duty of care, ftc, knowledge, knowledge fairly implied, kosa
NetChoice Challenges Yet Another Ridiculously Bad State Internet Law
from the verify-this dept
NetChoice has been quite busy the last few years suing to stop a wide variety of terrible state laws designed to mess up parts of the internet. It took on Florida’s social media content moderation law and won (twice). It took on Texas’ social media content moderation law and won at the district court, and absolutely ridiculously lost at the 5th Circuit (something that we now need to hope the Supreme Court will fix next term). It sued California over its Age Appropriate Design Code.
And, now it’s sued Arkansas over its terrible, ridiculous, and extremely problematic social media age verification law. As we noted, Arkansas passed a bill to remove age verification for kids working in meat packing plants in the same session that it insisted kids needed age verification to use social media. Just incredible.
The NetChoice complaint lays out how we’ve been here before, with moral panic laws about “protecting the children” from this or that new media, and every time they’ve been struck down as unconstitutional infringements on people’s rights.
Arkansas Senate Bill 396 is the latest attempt in a long line of government efforts to restrict new forms of expression based on concerns that they harm minors. Books, movies, television, rock music, video games, and the Internet have all been accused in the past of exposing youth to content that has deleterious effects. But the U.S. Supreme Court has repeatedly held that, while the government undoubtedly possesses “legitimate power to protect children from harm,” “that does not include a free-floating power to restrict the ideas to which children may be exposed.” Brown v. Entm’t Merchants Ass’n, 564 U.S. 786, 794-95 (2011). “Speech that is neither obscene as to youths nor subject to some other legitimate proscription cannot be suppressed solely to protect the young from ideas or images that a legislative body thinks unsuitable for them.” Erznoznik v. City of Jacksonville, 422 U.S. 205, 213-14 (1975).
Accordingly, government efforts to restrict minors from accessing such materials, including by requiring parental consent to do so, have reliably and repeatedly been struck down, especially when (as is often the case) they impede the First Amendment rights of adults too. See, e.g., Brown, 564 U.S. at 794-95 (invalidating law prohibiting distribution of violent video games to minors without parental consent); Ashcroft v. Am. Civil Liberties Union, 542 U.S. 656 (2004) (enjoining law restricting access to sexually explicit materials on the Internet); Reno v. Am. Civil Liberties Union, 521 U.S. 844 (1997) (invalidating earlier law enacted to protect minors from “indecent” and “patently offensive” communications on the Internet); United States v. Playboy Entm’t Grp., Inc., 529 U.S. 803 (2000) (invalidating law restricting sexual programing on television); Erznoznik, 422 U.S. at 213-14 (invalidating law prohibiting display of movies containing nudity at drive-in theaters); Interactive Digital Software Ass’n v. St. Louis Cnty., 329 F.3d 954 (8th Cir. 2003) (invalidating ordinance prohibiting distribution of violent video games to minors without parental consent); Video Software Dealers Ass’n v. Webster, 968 F.2d 684 (8th Cir. 1992) (invalidating law prohibiting distribution to minors of videos depicting certain types of violence).
And then it explains the many constitutional issues with this bill in particular:
S.B. 396 should meet the same fate. The Act purports to protect minors from alleged harmful effects of “social media” by requiring the companies that operate these services to verify that any person seeking to create an account is at least 18 years old or has parental consent to create an account. By restricting the access of minors—and adults (who now have to prove their age)—to these ubiquitous online services, Arkansas has “with one broad stroke” burdened access to what for many are the principal sources for speaking and listening, learning about current events, “and otherwise exploring the vast realms of human thought and knowledge.” Packingham v. North Carolina, 582 U.S. 98, 107 (2017)
Worse still, the Act does so by drawing a slew of content-, speaker-, and viewpoint-based distinctions—making clear that its purpose and effect is “to restrict the ideas to which children may be exposed” and “protect the young from ideas or images that a legislative body thinks unsuitable for them.” Brown, 564 U.S. at 794-95. S.B. 396 restricts access to a website that permits users to share videos of their newest dance moves or other acts of entertainment, but not to a website that provides career development opportunities. Minors may readily access websites that provide news, sports, entertainment, and online shopping, but not those that allow them to upload their favorite recipes or pictures of their latest travels or athletic exploits. Moreover, the Act does not even restrict access to supposedly harmful content in any sensible way: It arguably applies to Facebook and Twitter, but not Mastadon, Discord, BeReel, Gab, Truth Social, Imgur, Brainly, DeviantArt, or Twitch. The Act thus appears to restrict access to political expression on Twitter and photography on Instagram but places no restrictions on the exact same expression on Truth Social or DeviantArt. While the state might think that some of those distinctions are sensible, the Supreme Court has long recognized that it is not the role of the government to decide what expressive materials minors should be allowed to access. The First Amendment leaves “these judgments … for the individual to make, not for the Government to decree.” Playboy, 529 U.S. at 818. And compounding the problems, the Act’s definitions of “social media company” and “social media platform” are hopelessly vague, leaving companies to guess whether they are regulated by the Act. The Act is also preempted in part by the Child Online Privacy Protection Act and contravenes the Commerce Clause too.
I won’t go through the complaint beat by beat, but this paragraph stood out to me:
There has long been some speech that many adults would prefer minors not hear. Some are opposed to minors reading The Adventures of Huckleberry Finn because it contains racial epithets. Others object to minors playing Grand Theft Auto because it depicts violence and criminality. Even so, the government typically cannot require certain works to be kept in an “adults only” section of the mall just because it deems them controversial, or require minors to receive permission from their parents before buying works that carry messages that the government deems too sophisticated for them. See, e.g., Brown, 564 U.S. at 786. Even when it comes to efforts to protect minors, the First Amendment commands that “esthetic and moral judgments about art and literature” and other forms of speech and expression “are for the individual to make, not for the Government to decree.”
Your moral panic is no excuse for the government to decide what is right for kids to read and when. It’s an issue for parents directly in some cases, and in other areas, it needs to be recognized that kids have rights themselves. Laws like these insert the government instead, and that’s a problem.
One hopes that the Supreme Court will eventually put a stop to all of these crazy state internet laws (which come from both red and blue states). In the meantime, though, state legislators seem to be pushing each other aside to see who can destroy the rights of kids (and adults) faster and more ridiculously.
Filed Under: age verification, arkansas, coppa, kids, protect the children, sb 396
Companies: netchoice
Epic To Pay $520 Million Over Deceptive Practices To Trick Kids
from the before-we-pass-new-laws... dept
Maybe, just maybe, before we rush to pass questionable new laws about “protecting children online,” we should look to make use of the old ones? The Children’s Online Privacy Protection Act (COPPA) has been in place for years, and it has problems, but so many companies ignore it. I’ve mentioned in the past how I once walked around a part of CES that had a bunch of startups focused on offering services to kids, and a DC lawyer I was with made sure to ask each one what their COPPA compliance strategy was… and we just got blank stares.
On Monday, video game giant Epic agreed to pay $520 million in two separate fines to the FTC for violating COPPA with some pretty deceptive behavior targeted at kids. First up were your garden variety privacy violations in collecting data on kids under 13 without obtaining parental consent:
- Violated COPPA by Failing to Notify Parents, Obtain Consent: The FTC alleged that Epic was aware that many children were playing Fortnite—as shown through surveys of Fortnite users, the licensing and marketing of Fortnite toys and merchandise, player support and other company communications—and collected personal data from children without first obtaining parents’ verifiable consent. The company also required parents who requested that their children’s personal information be deleted to jump through unreasonable hoops, and sometimes failed to honor such requests.
- Default settings harm children and teens: Epic’s settings enable live on-by-default text and voice communications for users. The FTC alleges that these default settings, along with Epic’s role in matching children and teens with strangers to play Fortnite together, harmed children and teens. Children and teens have been bullied, threatened, harassed, and exposed to dangerous and psychologically traumatizing issues such as suicide while on Fortnite.
As the FTC notes, Epic employees knew this was a problem, but the company didn’t fix things. In fact, when it finally did create an option to turn the voice chat off, “Epic made it difficult for users to find.”
Perhaps more concerning were the deceptive practices.
- Used dark patterns to trick users into making purchases: The company has deployed a variety of dark patterns aimed at getting consumers of all ages to make unintended in-game purchases. Fortnite’s counterintuitive, inconsistent, and confusing button configuration led players to incur unwanted charges based on the press of a single button. For example, players could be charged while attempting to wake the game from sleep mode, while the game was in a loading screen, or by pressing an adjacent button while attempting simply to preview an item. These tactics led to hundreds of millions of dollars in unauthorized charges for consumers.
- Charged account holders without authorization: Children and other users who play Fortnite can purchase in-game content such as cosmetics and battle passes using Fortnite’s V-Bucks. Up until 2018, Epic allowed children to purchase V-Bucks by simply pressing buttons without requiring any parental or card holder action or consent. Some parents complained that their children had racked up hundreds of dollars in charges before they realized Epic had charged their credit card without their consent. The FTC has brought similar claims against companies such as Amazon, Apple, and Google for billing consumers millions of dollars for in-app purchases made by children while playing mobile app games without obtaining their parents’ consent.
- Blocked access to purchased content: The FTC alleged that Epic locked the accounts of customers who disputed unauthorized charges with their credit card companies. Consumers whose accounts have been locked lose access to all the content they have purchased, which can total thousands of dollars. Even when Epic agreed to unlock an account, consumers were warned that they could be banned for life if they disputed any future charges.
I generally dislike the term “dark patterns,” as it’s frequently used to basically just mean “making a service useful in a way that someone else dislikes.” But, uh, yeah, making it way too easy for kids to rack up huge bills on their parents’ credit cards? That seems super sketchy.
For this behavior, Epic will pay $245 million, which the government will use to provide rebates for Fortnite players.
This all seems like the kind of thing that the FTC should be doing, rather than some of the other sillier things it’s been focused on of late. And, also, again, suggests that maybe we don’t need these new, badly drafted laws, but instead should just make sure the FTC is able to better enforce existing laws.
Filed Under: children, coppa, dark patterns, fees, fortnite, ftc, kids, parents, privacy
Companies: epic games
Dear California Lawmakers: How The Hell Can I Comply With Your New Age-Appropriate Design Code?
from the stop-the-nonsense dept
I really don’t have time for this kind of thing, but I wanted to pass along that it appears that the California legislature is very, very close to passing AB 2273, “The California Age-Appropriate Design Code Act.” As far as I can tell, it has strong support in the legislature and very little opposition. And that’s incredibly dangerous, because the bill is not just extremely problematic, but at the same time it’s also impossible to comply with. I hope Governor Newsom will veto it, but it seems unlikely.
Earlier this year, Professor Eric Goldman provided a long and detailed analysis of the many, many problems with the bill. As far as I can tell, since then, the California legislature has made a few adjustments to the bill, none of which fix any of Professor Goldman’s concerns (one pretends to), and some of which make them worse — and also create serious 1st Amendment problems for this bill. Oh, yeah, and they also carved out the one set of businesses with the longest record of actually abusing consumer privacy: telcos and broadband providers. Hilarious. It is astounding to me that the legislature appears to have just wholly ignored all of Goldman’s clearly laid out and explained problems with the bill.
This is a bill that, as is all too typical from politicians these days, insists that there’s a problem — without the evidence to back it up — and then demands an impossible solution that wouldn’t actually fix the problem if it were a problem. It’s the ultimate in moral panic lawmaking.
The bill is a “for the children” bill in that it has lots of language in there claiming that this is about protecting children from nefarious online services that create “harm.” But, as Goldman makes clear, the bill targets everyone, not just children, because it has ridiculously broad definitions.
We already have a federal law that seeks to protect children’s data online, the Children’s Online Privacy Protection Act (COPPA). It has serious problems, but this bill doesn’t fix any of those problems, it treats those problems as features and expands on them massively. COPPA — sensibly — applies to sites that are targeted towards those under 13. This has had some problematic side effects, including that every major site restricts the age of their users to over 13. And that’s even though many of those sites are useful for people under the age of 13 — and the way everyone deals with this is by signing up their children for these services and lying about their age. Literally a huge impact of COPPA is teaching children to lie. Great stuff.
But AB 2273 doesn’t limit its impact to sites targeting those under 13. It targets any business with an online service “likely to be accessed by children,” with children defined as “a consumer or consumers who are under 18 years of age.” I’m curious if that means someone who is not buying (i.e., “consuming”) anything doesn’t count? Most likely it will mean consuming as in “accessing / using the service.” And that’s… ridiculous.
Because EVERY service is likely to have at least someone under the age of 18 visit it.
After Goldman’s complaints, the California legislature did add in some clarifying language which awkwardly implies a single person under the age of 18 won’t trigger it, but that’s not at all clear, and the vagueness means everyone is at risk, and every site could be in trouble. The added language says that “likely to be accessed by children” means that “it is reasonable to expect, based on the following indicators, that the online service, product or feature would be accessed by children.” It then lists out a bunch of “indicators” that basically describe sites targeting children. But if they mean it to only apply to such sites, they should have said so explicitly, a la COPPA. Instead, the language that remains in the bill is still that “it is reasonable to expect… that the online service, product, or feature would be accessed by children.”
Let’s use Techdirt as an example. We’re not targeting kids, but I’m going to assume that some of you who visit the site are under the age of 18. Over the years, I’ve had quite a few high school students reach out to me about what I’ve written — usually based on their interest in internet rights. And that should be a good thing. I think it’s great when high schoolers take an active interest in civil liberties and the impacts of innovation — but now that’s a liability for me. We’re not targeting kids, but some may read the site. My kids might read the site because they’re interested in what their father does. Also, hell, the idea that all kids under 18 are the same and need the same level of protection is ludicrous. High schoolers should be able to read my site without difficulty, but I really don’t think elementary school kids are checking in on the latest tech policy fights or legal disputes.
Given that, it seems that, technically, Techdirt is under the auspices of this law and is now required to take all sorts of ridiculous steps to “protect” the children (though, not to actually protect anyone). After all, it is “reasonable” for me to expect that the site would be accessed by some people under the age of 18.
According to the law, I need to “estimate the age of child users with a reasonable level of certainty.” How? Am I really going to have to start age verifying every visitor to the site? It seems like I risk serious liability in not doing so. And then what? Now California has just created a fucking privacy nightmare for me. I don’t want to find out how old all of you are and then track that data. We try to collect as little data about all of you as possible, but under the law that puts me at risk.
Yes, incredibly, a bill that claims to be about protecting data, effectively demands that I collect way more personal data than I ever want to collect. And what if my age verification process is wrong? I can’t afford anything fancy. Does that violate the law? Dunno. Won’t be much fun to find out, though.
I apparently need to rewrite all of our terms, privacy policy, and community standards in “clear language suited for children.” Why? Do I need to hire a lawyer to rewrite our terms to then… run them by my children to see if they understand them? Really? Who does that help exactly (beyond the lawyers)?
But then there’s the main part of the law — the “Data Protection Impact Assessment.” This applies to every new feature. Before we can launch it, because it might be accessed by children, we need to create such a “DPIA” for every feature on the site. Our comment system? DPIA. Our comment voting? DPIA. Our comment promotion? DPIA. The ability to listen to our podcast? DPIA. The ability to share our posts? DPIA. The ability to join our insider chat? DPIA. The ability to buy a t-shirt? DPIA. The ability to post our stories to Reddit, Twitter, Facebook, or LinkedIn? DPIA (for each of those, or can we combine them? I dunno). Our feature that recommends similar articles? DPIA. Search? DPIA. Subscribe to RSS? DPIA. DPIA. DPIA. DPIA. Also, every two years we have to review all DPIAs.
Fuck it. No more Techdirt posts. I’m going to be spending all my time writing DPIAs.
We’re also working on a bunch of cool new features at this moment to make the site more useful for the community. Apparently we’ll need to do a data protection impact assessment of all those too. And that’s going to make us that much less likely to want to create these new features or to improve the site.
The DPIAs are not just useless busy work, they are overly broad and introduce massive liability. The Attorney General can demand them and we have three business days to turn over all of our DPIAs.
Many of the DPIAs are crazy intrusive and raise 1st Amendment issues. The DPIA has to analyze whether the feature would expose a child “to harmful, or potentially harmful, content on the online product, service, or feature.” Um. I dunno. Some of our comment discussions get pretty rowdy. Is that harmful? For a child? I mean, the bill doesn’t even define “harmful,” so basically… the answer is I have no fucking clue.
We also have to cover whether or not a child could witness harmful conduct. We’ve written about police brutality many times. Some of those stories have videos or images. So, um, yeah? I guess a kid could potentially witness something “harmful.”
So, now basically EVERY company with a website is going to have to have a written document where they say “yes, our service might, in some random way, enable a child to witness harmful content.” And that’s kind of ridiculous. If you don’t say that, then the state can argue you did not comply with the law and failed to do an accurate DPIA. Yet, if you do say that, how much do you want to bet that it will be used against companies as a weapon? There is some language in the bill about keeping the DPIAs confidential, but, especially at the big companies, they’re going to leak.
I can already predict the NY Times, WSJ, Washington Post headlines screaming about how “Big Tech Company X Secretly Knew Its Product Was Harmful!” That will be misleading as anything, because the only way to fill out a DPIA is to say “um, yes, maybe a child could possibly witness ‘harmful’ (again, undefined in the bill!!) content on this service.” Because that’s just kind of a fact of life. A child might witness harmful content walking down the street too, but we figure out ways to deal with it.
And you have to imagine that DPIAs will be open to discovery and subpoenas in lawsuits, which will then be turned around on companies to insist that they had “knowledge” of the harms that could happen, and that therefore they’re liable, even if the actual harm was connected not to the workings of the site, but to the underlying content.
Part of the DPIA, then, is that once we’ve identified the potential harm, we have to “create a timed plan to mitigate or eliminate the risk before the online service, product, or feature is accessed by children.”
So, um, how do we mitigate the “harm” we might provide? We can’t report on police brutality any more? We can’t have comments any more? Because some undefined “child” (including high school students) out there might access it and witness “harmful” content? How is that possible?
I literally don’t know how to comply with any of this. And, doesn’t that violate the 1st Amendment? Having the government demand I document and mitigate (undefined) “harm” from my site or the content on my site seems like it’s a content moderation bill in disguise, requiring me to “mitigate” the harms (i.e., take down or ban content). And, well, that’s a 1st Amendment problem.
The enforcement of the bill is in the hands of the Attorney General. I doubt the AG is going to go after Techdirt… but, I mean, what if I write something mean about them? Now they have a tool to harass any company, demanding they hand over all their DPIAs and potentially fining them “a civil penalty of not more than two thousand five hundred dollars ($2,500) per affected child for each negligent violation or not more than seven thousand five hundred dollars ($7,500) per affected child for each intentional violation.”
So… if a class of 20 high schoolers decides to visit Techdirt to learn about their civil liberties under attack in California… the AG could then effectively fine me $150,000 (20 kids at the $7,500 intentional-violation rate) for not having mitigated the “harm” they may have endured (the AG would likely have to give me 90 days to “cure” the violation, but as discussed above, there is no cure).
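To make the arithmetic concrete, here’s a minimal sketch in Python (purely illustrative; the function and example are mine, and only the dollar figures come from the fine schedule quoted above) of how that exposure scales:

    # Hypothetical back-of-the-envelope math for the bill's fine schedule,
    # using only the per-child amounts quoted above. Arithmetic, not legal analysis.

    NEGLIGENT_PER_CHILD = 2_500    # $2,500 per affected child, negligent violation
    INTENTIONAL_PER_CHILD = 7_500  # $7,500 per affected child, intentional violation

    def exposure(affected_children: int, intentional: bool = False) -> int:
        """Total potential fine for a violation affecting a given number of children."""
        rate = INTENTIONAL_PER_CHILD if intentional else NEGLIGENT_PER_CHILD
        return affected_children * rate

    # The class-of-20 example from above:
    print(exposure(20))                    # 50000  (negligent rate)
    print(exposure(20, intentional=True))  # 150000 (intentional rate)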
At the very least, this bill would make me extremely nervous about ever criticizing California’s Attorney General (especially if they seem like the vindictive type — and there are plenty of vindictive AGs in other states), because they now have an astounding weapon in their toolbox to harass any company that has a website. As such, this bill — just by existing — suppresses my speech, because it makes us less willing to criticize the Attorney General.
Eric Goldman keeps posting about how this blows up the internet. My guess is that it’s actually going to be almost entirely ignored… until it’s used to bash a company for some other issue. It’s impossible to comply with. It creates a massive amount of busy work for almost every company with a website, and almost all of them will ignore it. The biggest companies will send off their legal teams to write up a bunch of useless DPIAs (which will only create legal liability for them). Mid-sized companies may do the same, though they may also significantly cut back on the kinds of features they’ll add to their websites. But every smaller company is going to just totally ignore it.
And then, any time there’s some other issue that politicians are mad about, the AG will have this stupid thing in their back pocket to slam them with. It’s performative lawmaking at its absolute worst.
And no one can explain how any of this will actually help children.
Here’s the thing that’s particularly stupid about all of this. The underlying premise of the bill is completely disconnected from reality. It’s premised on the idea that most websites don’t have any incentive to be careful with children. Are there some egregious websites out there? Sure. So write a fucking bill that targets them. Not one that wraps in everyone and demands impossible-to-comply-with busy work. Or, JUST USE THE AUTHORITIES THAT ALREADY EXIST. COPPA exists. The California AG already has broad powers to protect California consumers. Use them!
If there are sites credibly and nefariously harming kids, why not use those powers, rather than forcing impossible-to-comply-with busy work on absolutely everyone just in case a bunch of teenagers like the site?
The whole thing is the worst of the worst in today’s tech policymaking. It misunderstands the problem, has no clue what its own law will do, and just creates a massive mess. Again, I think the end result of any such law is that it is mostly ignored. And we shouldn’t be passing a law if the end result is that it’s going to be ignored and basically everyone ends up violating it. That just creates a massive liability risk, because eventually the AG is going to go after companies for this while everyone is ignoring it, and then there will be a flurry of concern.
Honestly, seeing my home state pass a law like this makes me think that California no longer wants internet businesses to be opening up here. Why would you?
But, California politicians need headlines about how they’re “taking on big tech” and “protecting the children” and so we get this utterly disconnected from reality nonsense. No one can possibly comply with it, so now the California Attorney General can target any business with a website.
That just doesn’t seem wise at all.
Filed Under: ab 2273, age appropriate design, coppa, data protection impact assessment, for the children, privacy
FTC Politely Asks Education Companies If They Would Maybe Stop Spying On Kids
Wed, May 25th 2022 06:28am - Karl Bode
If you hadn’t noticed, the U.S. doesn’t give much of a shit about this whole privacy thing. Our privacy regulators are comically and intentionally understaffed and underfunded, we still have no meaningful privacy law for the Internet era, and when regulators do act, it’s generally months after the fact with penalties that are easily laughed off by companies rich from data over-collection.
That apathy extends to kids’ privacy, of course. For years, online education software vendors have engaged in just an absurd level of data over-collection and monetization, with massive data repositories culled via everything from facial recognition to keystroke tracking and deep packet inspection.
During COVID, it became increasingly clear that many of these companies were blocking students from participating in online education if they weren’t willing to agree to extensive monitoring and monetization. In direct response, the FTC announced last week that it had issued a new policy statement reminding these companies that COPPA still exists.
In short, the FTC warned educators and companies against collecting more data than they need, against failing to implement basic privacy and security standards (half a million Chicago-area student records were recently leaked due to a ransomware attack), against selling the data they collect, and against restricting access to educational software for privacy-conscious students and parents:
“Students must be able to do their schoolwork without surveillance by companies looking to harvest their data to pad their bottom line,” said Samuel Levine, Director of the FTC’s Bureau of Consumer Protection. “Parents should not have to choose between their children’s privacy and their participation in the digital classroom. The FTC will be closely monitoring this market to ensure that parents are not being forced to surrender to surveillance for their kids’ technology to turn on.”
Unlike the broader dumpster fire that is adult data collection and monetization on the Internet, kids are at least semi-protected by the sloppy mess that is COPPA, a law that was intended to protect kids’ privacy, but was so poorly written as to create all manner of unintended consequences.
The FTC policy statement notes that edu-software companies that fail to adhere to the COPPA Rule may face potential civil penalties, “and new requirements and limitations on their business practices aimed at stopping unlawful conduct.”
The problem, of course, is that the FTC is tasked with everything from policing accuracy in bleach labeling to protecting folks from scams. But the U.S. is an absolute scam-infested mess, and we (and by we I mean industry lobbyists who want government to be a toothless subsidy machine) have consistently ensured the FTC lacks the staff, resources, or authority to actually do its job on privacy or much of anything else.
So when it does act (which generally involves slam-dunk cases and the lowest-hanging fruit), the punishment can be laughed off as a minor cost of doing business. And that’s assuming litigation doesn’t wind up eliminating the fine altogether later on, when the press and public aren’t paying attention.
That said, it’s clear that the current FTC is actually taking this stuff seriously, which is an improvement over the outright apathy of years past. The DOJ and FTC recently not only hit Weight Watchers with a $1.5 million fine for “illegally harvesting” the personal health information of children, they also required the company to delete the data it collected and scrap the technology used to collect it.
Filed Under: coppa, kids, privacy
ICE Briefly Becomes A Stranded Minor: Loses Its Twitter Account For Being Too Young
from the yeah,-but-only-briefly dept
Yesterday afternoon the Twitter account of the US’s Immigration and Customs Enforcement (ICE) briefly disappeared from the internet. Was it… anti-conservative bias? Nope. Was it ICE doing more stupid shit in locking up children and separating them from their parents? Nope. Was it ICE’s willingness to seize domain names with no evidence, claiming “counterfeit”? Nope. It was that ICE had changed the “birthday” on its account so that its “age” was less than 13. Thanks to the ridiculousness of the Children’s Online Privacy Protection Act (COPPA), which has basically served only to have parents teach their kids it’s okay to lie online in order to use any internet service, most websites say you can’t use the service if you’re under 13 years old. By giving itself a “birthdate” less than 13 years ago, ICE became… shall we say, something of a “stranded minor,” and Twitter automatically, well, “separated it” from its account.
Might be nice for ICE to get a sense of how that feels.
Of course, what this really highlights is the idiocy of COPPA and how nearly every website tries to deal with its requirements. As we noted, Twitter, like many internet sites, outright bars kids under 13 to avoid COPPA’s rules. Twitter does note in its forms that you need to put in your own date of birth, even if your account is “for your business, event, or even your cat.”
But… that’s bizarre. For accounts like this, whose birthday matters? Many such accounts are managed by multiple people. Whose birthday gets put in there? The answer is, of course, that a birthday is just made up. And, if you make it up, apparently you need to make up one that is older than 13 to avoid this COPPA-based “separation.”
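For what it’s worth, the check that tripped up ICE is about as simple as code gets. Here’s a minimal sketch in Python (purely illustrative, and certainly not Twitter’s actual implementation) of the kind of birthdate-based gate many sites use to stay on the right side of COPPA’s under-13 line:

    from datetime import date
    from typing import Optional

    COPPA_MIN_AGE = 13  # the under-13 line that COPPA compliance hinges on

    def age_on(birthdate: date, today: date) -> int:
        """Age in whole years as of `today`."""
        had_birthday = (today.month, today.day) >= (birthdate.month, birthdate.day)
        return today.year - birthdate.year - (0 if had_birthday else 1)

    def allowed_to_register(birthdate: date, today: Optional[date] = None) -> bool:
        """Reject the signup if the stated birthdate puts the user under 13."""
        today = today or date.today()
        return age_on(birthdate, today) >= COPPA_MIN_AGE

    # A stated birthdate less than 13 years back gets the account locked out,
    # which is roughly what happened to ICE; an adult birthdate passes.
    print(allowed_to_register(date(2010, 6, 1), today=date(2020, 11, 4)))  # False
    print(allowed_to_register(date(1990, 1, 1), today=date(2020, 11, 4)))  # True

Note that the gate can only check whatever birthdate the user types in, which is exactly why the practical result is people entering a made-up date that clears the bar.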
Anyway, ICE figured stuff out and made a little joke about it:
Frankly, I’d rather they focused on actually helping asylum seekers find protection in our country rather than tossing them out, and maybe put some of that effort into reuniting the 666 kids with the families ICE has lost track of.
Filed Under: age restrictions, birthday, coppa, ice
Companies: twitter
Bad Ideas: Raising The Arbitrary Age Of Internet Service 'Consent' To 16
from the want-to-piss-off-high-schoolers? dept
We all know various ideas for “protecting privacy online” are floating around Congress, but must all of them be so incredibly bad? Nearly all of them assume a world that doesn’t exist. Nearly all of them assume an understanding of “privacy” that is not accurate. The latest dumb idea is to expand COPPA — the Children’s Online Privacy Protection Act — which was put in place two decades ago and has been a complete joke. COPPA’s sole success is in getting everyone to think that anyone under the age of 13 isn’t supposed to be online. COPPA’s backers have admitted that they used no data in creating the law and have done no research into its effectiveness. Indeed, actual studies have shown that COPPA’s real impact is in having parents teach their kids it’s okay to lie about their age online in order to access the kinds of useful services they want to use.
The “age of consent” within COPPA is 13 — and that’s why a bunch of sites claim you shouldn’t use their site if you’re under that age. Because if a site is targeting people under that age, then it has to go through extensive COPPA compliance, which most sites don’t want to do. The end result: sites say “don’t sign up if you’re under 13” and then lots of parents (and kids) lie about ages in order to let kids access those sites. It doesn’t actually protect anyone’s privacy.
So… along comes Congress and they decide the way to better protect privacy online is to raise that “age of consent” to 16.
The “Preventing Real Online Threats Endangering Children Today Act” is sponsored by Republican Rep. Tim Walberg of Michigan and Democratic Rep. Bobby Rush of Illinois.
The legislation would also require parental consent before companies can collect personal data like names, addresses and selfies from children under 16 years old. That’s up from 13 years old under the 1998 Children’s Online Privacy Protection Act.
Because we all know that teenagers are always truthful online and, dang, are they going to totally love the idea that they need their parents’ permission to use 99% of the internet. That’s really going to solve the problems now, right?
Of course not. It’s just going to teach more kids to lie about their birth dates when they sign up for internet accounts. Or, alternatively, it will overly punish the few honest kids who refuse to sign up for accounts until they’re 16. But, hey, why should Congress care about that when they’re “protecting the children”?
Filed Under: bobby rush, congress, coppa, internet, kids, lying, privacy, tim walberg