parental controls – Techdirt

Judge Rejects Yet Another Attempt By Texas To Police Online Speech

from the another-one-bites-the-dust dept

Everything’s bigger in Texas, including the legislature’s willingness to pass laws that clearly violate the First Amendment rights of websites. This is now the third Texas law directed at website moderation practices in the last three years to be thrown out by a district court as an unconstitutional violation of the First Amendment. You’d think maybe the state’s leaders, who claim to be big First Amendment supporters, would recalibrate.

Remember, Texas was one of the earliest states to pass a law that sought to block social media from moderating content, claiming that allowing websites to have such editorial control harmed the free speech rights of citizens. Amusingly, with this new law the same state of Texas is ordering platforms to moderate other content, insisting that mandated takedowns are no violation of the First Amendment at all.

So on the one hand, the Texas state legislature thinks it can tell websites what content they can’t take down. And on the other, it thinks it can tell them what content they have to take down. It was wrong both times.

Texas HB 18 is one of a large and growing number of laws seeking to “protect the children online” with unconstitutional restrictions. These laws are showing up in red states, blue states, and everything in between. This one includes provisions requiring age verification on any social media site, blocking of certain types of content for minors, and some level of parental controls.

The list of these laws and the challenges to them is growing so long that it’s easy to miss some of them. So I had seen that CCIA and NetChoice had challenged Texas HB 18 and had put it on my list of things to write up eventually. However, by the time I got around to it, we’d already had the first major decision in the case, enjoining significant parts of the law as unconstitutional under the First Amendment.

It’s a mostly good ruling by Judge Robert Pitman, who had also made an amazingly good ruling three years ago throwing out Texas’ other social media content moderation law. The Fifth Circuit then made a total mess of things, leading the Supreme Court to recently send the case back, noting how much of a mess the Fifth Circuit had made.

In this case, though, the ruling is a bit more of a mixed bag. It’s mostly good in that it calls out the most obviously unconstitutional bits and blocks Texas from enforcing them. But there’s more that’s maybe a little problematic, as I’ll explain at the end.

Also, this case is up against the backdrop of a third bad Texas law, the one requiring age verification for adult content websites. Last year, when that law was challenged, a different judge (Judge David Alan Ezra, who is technically based in Hawaii, but was hearing Texas cases because Texas doesn’t have enough judges) pointed out how obviously unconstitutional age verification is. Once again, the Fifth Circuit then made a mess of things, saying that it could ignore multiple precedents and that age verification was fine. The Supreme Court recently agreed to hear that case, meaning that at least some part of this law (which has an age verification component) is going to need to wait until the Supreme Court sorts out the previous case.

That said, in a post-Moody world, the Supreme Court has said that courts hearing facial challenges to internet regulations must walk through every possible element of the law to determine if the whole thing needs to be thrown out. Thus, Judge Pitman walks through every last bit.

Texas AG Ken Paxton sought to block the case on a bunch of technicalities, but his efforts failed. It’s not worth going through the details here other than to note that Paxton challenged “associational standing.” This is something that Justice Clarence Thomas has been whining about lately, saying that trade associations (like CCIA and NetChoice) shouldn’t have standing to bring these challenges. However, as we’ve explained in great detail, that would be a disaster. Companies are easier to pressure into not challenging laws, whereas trade groups have a lot more independence.

Also, we have a very long history of trade groups being told they do have standing. This would be a major change, and thankfully Pitman doesn’t take the bait.

Then we get to the main show: the First Amendment. Pitman notes that the law clearly impacts speech, and thus must pass strict scrutiny to survive. Paxton tried to claim, based on the mess the Fifth Circuit made in the original social media law case, that strict scrutiny would not apply, or at least not apply to the entire law. And Pitman responds with a “hey, did you not notice that the Supreme Court wiped out that ruling?”

In response, Paxton suggests that these arguments are foreclosed by the Fifth Circuit. (Resp., Dkt. 18, at 24 (citing NetChoice, LLC v. Paxton, 49 F.4th 439, 480 (5th Cir. 2022) (“NetChoice I”), vacated and remanded sub nom. Moody v. NetChoice, LLC, 144 S. Ct. 2383 (2024))). In NetChoice I, the Fifth Circuit rejected a similar argument brought by Plaintiffs by holding that regulations targeting social media did “not render [the law at issue] content-based because the excluded websites are fundamentally dissimilar mediums.” NetChoice I, 49 F.4th at 480.

That ruling is no longer binding because the Supreme Court vacated NetChoice I, “void[ing] each of the judgment’s holdings.” Doe v. McKesson, 71 F.4th 278, 286 (5th Cir. 2023); see also Moody, 144 S. Ct. at 2409 (vacating judgment). Paxton suggests that Moody “effectively confirmed, or at least did not disturb, the Fifth Circuit’s analysis on this point.” But the Supreme Court “disturbed” the analysis when it vacated the opinion. Paxton suggests that the Supreme Court’s own opinion “also rebuffed” Plaintiffs theory—but it is not clear how. (See Resp., Dkt. 18, at 24). To the contrary, the Supreme Court expressly stated that “there has been enough litigation already to know that the Fifth Circuit, if it had stayed the course, would get wrong at least one significant input . . . . ” Moody, 144 S. Ct. at 2349. While the Supreme Court did not determine “whether to apply strict or intermediate scrutiny[,]” that was only because “Texas’s law [did] not pass” either intermediate or strict scrutiny, at least applied to key respects of the law

Pitman points out that courts in Ohio and Mississippi had found problems with similar laws, especially when they treat different kinds of content differently. This law, like the ones in those other states, tried to narrowly mandate controls for social media while explicitly carving out “news” sites, which shows that the law discriminates based on content.

Like the district courts in Yost and Fitch, this Court finds that HB 18 discriminates based on the type of content provided on a medium, not just the type of medium. A DSP that allows users to socially interact with other users but “primarily functions to provide” access to news or commerce is unregulated. An identical DSP, with the exact same medium of communication and method of social interaction, but “primarily functions to provide” updates on what a user’s friends and family are doing (e.g., through Instagram posts and stories), is regulated. If there is a difference between the regulated DSP and unregulated DSP, it is the content of the speech on the site, not the medium through which that speech is presented. When a site chooses not to primarily offer news but instead focus on social engagement, it changes from an uncovered to covered platform. But the type of medium has not changed, only the content primarily expressed on the platform.

In sum, strict scrutiny applies to HB 18’s provisions because the law regulates DSPs based on the content of their speech and the identity of the speaker

Because of this, Paxton will need to satisfy strict scrutiny, which means showing that the law is the “least restrictive means of achieving a compelling state interest.” Because of the Moody ruling, the court agrees to go provision-by-provision on this question. And thus, the “monitoring and filtering” provisions of the law fail as unconstitutional. There’s some good language in here, even though the Fifth Circuit will probably wipe it out in a few months.

These requirements force providers to develop strategies to “prevent [a] known minor’s exposure to harmful material and other content that promotes, glorifies, or facilitates: (1) suicide, self-harm, or eating disorders; (2) substance abuse; (3) stalking, bullying, or harassment; or (4) grooming, trafficking, child pornography, or other sexual exploitation or abuse.” HB 18 § 509.053. Irrespective of whether HB 18 as a whole is content-based, there can be little dispute that this provision is. The monitoring-and-filtering requirements explicitly identify discrete categories of speech and single them out to be filtered and blocked. That is as content based as it gets.

It is far from clear that Texas has a compelling interest in preventing minors’ access to every single category of information listed above. Some interests are obvious—no reasonable person could dispute that the state has a compelling interest in preventing minors from accessing information that facilitates child pornography or sexual abuse. See Sable Commc’ns of California, Inc. v. FCC, 492 U.S. 115, 126 (1989) (“[T]here is a compelling interest in protecting the physical and psychological well-being of minors.”). On the other end, many interests are not compelling, such as regulating content that might advocate for the deregulation of drugs (potentially “promoting” “substance abuse”) or defending the morality of physician-assisted suicide (likely “promoting” “suicide”). See Brown v. Ent. Merchants Ass’n, 564 U.S. 786, 794–95 (2011) (“No doubt a State possesses legitimate power to protect children from harm, but that does not include a free-floating power to restrict the ideas to which children may be exposed.”) (internal citation omitted). The Supreme Court has repeatedly emphasized that “[s]peech that is neither obscene as to youths nor subject to some other legitimate proscription cannot be suppressed solely to protect the young from ideas or images that a legislative body thinks unsuitable for them.” Erznoznik v. Jacksonville, 422 U.S. 205, 213–13 (1975). Much of the regulated topics are simply too vague to even tell if it is compelling. Terms like “promoting,” “glorifying,” “substance abuse,” “harassment,” and “grooming” are undefined, despite their potential wide breadth and politically charged nature. While these regulations may have some compelling applications, the categories are so exceedingly overbroad that such a showing is unlikely.

The judge notes that even if you could make a case that the state has a compelling interest in stopping some of these categories of content, the law is not narrowly tailored enough to meet strict scrutiny.

As in Fitch, Paxton “has not shown that the alternative suggested by [Plaintiffs], a regime of providing parents additional information or mechanisms needed to engage in active supervision over children’s internet access would be insufficient to secure the State’s objective of protecting children.” 2024 WL 3276409, at *12. By contrast, Plaintiffs have demonstrated that many DSPs do implement content-moderation policies to ensure that minors cannot access harmful content. (Mot. Prelim. Inj., Dkt. 6, at 22). And Paxton has not shown that methods such as “hash-sharing technology” and publishing depictions of filtered content are necessary to prevent harm to minors. In short, HB 18 does not employ “the least restrictive means” to stop minors from accessing harmful material. See United States v. Playboy Ent. Grp., 529 U.S. 803, 813 (2000).

HB 18 also employs overbroad terminology. Again, the monitoring-and-filtering requirements impose sweeping ex-ante speech restrictions, akin to prior restraints, but does little more than vaguely gesture at what speech must be restrained. For example, what does it mean for content to “promote” “grooming?” The law is not clear. So, by requiring filtering as a matter of law with only vague reference to what must be filtered, HB 18 will likely filter out far more material than needed to achieve Texas’s goal

And then there’s the problem that all these laws have. They only cover some sites that host the content Texas finds so problematic.

More problematically, the law is underinclusive. A law that “is wildly underinclusive when judged against its asserted justification . . . is alone enough to defeat it.” Brown, 564 U.S. at 802. Websites that “primarily” produce their own content are exempted, even if they host the same explicitly harmful content such as “promoting” “eating disorders” or “facilitating” “self-harm.” The most serious problem with HB 18’s under-inclusivity is it threatens to censor social discussions of controversial topics. “[S]ocial media in particular” operates as one of “most important places . . . for the exchange of views . . . .” Packingham, 582 U.S. at 104. But HB 18 specifically cuts teenagers off from this critical “democratic forum[] of the Internet” even though the same harmful content is available elsewhere. Reno v. ACLU, 521 U.S. 844, 868 (1997). A teenager can read Peter Singer advocate for physician-assisted suicide in Practical Ethics on Google Books but cannot watch his lectures on YouTube or potentially even review the same book on Goodreads. In its attempt to block children from accessing harmful content, Texas also prohibits minors from participating in the democratic exchange of views online. Even accepting that Texas only wishes to prohibit the most harmful pieces of content, a state cannot pick and choose which categories of protected speech it wishes to block teenagers from discussing online. Brown, 564 U.S. at 794–95.

Pitman also notes that some of the language of the law is so vague as to make it unconstitutional as well:

Begin with the verbs: promote, glorify, and facilitate. One of those words—“promote”—has already been held to be vague when regulating First Amendment activity. In Baggett v. Bullitt, 377 U.S. 360, 371–72 (1964), the Supreme Court dealt with a regulation that imposed a loyalty oath for teachers to swear that they will “promote respect for the flag and the institutions of the United States.” (emphasis added). The Supreme Court found that the term “promote” was “very wide indeed” and failed to “provide[] an ascertainable standard of conduct.” Id. In response, Paxton suggests that Baggett dealt with a “‘wildly different situation’ than this one.” (Resp., Dkt. 18, at 38). But, if anything, the vagueness is more problematic under HB 18, because the law requires social media DSPs to guess which broad categories of speech, likely constituting billions of posts, must be filtered from view. So the wide-ranging meanings of “promote” will result in wide-ranging censorship of speech.

The problem is even more acute with the term “glorifying.” The word encompasses so wide an ambit that people “of common intelligence” can do no more than guess at its application. McClelland, 63 F.4th at 1013. To “glorify” potentially includes any content that favorably depicts a prohibited topic, leaving no clear answer on what content must be filtered. Do liquor and beer advertisements “glorify” “substance abuse?” Does Othello “glorify” “suicide?” Given the substantial liability companies face for failing to comply (to say nothing of the private rights of action), it is reasonable to expect that companies will adopt broad definitions that do encompass such plainly protected speech.

Other parts of the law have definition problems too:

The final issue for HB 18 is that the law fails to define key categories of prohibited topics, including “grooming,” “harassment,” and “substance abuse.” At what point, for example, does alcohol use become “substance abuse?” When does an extreme diet cross the line into an “eating disorder?” What defines “grooming” and “harassment?” Under these indefinite meanings, it is easy to see how an attorney general could arbitrarily discriminate in his enforcement of the law. See Smith v. Goguen, 415 U.S. 566, 575 (“Statutory language of such a standardless sweep allows [] prosecutors[] and juries to pursue their personal predilections.”). These fears are not too distant—pro-LGBTQ content might be especially targeted for “grooming.” See Little v. Llano Cnty., No. 1:22-CV-424-RP, 2023 WL 2731089, at *2 (W.D. Tex. Mar. 30, 2023) (finding that several books supporting pro-LGBTQ views were removed from library shelves for allegedly promoting “grooming”), aff’d as modified, 103 F.4th 1140 (5th Cir. 2024), reh’g en banc granted, opinion vacated, 106 F.4th 426 (5th Cir. 2024). Content related to marijuana use might be prosecuted as “glorifying” “substance abuse,” even if cigarette and alcohol use is not. This vast indefinite scope of enforcement would “effectively grant[] [the State] the discretion to [assign liability] selectively on the basis of the content of the speech.” City of Houston, Tex. v. Hill, 482 U.S. 451, 465 n.15 (1987). Such a sweeping grant of censorial power cannot pass First Amendment scrutiny

The court also finds that Section 230 preempts Texas’ law. This is an issue we’ve brought up with many state laws, which the courts have mostly ignored for a few years. Section 230 is clear that it preempts any state law that attaches liability to that which Section 230 immunizes. Pitman points out that this is clearly the case with this law.

Paxton said that the 230 preemption shouldn’t apply because the law wouldn’t hold platforms liable for third-party content, but rather for just violating the law itself. Judge Pitman points out that this is not how anything works:

Imagine that Texas passed a law stating, “Social media websites must remove defamatory content.” Under Paxton’s broad reading of Free Speech Coalition, the law would not be preempted because liability attaches based on whether a website complies with the law, not based on its content. That reasoning would altogether nullify Section 230 by having the same effect as directly imposing liability on the website for hosting third-party content. Section 230 provides “broad immunity” for providers for “all claims stemming from their publication of information created by third parties.” MySpace, 528 F.3d at 418 (emphasis added). Liability under HB 18 stems from the content it hosts, even if liability directly attaches based on compliance with the law. Accordingly, the Court finds that Section 230 preempts HB 18’s monitoring and filtering requirements.

That said, there is still one part of the ruling that is problematic. The judge allows the “data privacy, parental control, and disclosure provisions” to go forward, saying that CCIA & NetChoice failed to show how those provisions violate the First Amendment.

It remains possible that each provision will fail under strict scrutiny. But that is not a given. And it is not certain to be the case under HB 18, where many provisions seem to regulate conduct and only incidentally burden speech (if at all). See Moody, 144 S. Ct. at 2402 n.4. Plaintiffs do not show how Section 509.052 places any burden on speech by prohibiting the collection of PII and geolocation data. This is primarily a regulation of conduct, so it is not clear that the law restricts or even burdens speech. Similarly, it is not clear that a law requiring parents to be allowed to access and change their children’s privacy settings implicates First Amendment concerns. Overall, these provisions likely primarily regulate conduct, and while the Court can conceive of ways in which they do burden speech (e.g., reducing the hours a child may spend consuming speech on social media), that point is not sufficiently developed at this stage.

Hopefully, this will change with more briefing, as all three of those have serious First Amendment issues associated with them. Parental controls obviously impact the First Amendment rights of children. The disclosure provisions impact issues around compelled speech of platforms, some of which were discussed in the recent Ninth Circuit ruling in the NetChoice v. Bonta decision.

But still, on the whole, this is a good ruling. Now we just need to wait for the Fifth Circuit to mess it all up.

Filed Under: 1st amendment, 5th circuit, content moderation, filtering, hb 18, ken paxton, parental controls, robert pitman, strict scrutiny, texas
Companies: ccia, netchoice

from the down-goes-another-one dept

Last month we wrote about NetChoice suing Ohio over its “Parental Notification by Social Media Act,” in which I filed a declaration highlighting how problematic the law would be for a site like Techdirt. By the time we’d finished the article about the lawsuit, a federal judge had already granted a temporary restraining order, blocking the law from going into effect. The law was incredibly problematic for many reasons, but like some other laws, the whole idea was to make websites get “parental consent” for kids using social media.

As the original complaint noted, this law violated the constitution in multiple ways:

First, the Act imposes blanket parental-consent requirements for minors to access and engage in all manner of protected speech across a wide swath of websites. Courts have not hesitated to invalidate similar efforts to limit the speech by and to minors. E.g., Brown, 564 U.S. at 799 (rejecting parental-consent requirements for violent video games). Indeed, the Supreme Court has rejected the idea that the government has “the power to prevent children from hearing or saying anything without their parents’ prior consent.”….

Second, the First Amendment problems are heightened here because the Act is unconstitutionally both content-based and speaker-based and baldly discriminates among online operators based on the type of speech they publish. For example, the Act exempts “established and widely recognized media outlet[s], the primary purpose of which is to report news and current events.” Ohio Rev. Code § 1349.09(O)(2). Yet it regulates media outlets that are not “established” or “widely recognized” and mixed-purpose outlets that cover news and current events in addition to other types of media….

Third, the Act is unconstitutionally vague. Its central coverage provision applies to websites that “target[] children, or [are] reasonably anticipated to be accessed by children.” Ohio Rev. Code § 1349.09(B)(1). Websites have no way to know what this means.

And now the judge overseeing the case, Judge Algenon Marbley, has agreed. This might not have been a surprise, given the quickness of the temporary restraining order, but the reasoning is laid out in more detail in the ruling granting the preliminary injunction, which effectively kills the law as unconstitutional.

Ohio, in its response, tried to claim that this law had nothing to do with speech, but was about “the right to contract.” This is something we’ve seen in many of these laws. The states argue “this isn’t about speech, it’s about privacy,” or “it’s about data,” or “it’s about contract,” or “it’s about safety.” None of these excuses should fly, and thankfully, they don’t here either.

Despite the “challenges of applying the Constitution to ever-advancing technology,” Brown v. Ent. Merchants Ass’n, 564 U.S. 786, 790 (2011), the First Amendment implications of the Act come into focus when social media operators are thought of as publishers of opinion work—a newspaper limited to “Letters to the Editor,” or a publisher of a series of essays by different authors. The analogy is an imperfect one—social media operators are arguably less involved in the curation of their websites’ content than these traditional examples. But the comparison helps clarify that the Act regulates speech in multiple ways: (1) it regulates operators’ ability to publish and distribute speech to minors and speech by minors; and (2) it regulates minors’ ability to both produce speech and receive speech. And as NetChoice points out, this Court is unaware of a “contract exception” to the First Amendment. Indeed, neither party references any such authority. Like many of NetChoice’s member organizations, a publisher stands to profit from engagement with consumers. That an entity seeks financial benefit from its speech does not vitiate its First Amendment rights.

There is a very long (and detailed and thoughtful) discussion regarding what standard of scrutiny should be applied to the law, which I won’t cover in detail here, but it’s a fun read if you’re into that sorta thing.

The judge picks up on something I focused on in my declaration: the law includes a carveout for “widely recognized” media outlets. I questioned (1) how one could know whether or not one was a widely recognized media outlet and (2) why only widely recognized media outlets deserved such an exception. The judge also seems to realize this is problematic.

The exceptions to the Act for product review websites and “widely recognized” media outlets, however, are easy to categorize as content based. It is noteworthy that the exceptions for media outlets and product review sites do, in part, define exempted speakers by the fact that “interaction between users is limited to” public comments. § 1349.09(O). Presumably, the public nature of comments—as opposed to private chats—reduces the predation risk to minors that Defendant argues covered operators pose. (See ECF No. 28-4 at 4). Even assuming, however, that requiring parental approval before a minor can engage in private user interaction is one of the Act’s goals—and a constitutionally sound one—the exceptions as written still distinguish between the subset of websites without private chat features based on their content. For example, a product review website is excepted, but a book or film review website, is presumably not. (ECF No. 29 at 14). The State is therefore favoring engagement with certain topics, to the exclusion of others. That is plainly a content-based exception deserving of strict scrutiny.

The court (correctly) leans on the Supreme Court’s important 2011 ruling in Brown v. Entertainment Merchants Association, which tossed out California’s law mandating video game ratings “to protect the children.” That ruling made clear that kids have First Amendment rights too.

Particularly relevant here is the Supreme Court’s analysis in Brown v. Ent. Merchs. Ass’n, which invalidated a California regulation prohibiting the sale of violent video games to minors. There, the Supreme Court reasoned that even if “the state has the power to enforce parental prohibitions”—for example, enforcing a parent’s decision to forbid their child to attend an event— “it does not follow that the state has the power to prevent children from hearing or saying anything without their parents’ prior consent.” Id. at 795 n.3. As the Court explained, “[s]uch laws do not enforce parental authority over children’s speech and religion; they impose governmental authority, subject only to a parental veto.” Id. The Act appears to be exactly that sort of law. And like content-based regulations, laws that require parental consent for children to access constitutionally protected, non-obscene content, are subject to strict scrutiny.

Having established that strict scrutiny is the right standard, the judge’s analysis is pretty straightforward. The law simply does not come close to meeting the necessary bar. The law is not narrowly tailored:

Conclusively, though, the Act is not narrowly tailored to protect minors against oppressive contracts. The Act regulates access to and dissemination of speech when it could instead seek to regulate the—arguably unconscionable—terms of service that these platforms require. The Act is also underinclusive with respect to this interest. For example, as NetChoice explains, a child can still agree to a contract with the New York Times without their parent’s consent, but not with Facebook.

[….]

Foreclosing minors under sixteen from accessing all content on websites that the Act purports to cover, absent affirmative parental consent, is a breathtakingly blunt instrument for reducing social media’s harm to children. The approach is an untargeted one, as parents must only give one-time approval for the creation of an account, and parents and platforms are otherwise not required to protect against any of the specific dangers that social media might pose. See Brown, 564 U.S. at 802 (concluding that legislation preventing minors from buying violent video games was “seriously underinclusive” because the “Legislature is perfectly willing to leave this dangerous, mind-altering material in the hands of children so long as one parent . . . says it’s OK. . . . That is not how one addresses a serious social problem.”).

And finally, with respect to the rights of parents, Attorney General Yost fails to distinguish the State’s purported interest from an analogous—and rejected—state interest in Brown. When the State of California tried a similar argument—that the legislation prohibiting minors from purchasing violent video games was “justified in aid of parental authority”—the Supreme Court noted that it doubted “punishing third parties for conveying protected speech to children just in case their parents disapprove of that speech is a proper governmental means of aiding parental authority.” Brown, 564 U.S. at 802. More conclusively, however, the Court detailed a series of preexisting protections to help parents—just as there are here—such that “filling the remaining modest gap in concerned parents’ control can hardly be a compelling state interest.” Id. at 803. And the legislation was also overinclusive, in that it enforced a governmental speech restriction, subject to parental veto, as opposed to protecting only the interests of genuinely concerned parents. Id. at 804. That is, some parents simply may not care. Id. The same is true here.

Also, the judge finds that the law is likely unconstitutionally vague, such as in the description of “widely recognized” media that concerned me so much personally in my declaration:

The Act also contains an eyebrow-raising exception for “established” and “widely recognized” media outlets whose “primary purpose” is to “report news and current events,” the speaker- and content-based flavor of which are discussed further below. § 1349.09(O)(2). But the Act also provides no guardrails or signposts for determining which media outlets are “established” and “widely recognized.” Such capacious and subjective language practically invites arbitrary application of the law.

And thus the law is not going into effect and is blocked by the preliminary injunction. As is standard practice in these cases (all of which NetChoice keeps winning at the district court level), I expect that Ohio will take the case to an appeals court.

But, maybe, just hear me out: states should stop passing obviously unconstitutional laws that attack the free speech rights of both users and websites?

Filed Under: 1st amendment, dave yost, due process, free speech, ohio, parental controls, vagueness
Companies: netchoice

Meta Joins Google In Turning Its Back On The Open Web, And Embracing Unconstitutional Mandates That Pretend To ‘Protect The Children’

from the keep-pulling-up-the-ladder dept

A month ago we wrote about Google effectively “pulling up the ladder” on the open internet by embracing age verification mandates as part of a regulatory approach to child safety. As we pointed out at the time, this is bizarre and stupid for a variety of reasons, but also not too surprising.

It’s bizarre because mandates like that have recently been found unconstitutional by multiple courts. It’s stupid because there’s little evidence that age verification does anything useful, and lots of evidence that it’s actually dangerous, because in verifying ages, services need to collect a lot of personal data which is then put at risk.

It’s not surprising because the big tech companies are now facing real competition for the first time in a while, and have learned that regulatory mandates may become their only useful moat against upstart competitors.

As we noted in that original article, while Google had been somewhat better than others, Meta had already shown a willingness to throw the open internet under the bus to appease regulators. It turned the political tide on FOSTA by supporting it wholeheartedly. It has loudly embraced support for reforming Section 230, which would give Meta a huge leg up on competitors who could no longer rely on 230’s protections.

So it should be little surprise that Meta has now come out with a similar statement to Google’s, embracing mandates on parental controls. As with FOSTA, which Sheryl Sandberg wrote about in highly emotional language, Meta’s embrace of these mandates came from the company’s “Global Head of Safety,” Antigone Davis, and again uses emotional appeals.

My daughter was 12 years old when we gave her her first phone. It wasn’t an easy decision, and I agonized over whether it was the right time. As a former teacher, advisor to a state attorney general, and now an executive at Meta — I’ve dedicated my career to protecting children online. You’d think I would be confident of the right rules and guardrails to put in place for my daughter, but I worried all the same.

If you’re focused on child safety, maybe you could start by not using your child as a prop in your political ploy?

And, look, if Meta wants to use age verification or set up parental controls, it should go for it. After all, the latest less-redacted version of the lawsuit dozens of states filed against Meta shows that the company “routinely documented” how those under 13 were using Instagram, despite being banned under the company’s terms. So, it’s a bit rich for Meta to now say that there need to be government mandates for such technology. Why not just implement it themselves?

Again, the reality here is that Meta seems focused on pulling up that open internet ladder, and you can see it in the details of this new announcement. First of all, the demand for a mandate would mean that other, much smaller competitors would also have to implement the expensive and ineffective technology that Meta never did, limiting their ability to grow.

But even more nefarious is that Meta’s embrace of these mandates also seeks to make sure that the major part of the burden doesn’t fall on Meta, but on Apple and Google, because it suggests the age verification and parental controls should take place in the app stores.

Parents should approve their teen’s app downloads, and we support federal legislation that requires app stores to get parents’ approval whenever their teens under 16 download apps. With this solution, when a teen wants to download an app, app stores would be required to notify their parents, much like when parents are notified if their teen attempts to make a purchase. Parents can decide if they want to approve the download. They can also verify the age of their teen when setting up their phone, negating the need for everyone to verify their age multiple times across multiple apps.
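
To make Meta’s ask a bit more concrete: what the company is describing is a gatekeeping step at install time, handled by the app store rather than by Meta. Here’s a minimal sketch of that flow, purely for illustration; every type, class name, and the under-16 cutoff below are hypothetical stand-ins for the proposal, not any real app store API:

```typescript
// Purely illustrative sketch of the app-store approval flow Meta describes.
// All names here are hypothetical; no real app store exposes this API.

type ApprovalDecision = "approved" | "denied" | "pending";

interface DownloadRequest {
  childAccountId: string;
  appId: string;
  requestedAt: Date;
}

interface ParentNotifier {
  // In a real store this would be a push notification to the parent's device.
  notify(parentId: string, request: DownloadRequest): void;
}

class HypotheticalFamilyStore {
  private decisions = new Map<string, ApprovalDecision>();

  constructor(
    private parentOf: Map<string, string>, // child account -> linked parent account
    private ageOf: Map<string, number>,    // age asserted once at device setup
    private notifier: ParentNotifier,
  ) {}

  // A teen taps "install": under-16 accounts are held for parental approval,
  // everyone else installs immediately.
  requestDownload(req: DownloadRequest): ApprovalDecision {
    const age = this.ageOf.get(req.childAccountId) ?? Infinity;
    if (age >= 16) return "approved";

    const parentId = this.parentOf.get(req.childAccountId);
    if (!parentId) return "denied"; // no linked parent, no install

    this.notifier.notify(parentId, req);
    this.decisions.set(this.key(req), "pending");
    return "pending";
  }

  // Called when the parent responds to the notification.
  recordParentDecision(req: DownloadRequest, approved: boolean): void {
    this.decisions.set(this.key(req), approved ? "approved" : "denied");
  }

  private key(req: DownloadRequest): string {
    return `${req.childAccountId}:${req.appId}`;
  }
}

// Example: a 14-year-old requests an app, and the linked parent gets pinged.
const store = new HypotheticalFamilyStore(
  new Map([["teen-1", "parent-1"]]),
  new Map([["teen-1", 14]]),
  { notify: (parentId, req) => console.log(`Ask ${parentId} to approve ${req.appId}`) },
);
console.log(store.requestDownload({
  childAccountId: "teen-1",
  appId: "example-app",
  requestedAt: new Date(),
})); // "pending"
```

Note that the whole scheme hinges on the age asserted once at device setup and on there being a linked, cooperative parent account, which is exactly where the problems start.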

Of course, as we keep pointing out, for many kids, parents are the problem. How would this kind of system work when there is an LGBTQ child searching for information or communities where they can express themselves or learn, and they have parents who are not open minded about such things?

This is a proposal that would harm many kids. Over and over again the research shows that one of the most important parts of the internet in kids’ lives is that they can use it to find their communities and better explore their own identities.

Yes, there’s a role for parents: teaching kids how to be safe, and when to ask for help. But that should never mean that parents (especially of teenagers) spy on every little thing that a kid does.

Yet that’s what Meta’s proposal is suggesting.

Teaching kids to be good digital citizens means teaching them about the dangers online, how to recognize them and keep themselves safe. Meta’s proposal is inserting Google and Apple as gatekeepers and taking agency away from kids, making them less prepared for the adult world that they’ll have to deal with eventually.

It’s fundamentally an approach that undermines helping kids grow into adults, all while pretending to “protect” the kids. Protect them from real life? Protect them by making sure their parents spy on every app they use? It’s a horrible plan, and a cynical one from a company that has embraced a cynical, political approach to everything.

Yes, Meta is getting hit from all sides about child safety claims, much of it driven by moral panics. So I get the decision to come out with some sort of plan to claim to politicians that “we’re doing something, and we’re not against regulations.” But this plan is bad, it’s dangerous, it’s cynical, and it appears designed to be anti-competitive at the same time.

Filed Under: age verification, antigone davis, app stores, parental controls, protect the children
Companies: meta

New York Pushing Yet Another Unconstitutional Social Media Age Verification Bill

from the will-it-never-end? dept

It never ends with these moral-panic-driven, blatantly unconstitutional state bills “for the children.” The latest, from New York state Senator Andrew Gounardes and Assemblymember Nily Rozic, was announced this week with direct support from NY Governor Kathy Hochul (who has been pushing for such unconstitutional bills for a while now, mainly to redirect attention away from her own failures as a governor).

The bills, the New York Child Data Protection Act and the Stop Addictive Feeds Exploitation (SAFE) for Kids Act (which doesn’t appear to have text live just yet), seem to be taking a page from equally censorial bills that have already been ruled unconstitutional in places like Arkansas and California. The SAFE bill is actually quite similar to a bill in Utah, which hasn’t been challenged yet, but I have to believe it will be soon, and it’s equally unconstitutional. Incredibly, the Data Protection Act itself cites both the Utah bill AND California’s Age Appropriate Design Code, even though the latter has already been declared unconstitutional by a federal judge! Incredible.

When you’re introducing a bill by citing as inspiration a bill that has already been declared unconstitutional, you might just be a grandstanding fool.

Either way, this shows again how this issue isn’t a “red state” or a “blue state” issue, but that politicians across the political spectrum are cynically stomping on the rights of children and adults to get headlines claiming (falsely) that they’re “protecting” the children.

As with Utah’s bill, New York’s SAFE Act will require parental consent for anyone under age 18 to have a social media account, which means that if you’re an LGBTQ+ child and your parent disapproves of your identity, they can cut you off from your community support. I understand why Republican governors like Spencer Cox might want that, but why are Democrats in New York pushing for bills that will do such harm?

It will also require “default chronological feeds” rather than algorithmically generated feeds, even though a recent study of chronological feeds found that they expose users to more misinformation than algorithmic feeds.
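
For what it’s worth, the technical distinction the bill draws is simple to state. A toy sketch follows; the field names are hypothetical, and real ranking systems use far more signals than a single score:

```typescript
// Toy illustration of "chronological" vs. "algorithmic" feeds.
// Field names are hypothetical; real platforms use many more signals.

interface Post {
  id: string;
  postedAt: Date;
  predictedEngagement: number; // e.g., a model's estimate of likes/replies
}

// "Default chronological feed": newest first, nothing else considered.
function chronologicalFeed(posts: Post[]): Post[] {
  return [...posts].sort((a, b) => b.postedAt.getTime() - a.postedAt.getTime());
}

// "Algorithmically generated feed": ranked by a predicted-engagement score.
function rankedFeed(posts: Post[]): Post[] {
  return [...posts].sort((a, b) => b.predictedEngagement - a.predictedEngagement);
}
```

The catch, presumably, is that dropping the ranking step also drops whatever downranking of junk the ranked feed was doing, which would be consistent with that study’s findings.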

So Kathy Hochul wants kids exposed to more misinfo?

As for the Data Protection Act, it will require age verification (since it says sites have to treat those under 18 differently), and, as we’ve seen with the rulings in California and Arkansas (not to mention multiple past Supreme Court rulings), that’s just blatantly unconstitutional as it ends up limiting adult access to content as well.

But it’s quite clear that the intent of these bills is not actually protecting kids, because any expert can tell you that these laws will do a great deal to harm kids. These laws are about getting the politicians pushing them positive headlines. And on that front, it’s already working. The NY Times gave them a big old headline, without an ounce of skepticism about whether the bills would actually protect kids.

Filed Under: age verification, andrew gounardes, arkansas, california, kathy hochul, new york, new york child data protection act, nily rozic, parental controls, protect the children, safe act

Nintendo Wii Doesn't Infringe On DVD Playing/Parental Control Patent

from the good-news dept

It’s nice to see a patent lawsuit go in the right direction. A judge in LA has tossed out a patent infringement lawsuit against Nintendo concerning parental controls on DVD players. The only problem? The Wii doesn’t play DVDs. Of course, Nintendo still faces a number of other patent infringement lawsuits, but at least this one was dealt with relatively quickly.

Filed Under: dvds, parental controls, patents, wii
Companies: nintendo