think of the children – Techdirt

Senate To Kids: We’ll Listen To You When You Agree With Us On KOSA

from the listen-to-the-children...-not-those-kids dept

Apparently, Congress only “listens to the children” when they agree with what the kids are saying. As soon as some kids oppose something like KOSA, their views no longer count.

It’s no surprise given the way things were going, but the Senate today overwhelmingly passed KOSA by a 91 to 3 vote. The three no votes were from Senators Ron Wyden, Rand Paul, and Mike Lee.

There are still big questions about whether the House will follow suit and, if so, how different its bill would be and how the two chambers’ bills would be reconciled. But this is a step closer to KOSA becoming law, and to creating all of the many problems people have been highlighting for years.

One thing I wanted to note, though, is how cynical the politicians supporting this have been. It’s become pretty typical for senators to roll out “example kids” as a kind of prop for why they have to pass these bills. They tell stories about horrible things that happened, but offer no clear explanation of how this bill would actually prevent those bad things, while totally ignoring the many other bad things the bill would cause.

In the case of KOSA, we’ve already highlighted how it would harm all sorts of information and tools that are used to help and protect kids. The most obvious example is LGBTQ+ kids, who often use the internet to help find their identity or to communicate with others who might feel isolated in their physical communities. Indeed, GOP support for KOSA was conditioned on the idea that the law would be used to suppress LGBTQ+-related content.

But I did find it notable how little attention was given last week, after all of the pro-KOSA team’s use of kids as props to push the bill, to the ACLU sending hundreds of students to Congress to tell lawmakers how much KOSA would harm them.

Last week, the American Civil Liberties Union sent 300 high school students to Capitol Hill to lobby against the Kids Online Safety Act, a bill meant to protect children online.

The teenagers told the staffs of 85 lawmakers that the legislation could censor important conversations, particularly among marginalized groups like L.G.B.T.Q. communities.

“We live on the internet, and we are afraid that important information we’ve accessed all our lives will no longer be available,” said Anjali Verma, a 17-year-old rising high school senior from Bucks County, Pa., who was part of the student lobbying campaign. “Regardless of your political perspective, this looks like a censorship bill.”

But somehow, that perspective gets mostly ignored in all of this.

It would have been nice to have had an actual discussion on the policy challenges here, but from the beginning, KOSA co-sponsors Richard Blumenthal and Marsha Blackburn refused to take any of the concerns about the bill seriously. They frequently insisted that any criticism of the bill was just “big tech” talking points.

And, while they made cosmetic changes to try to appease some, the bill does not (and cannot) fix its fundamental problems. The bill is, at its heart, a bill about censorship. And, while it does not directly demand censorship, the easiest and safest way to comply with the law will be to take down whatever culture war hot topic politicians don’t like.

It’s kind of incredible that many of those who voted for the bill today were big supporters of the Missouri case against the administration, including Eric Schmitt, who brought that suit as Missouri’s Attorney General and who voted in favor of KOSA today. So, apparently, according to Schmitt, governments should never try to influence how social media companies decide to take down content, but the government should also have the power to take enforcement action against companies that don’t take down content the FTC decides is harmful.

There is a tremendous amount of hypocrisy here. And it would be nice if someone asked the senators voting in favor of this law why they were going against the wishes of all the kids who visited the Hill last week. After all, that’s what the senators who trotted out kids on the other side tried to do to those few senators who pointed out the flaws in this terrible law.

Filed Under: child safety, kids, kosa, mike lee, rand paul, ron wyden, senate, think of the children

Nebraska Sues TikTok For Claiming To Be Family Friendly

from the the-biggest-scourge-is-the-dancing-kids-app dept

Another day, another dumb lawsuit against TikTok.

We’ve seen school districts and parents suing TikTok on the basis of extremely weird claims of “kids used TikTok, some bad stuff happened to kids, TikTok should be liable.”

But in the past year, it seems that a bunch of state AGs have decided to sue TikTok as well. I’ve lost track, but in the last year or so, I believe Arkansas, Indiana, Utah, Kansas, and Iowa have all sued TikTok using some theory or another about how “kids like this app” must violate some law. It’s impossible to keep up with the state of all of these lawsuits, though I know that Indiana’s lawsuit was tossed out, while somehow Arkansas’ has been allowed to proceed.

Perhaps buoyed by the success in Arkansas, Nebraska has now jumped into the fray, suing TikTok for being “deceptive” in claiming the platform is family friendly. You can read the full complaint here or embedded below.

Like the other lawsuits, this one is a hodgepodge of moral panic and conspiracy thinking. It has all the hits: mental health! eating disorders! suicide! China!

But the key to the complaint is this: because TikTok calls itself a family friendly app, while the moral panic narrative says “bad things happen to kids on TikTok,” Nebraska’s AG can claim that this is a “deceptive” practice by the company:

Despite such documented knowledge, Defendants continually misrepresent their platform as safe and appropriate for kids and teens, marketing the app as “Family Friendly” and suitable for users 12 and up, reassuring parents, educators, policymakers, and others….

Of course, the end result of nonsense lawsuits like this is that no website will ever try to be “family friendly” again, because if something bad then happens, AGs will go hog-wild in suing.

It’s so incredibly misdirected and short-sighted.

Like all the others, this is really a lawsuit about politicians not liking the content on TikTok, not liking that the kids these days like TikTok, and not liking that TikTok happens to be partially owned by a Chinese company.

But they can’t quite come out and say that, so they have to come up with some nonsense way of bringing these lawsuits. “Omg, we have to protect the children” appears to be one of the more popular ones, despite the near total lack of any evidence of any inherent harm in TikTok (or other social media) on kids.

Yes, some kids are suffering from mental health problems. And yes, there are some discussions on TikTok that are disturbing. But most of that disturbing content remains protected speech under the First Amendment. Just because people with mental health challenges use TikTok does not mean that TikTok magically causes those mental health challenges.

Like so many state AG cases, this whole thing is really about grandstanding by the AGs who hope to be elected the next governor or senator of their state.

This is yet another example of why KOSA is so dangerous. It empowers state AGs to sue websites in new ways. The state AGs have made it clear with most of these lawsuits that they don’t care what’s actually happening or what actually makes sense. They want headlines for taking on the big bad Chinese company that is supposedly poisoning our kids’ minds, and the name recognition that lines them up to run for higher office.

Filed Under: deceptive practices, family friendly, michael hilgers, nebraska, social media, think of the children
Companies: bytedance, tiktok

Tim Wu Asks Why Congress Keeps Failing To Protect Kids Online. The Answer Is That He’s Asking The Wrong Question

from the wrong-question,-dumb-answer dept

While I often disagree with Tim Wu, I like and respect him, and always find it interesting to know what he has to say. Wu was also one of the earliest folks to give me feedback on my Protocols, Not Platforms paper, when he attended a roundtable at Columbia University discussing early drafts of the papers in that series.

Yet, quite frequently, he perplexes me. The latest is that he’s written up a piece for The Atlantic wondering “Why Congress Keeps Failing to Protect Kids Online.”

The case for legislative action is overwhelming. It is insanity to imagine that platforms, which see children and teenagers as target markets, will fix these problems themselves. Teenagers often act self-assured, but their still-developing brains are bad at self-control and vulnerable to exploitation. Youth need stronger privacy protections against the collection and distribution of their personal information, which can be used for targeting. In addition, the platforms need to be pushed to do more to prevent young girls and boys from being connected to sexual predators, or served content promoting eating disorders, substance abuse, or suicide. And the sites need to hire more staff whose job it is to respond to families under attack.

Of course, let’s start there. The “case for legislative action” is not, in fact, overwhelming. Worse, the case that the internet harms kids is… far from overwhelming. The case that it helps many kids is, actually, pretty overwhelming. We keep highlighting this, but the actual evidence does not, in any way, suggest that social media and the internet are “harming” kids. It actually says the opposite.

Let’s go over this again, because so many people want to ignore the hard facts. Review after review of the research has failed to find evidence that social media is inherently harmful to kids.

What most of these reports did note is that there is some evidence that for some very small percentage of the populace who are already dealing with other issues, those issues can be exacerbated by social media. Often, this is because the internet and social media become a crutch for those who are not receiving the help that they need elsewhere. However, far more common is that the internet is tremendously helpful for kids trying to figure out their own identity, to find people they feel comfortable around, and to learn about and discover the wider world.

So, already, the premise is problematic that the internet is inherently harmful for children. The data simply does not support that.

None of this means that we shouldn’t be open to ways to help those who really need it. Or that we shouldn’t be exploring better regulations for privacy protection (not just for kids, but for everyone). But this narrative that the internet is inherently harmful is simply not supported by the data, even as Wu and others seem to pretend it’s clear.

Wu does mention the horror stories he heard from some parents while he was in the White House. And those horror stories do exist. But most of those horror stories are similar to the rare, but still very real, horror stories facing kids offline as well. We should be looking for ways to deal with those rare but awful stories, but that doesn’t mean destroying all of the benefits of online services in the meantime.

And that brings us to the second problem with Wu’s setup here. He then pulls out the “something must be done, this is something, we should do this” approach to solving the problem he’s already misstated. In particular, he suggests that Congress should support KOSA:

A bolder approach to protecting children online sought to require that social-media platforms be safer for children, similar to what we require of other products that children use. In 2022 the most important such bill was the Kids Online Safety Act (KOSA), co-sponsored by Senators Richard Blumenthal of Connecticut and Marsha Blackburn of Tennessee. KOSA came directly out of the Frances Haugen hearings in the summer of 2021, and particularly the revelation that social-media sites were serving content that promoted eating disorders, suicide, and substance abuse to teenagers. In an alarming demonstration, Blumenthal revealed that his office had created a test Instagram account for a 13-year-old girl, which was, within one day, served content promoting eating disorders. (Instagram has acknowledged that this is an ongoing issue on its site.)

The KOSA bill would have imposed a general duty on platforms to prevent and mitigate harms to children, specifically those stemming from self-harm, suicide, addictive behaviors, and eating disorders. It would have forced platforms to install safeguards to protect children and tools to enable parental supervision. In my view, the most important thing the bill would have done was simply force the platforms to spend more money and more ongoing attention on protecting children, or risk serious liability.

But KOSA became a casualty of the great American culture war. The law would give parents more control over what their children do and see online, which was enough for some groups to transform the whole thing into a fight over transgender issues. Some on the right, unhelpfully, argued that the law should be used to protect children from trans-related content. That triggered civil-rights groups, who took up the cause of teenage privacy and speech rights. A joint letter condemned KOSA for “enabl[ing] parental supervision of minors’ use of [platforms]” and “cutting off another vital avenue of access to information for vulnerable youth.”

It got ugly. I recall an angry meeting in which the Eating Disorders Coalition (in favor of the law) fought with LGBTQ groups (opposed to it) in what felt like a very dark Veep episode, except with real lives at stake. Critics like Evan Greer, a digital-rights advocate, charged that attorneys general in red states could attempt to use the law to target platforms as part of a broader agenda against trans rights. That risk is exaggerated. The bill’s list of harms is specific and discrete; it does not include, say, “learning about transgenderism,” but it does provide that “nothing shall be construed [to require a platform to prevent] any minor from deliberately and independently searching for, or specifically requesting, content.” Nonetheless, the charge had a powerful resonance and was widely disseminated.

This whole thing is quite incredible. KOSA did not become a “casualty of the great American culture war.” Wu, astoundingly, downplays that both the Heritage Foundation and the bill’s own Republican co-author, Marsha Blackburn, directly said that the bill would be helpful in censoring transgender information. For Wu to then say that the bill’s own co-author is wrong about what her own bill does is quite incredible.

He’s also wrong. While it is correct that the bill lists out six designated categories of harmful information that must be blocked, he leaves out that it’s the state attorneys general who get to decide what qualifies. And if you’ve paid attention to anything over the last decade, you know that state AGs are inherently political, and some of the most active “culture warriors” out there, quite willing to twist laws to their own interpretation to get headlines.

Also, even worse, as we’ve explained over and over again, laws like these that require “mitigation” of “harmful content” around things like “eating disorders” often fail to grapple with how that content actually works. As we’ve detailed, Instagram and Facebook made a big effort to block “eating disorder” content, and it backfired in a huge way. The issue wasn’t social media driving people to eating disorders, but people seeking out information on eating disorders (in other words, it was a demand-side problem, not a supply-side one).

So, when that content was removed, people with eating disorders still sought out the same content, and they still found it, either by using code words to get around the blocks or by moving to darker, even more problematic forums, where the people who ran them were way worse. And one result was that those users lost the actually useful forms of mitigation, which include people talking to the kids and helping them get the help they need.
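To make that supply-side problem concrete, here’s a toy sketch in Python (the one-term blocklist and post text are hypothetical, not anyone’s actual system) of why keyword-based blocking is so trivially routed around:

```python
# Toy example of naive supply-side keyword filtering.
# The blocklist here is hypothetical and deliberately tiny.
BLOCKED_TERMS = {"thinspo"}

def is_blocked(post_text: str) -> bool:
    """Block a post if it contains any term on the blocklist."""
    lowered = post_text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

assert is_blocked("new thinspo thread")      # the obvious post is caught
assert not is_blocked("new th1nsp0 thread")  # a trivial code word sails through
```

Every blocklist update just prompts the next code word, which is exactly the cat-and-mouse dynamic described above.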

So here we have Tim Wu misunderstanding the problem, misunderstanding the solution he’s supporting (such that he’s supporting an idea already shown to make the problem worse), and he asks why Congress isn’t doing anything? Really?

It doesn’t help that there has been no political accountability for the members of Congress who were happy to grandstand about children online and then do nothing. No one outside a tiny bubble knows that Wicker voted for KOSA in public but helped kill it in private, or that infighting between Cantwell and Pallone helped kill children’s privacy. I know this only because I had to for my job. The press loves to cover members of Congress yelling at tech executives. But its coverage of the killing of popular bills is rare to nonexistent, in part because Congress hides its tracks. Say what you want about the Supreme Court or the president, but at least their big decisions are directly attributable to the justices or the chief executive. Congressmen like Frank Pallone or Roger Wicker don’t want to be known as the men who killed Congress’s efforts to protect children online, so we rarely find out who actually fired the bullet.

Notice that Wu never considers that the bills might be bad and didn’t deserve to move forward?

But Wu has continued to push this, including on exTwitter, where he attacks those who have pointed out the many problematic aspects of KOSA, suggesting that they’re simply ignoring the harms. They’re not. Wu is ignoring the harms of what he supports.


Again, while I often disagree with Wu, I usually find he argues in good faith. The argument in this piece, though, is just utter nonsense. Do better.

Filed Under: children, congress, duty of care, eating disorders, kosa, privacy, social media, think of the children, tim wu

Ohio Lawmaker Wants To Criminally Charge Minors Who Watch Porn to Protect Minors. What?

from the that-logic-doesn’t-track dept

In the latest chapter of my lazy writing on the crazy escapades of anti-porn Republicans for Techdirt, I wish to introduce you to Ohio state Rep. Steve Demetriou, who represents Bainbridge Township.

Rep. Demetriou introduced the Innocence Act, or House Bill (HB) 295, on October 11, 2023.

I wrote about the bill over at AVN.com and for the Cleveland Scene. Gustavo Turner of XBIZ also covered House Bill 295. Cleveland.com provided us with some local coverage of the bill.

Rep. Demetriou’s bill is the latest proposal by an anti-porn lawmaker intending to require age verification to access adult entertainment websites like Pornhub, Xvideos, and xHamster.

HB 295 features the same elements as the other so-called “copycat” age verification proposals inspired by Louisiana, the first state in the United States to require that users from local IP addresses verify their age to see adult content. The copycat bills have escalated in severity, with Utah and Texas being two of the more severe cases. But it is a safe bet that Demetriou’s version takes the cake as the most severe age-gating bill yet.

According to the introduced bill text, House Bill 295 makes it a crime — a felony — for a website to fail to deploy age verification measures that check the ages of users from Ohio IP addresses.

Demetriou also proposes to make it a crime — a misdemeanor — for anyone who manages to get around an age-gate on a website through, say, a VPN or proxy. The bill explicitly mentions minors.

A press release announcing the bill states:

If this legislation is enacted, pornography distributors would be charged with a third-degree felony for failing to verify the age of a person accessing the adult content. If a minor attempts to access sexually explicit material by falsifying their identity, they would be charged with a fourth-degree misdemeanor.

Language in House Bill 295 confirms this:

Whoever violates…this section is guilty of failure to verify age of person accessing materials that are obscene or harmful to juveniles, a felony of the third degree.

Whoever violates…this section is guilty of use of false identifying information to access materials that are obscene or harmful to juveniles, a misdemeanor of the fourth degree.

Demetriou told Cleveland.com, the official web platform for The Plain Dealer newspaper, that this is a “common sense” approach to ensuring minors don’t circumvent an age gate. “Obviously, we’re not trying to target children with regards to criminal enforcement… but we want to make sure they’re protected,” Rep. Demetriou told Cleveland.com reporter Jeremy Pelzer. Demetriou said that the proposed criminal penalty targeting minors is meant as a “deterrent” rather than a law that would compel prosecutors to pursue criminal charges against teenagers for being teenagers.

House Bill 295 was referred to the House Criminal Justice Committee and is awaiting markup. Rep. Demetriou did tell Cleveland.com that he is open to cleaning up the “kinks in this bill.”

For the Cleveland Scene, criminal defense attorney Corey Silverstein told me the bill is, obviously, a bad idea.

“I can’t think of a worse idea than charging minors with criminal offenses for viewing adult content and potentially ruining their futures,” he told me in my Scene report. “Attempting to shame and embarrass minors for viewing adult-themed content goes so far beyond common sense that it begs the question of whether the supporters of this bill gave it any thought at all.”

Civil liberties organizations are already alarmed at the potential implications of age verification laws in other parts of the country. For example, the American Civil Liberties Union and others filed an amicus brief at the Fifth Circuit Court of Appeals supporting plaintiffs in Free Speech Coalition v. Colmenero. Texas adopted an age verification law requiring pseudoscientific public health labeling for adult websites.

404 Media’s Sam Cole pointed this out when Vixen Media, a premium network of paysites, began showing the so-called public health messaging to Texas users. The Free Speech Coalition, an advocacy group for the adult industry, sued Texas alongside companies that own some of the most popular adult entertainment websites in the world. The ACLU said that the Texas law overwhelmingly violates the First Amendment rights of adult sites and adult site users.

Attorney General Ken Paxton, having survived his impeachment, has been substituted as defendant for then-interim Attorney General Angela Colmenero. The case is now Free Speech Coalition et al. v. Paxton.

Michael McGrady covers the tech and legal sides of the online porn business, among other things. He is the legal and political contributing editor for AVN.com.

Filed Under: 1st amendment, age verification, criminalizing porn, hb 295, ohio, protect the children, steve demetriou, think of the children

Senator Brian Schatz Joins The Moral Panic With Unconstitutional Age Verification Bill

from the oh-stop-it dept

Senator Brian Schatz is one of the more thoughtful senators we have, and he and his staff have actually spent time talking to lots of experts in trying to craft bills regarding the internet. Unfortunately, it seems he still falls under the seductive sway of this or that moral panic, so when the bills actually come out, they’re perhaps more thoughtfully done than the moral panic bills of his colleagues, but they’re still destructive.

His latest is… just bad. It appears to be modeled on a bunch of the age verification moral panic bills that we’ve seen in both red states and blue states, though Schatz’s bill is much closer to the red state version, which is much more paternalistic and nonsensical.

His latest bill, with the obligatory “bipartisan” sponsor of Tom Cotton, the man who wanted to send our military into US cities to stop protests, is the “Protecting Kids On Social Media Act of 2023.”

You can read the full bill yourself, but the key parts are that it would push companies to use privacy intrusive age verification technologies, ban kids under 13 from using much of the internet, and give parents way more control and access to their kids’ internet usage.

Schatz tries to get around the obvious pitfalls with this… by basically handwaving them away. As even the French government has pointed out, there is no way to do age verification without violating privacy. There just isn’t. French data protection officials reviewed all the possibilities and said that literally none of them respect people’s privacy, and on top of that, it’s not clear that any of them are even that effective at age verification.

Schatz’s bill handwaves this away by basically saying “do age verification, but don’t do it in a way that violates privacy.” It’s like saying “jump out of a plane without a parachute, but just don’t die.” You’re asking the impossible.

I mean, clauses like this sound nice:

Nothing in this section shall be construed to require a social media platform to require users to provide government-issued identification for age verification.

But the fact that this was included kinda gives away that basically every age verification system has to rely on government-issued ID.

Similarly, it says that while sites should do age verification, they’re restricted from keeping any of the information as part of the process, but again, that raises all sorts of questions as to HOW you do that. This is “keep track of the location of this car, but don’t track where it is.” I mean… this is just wishful thinking.

The parental consent part is perhaps the most frustrating, and is a staple of the GOP state bills we’ve seen:

A social media platform shall take reasonable steps beyond merely requiring attestation, taking into account current parent or guardian relationship verification technologies and documentation, to require the affirmative consent of a parent or guardian to create an account for any individual who the social media platform knows or reasonably believes to be a minor according to the age verification process used by the platform.

Again, this is marginally better than the GOP bills in that it acknowledges sites need to “take into account” the current relationship, but that still leaves things open to mischief, especially as a “minor” in the bill is defined as anyone between the ages of 13 and 18: a period in which teens are discovering their own identities, often in ways that conflict with their parents’ views.

So, an LGBTQ child in a strict religious household with parents who refuse to accept their teens’ identity can block their kids entirely from joining certain online communities. That seems… really bad? And pretty clearly unconstitutional, because kids have rights too.

There’s also a prohibition on “algorithmic recommendation systems” for teens under 18. Of course, the bill ignores that reverse chronological order… is also an algorithm. So, effectively, the law requires RANDOM content be shown to teens.
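If that sounds like an exaggeration, consider a minimal sketch in Python (the post structure and names are hypothetical, not from the bill): even a plain reverse-chronological feed is produced by an algorithm, in that it picks a ranking criterion and orders content by it.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    timestamp: float  # seconds since the epoch

def reverse_chronological_feed(posts: list[Post]) -> list[Post]:
    # "Newest first" is itself an algorithmic rule: it deliberately
    # selects and orders what a user sees, just by recency.
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

feed = reverse_chronological_feed([
    Post("alice", "posted first", 1_690_000_000.0),
    Post("bob", "posted second", 1_690_000_060.0),
])
assert feed[0].author == "bob"  # the newer post ranks on top
```

Read literally, a ban on “algorithmic recommendation systems” sweeps this in too, which is how you end up at “random content.”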

It also ignores that algorithms are useful in filtering out the kind of information that is inappropriate for kids. I get that there’s this weird, irrational hatred for the dreaded algorithms these days, but most algorithms are… actually helpful in better presenting appropriate content to both kids and adults. Removing that doesn’t seem helpful. It actually seems guaranteed to expose kids to even worse stuff, since sites could no longer algorithmically remove the inappropriate content.

Why would they want to do that?

Finally, the bill creates a “pilot program” for the Commerce Department to establish an official age verification program. While they frame this as voluntary, come on. If you’re running a social media site and you’re required to do age verification under this bill (or other bills), are you going to use some random age verification offering out there, or the program set up by the federal government? Of course you’re going to go with the federal government’s, so that if you ever get in trouble, you can just say “well, we were using the program the government came up with, so we shouldn’t face any liability for its failures.”

Just like the “verify but don’t violate people’s privacy” is handwaving, so is the “this pilot program is voluntary.”

This really is frustrating. Schatz has always seemed much more reasonable and open-minded about this stuff, and it’s sad to see him fall prey to the moral panic about kids and the internet, even as the evidence suggests it’s mostly bullshit. I prefer senators who legislate based on reality, not panic.

Filed Under: age verification, algorithmic recommendations, brian schatz, kids and social media, moral panic, parental consent, think of the children, tom cotton

‘Lovejoy’s Law’ And Tech Moral Panics

from the helen-lovejoy-joy-killing dept

One of the central arguments for the recent rash of age verification laws across the country is to “protect the children.” Utah Gov. Spencer Cox called his signing of controversial social media laws a means of “protecting our kids from the harms of social media.” Arkansas Gov. Sarah Huckabee Sanders said in a press conference that her signing of the so-called Social Media Safety Act will help prevent the “massive negative impact on our kids.” Once he entered office, Sen. Josh Hawley said that loot boxes in popular video games placed “a casino in the hands of every child in America.” Louisiana State Rep. Laurie Schlegel called her unconstitutional bill requiring age verification to access pornography in the state a measure to counter how “pornography is destroying our children.” It all sounds the same.

“Won’t somebody please think of the children!” Do you know who said that? The one and only Helen Lovejoy, who utters the line constantly in the fictional animated universe of The Simpsons. If we recall our elementary school ‘Simpsons’-ology, Helen is the morally crusading wife of the town reverend, seeking to make the world a better place for the children. Or, at least, a better place for the children as defined by her worldview. Anything that pops Helen’s bubble of an ideal society for the small radioactive burg of Springfield, USA, is nothing more than a threat to the town’s morality. So Helen leads grassroots campaigns to demonize and ban the things threatening her ideal little bubble.

Some call this ‘Lovejoy’s law,’ further parodying the over-the-top caricature of a socially conservative moralist who believes that everything they disagree with is either the work of Satan or of woke leftists. In criminology and sociology, this sort of individual is referred to as a moral entrepreneur. Moral entrepreneurs take the lead in labeling a particular behavior or belief and spreading that label through society at large; they lead the construction of what they call criminally deviant or socially unacceptable behavior; and they organize at the grassroots level, like Mrs. Lovejoy, to establish and enforce rules against it. These fine folks perpetrate moral panic. Moral panic isn’t just a weird trope used by politicians and the punditry.

It’s a legitimate social phenomenon modeled by the world-renowned criminologist Stanley Cohen. Cohen created a series of sequential stages to understand the role of “folk devils”: societal outsiders, as labeled by the moral entrepreneurs who wish to do away with a particular class of individuals engaging in an identified behavior. Between the mass media, the moral entrepreneurs, a social control culture, and the general public, a moral panic can progress based on misinformed, disinformed, or outright false information about the targets of the panic – the aforementioned folk devils. Individuals like Spencer Cox, Sarah Huckabee Sanders, Josh Hawley, Laurie Schlegel – even the fictional Helen Lovejoy – qualify as “moral entrepreneurs” attempting to take down their targeted folk devils: big tech, legal porn, and free speech proponents advocating for speech they disagree with.

All of these individuals have axes to grind against the folk devils, and they grind them by proposing, passing, and implementing laws that sweep far more broadly than advertised while doing very little to resolve the crises these people have identified. This is standard for moral entrepreneurs. And this isn’t the first time technology – social media, age verification, cell phones, video games, online legal pornography, and excessive internet use, for example – has seen moral panic lead to public policies and socio-legal remedies that rise to the level of restricting basic civil liberties.

Let’s consider some brief historical examples of governmental responses to technological moral panic. The office of the U.S. Surgeon General released an evidence report in 1972 in response to concerns that televised violence was adverse to the public health of youth. The actual report, however, found that violent television doesn’t have an adverse effect on the vast majority of youth in the country, but may influence the very small group of youth who are predisposed to aggression or who are already aggressive.

But these groups are also influenced by a plethora of external and internal factors. Critics of the report still attempted to use the Surgeon General’s findings as further evidence that violent television negatively impacts youth, despite the fact that the peer-reviewed literature of the time said this risk applies only to the very small portion of youth with the predispositions already noted.

Decades later, concern over violence in video games also rose to the level of moral panic. The U.S. Supreme Court, of all institutions, responded to the political and legislative pressure to censor violent video games that had built since the 1990s by declaring that there is no clear connection between violence in real-life settings and the playing of video games with violent depictions. In fact, the American Psychological Association issued a policy statement telling news outlets that they “should avoid stating explicitly or implicitly that criminal offenses were caused by violent media” such as violent video games. Some studies have even correlated a reduction in violent crime with the rise of violent video gaming.

Internet pornography has a similar history. It is one of the longest-persisting moral panics, and the development of the web has only made the panic more prominent. Whether we discuss the moral panic over online porn that led to the proposal of the Communications Decency Act of 1996 or the current attempts to restrict pornography “in the name of the children” at the state level, the moral entrepreneurs have the same beliefs guiding their motivation to eliminate the folk devil, porn. They say that pornography is addictive. Or that pornography is somehow correlated to sex trafficking. Or that pornography leads to increased instances of sexual violence and sexually related criminal offenses. But, as was the case for violence on television and in video games decades before, the opposite is true. Pornography addiction isn’t recognized by mainstream psychiatry. The incidence of sexual violence is much lower in jurisdictions where legally produced pornography is widely available. Online porn is regulated, and there is little to no evidence to suggest that legally produced pornography is linked to trafficking.

All of these moral panics have led to some sort of political, legislative, or legal response where the moral entrepreneurs have lobbied their elected officials to push for policies that erode civil liberties and rights for people who are otherwise law-abiding, tax-paying, and productive members of society. Researchers Patrick M. Markey and Christopher J. Ferguson wrote on this issue for the American Journal of Play.

“Unfortunately, moral panics can be damaging,” Markey and Ferguson argue, adding that moral panics “can greatly damage the lives of individuals caught up in them.” Though they are writing about the panics over violent video games, the commonality of these statements is clear regardless of the actual issue.

Markey and Ferguson also point out that researchers and organizations with a particular special interest or agenda have used the moral panic to conduct ethically and scientifically questionable research to just inflame the public’s fear even more. We’ve seen this with bogus studies on so-called porn addiction, internet addiction, and so much more. Now, we are starting to see this with “social media addiction.”

I recently wrote a column for the Salt Lake Tribune criticizing Utah’s social media bills. In it, I discuss the claims Gov. Cox made with regard to social media’s harms to minors.

Cox said that his office will conduct “research” into the harms of social media use among minors. In the tradition of the great Helen Lovejoy, the socially conservative governor endorsed legislation that restricts access for minors based on a body of misguided and erroneous evidence. It’s this type of flawed research that gives moral entrepreneurs a supposed academic façade and further demonizes and damages the rights, welfare, and general wellbeing of the folk devils. Even when the folk devils are technology companies, there is this thing called the law of unintended consequences. Age-restriction laws on mainstream social media platforms can rightly be viewed as infringements on the First Amendment rights of users of all ages. Social media regulations on age will harm modern socialization norms for youth.

Despite what the Helen Lovejoys of the world think, folk devils – whether tech companies or individuals – have rights. Restricting those rights through moral-panic-driven lawmaking is unethical.

Michael McGrady is a journalist and commentator focusing on the tech side of the online porn business, among other things.

Filed Under: age verification, josh hawley, laurie schlegel, lovejoy's law, moral panics, sarah huckabee sanders, spencer cox, think of the children

UK Child Welfare Charity Latest To Claim Encryption Does Nothing But Protect Criminals

from the recoil-in-terror-as-mustachioed-Mr.-Encryption-ties-another-child-to-the-train-tracks dept

Once again, it’s time to end encryption… for the children. That’s the message being put out by the UK’s National Society for the Prevention of Cruelty to Children (NSPCC). And that message is largely regurgitated word-for-word by Sky News:

In what it is calling the “biggest threat to children online”, the NSPCC (National Society for the Prevention of Cruelty to Children) says safeguards need to be introduced so police can access the data if needed.

“The proposals to extend end-to-end encryption on messaging platforms mean they are essentially putting a blindfold on itself” says Andy Burrows, head of the NSPCC’s child safety online policy.

“No longer will the social network be able to identify child abuse images that are being shared on its site, or grooming that’s taking place on its site.

“Because abusers know they will be able to operate with impunity, therefore it means that not only will current levels of abuse go largely undetected, but it’s highly likely that we’ll see more child abuse.”

Is it the “biggest threat?” That doesn’t seem likely, especially when others who are concerned about the welfare of children say encryption is actually good for kids.

Here’s ConnectSafely, a nonprofit headed by Larry Magid, who is on the board of the National Center for Missing and Exploited Children (NCMEC), which operates a clearinghouse for child porn images that helps law enforcement track down violators and rescue exploited children:

Some worry that encryption will make it harder for law enforcement to keep us safe, but I worry that a lack of encryption will make it harder for everyone, including children, to stay safe.

Phones and other digital devices can contain a great deal of personal information, including your current and previous locations, home address, your contacts, records of your calls and your texts, email messages and web searches. Such information, in the hands of a criminal, can not only lead to a violation of you or your child’s privacy, but safety as well. That’s why it’s important to have a strong passcode on your phone as well as a strong password on any cloud backup services… But even devices with strong passwords aren’t necessarily hacker proof, which is why it’s important that they be encrypted.

And here’s UNICEF, which has long been involved with protecting children around the world:

There is no equivocating that child sexual abuse can and is facilitated by the internet and that end-to-end encryption of digital communication platforms appears to have significant drawbacks for the global effort to end the sexual abuse and exploitation of children. This includes making it more difficult to identify, investigate and prosecute such offences. Children have a right to be protected from sexual abuse and exploitation wherever it occurs, including online, and states have a duty to take steps to ensure effective protection and an effective response, including support to recover and justice.

At the same time, end-to-end encryption by default on Facebook Messenger and other digital communication platforms means that every single person, whether child or adult, will be provided with a technological shield against violations of their right to privacy and freedom of expression.

Despite this being far more nuanced than the NSPCC is willing to admit, it’s helping push legislation in the UK that would result in less child safety, rather than more. The Online Safety Bill would place burdens on communication services to prove they’re making every effort to prevent child exploitation. And “every effort” means stripping all users of end-to-end encryption, because this protective measure means no one but the sender and receiver can see their communications.
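For a sense of what that protection actually is, here’s a minimal sketch of end-to-end encryption using the PyNaCl library (the keys and message are hypothetical): the platform relaying the message only ever handles ciphertext it cannot read.

```python
from nacl.public import PrivateKey, Box

# Each user generates a keypair; private keys never leave their devices.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
sender_box = Box(alice_key, bob_key.public_key)
ciphertext = sender_box.encrypt(b"a private message")

# The platform relays `ciphertext`, but without a private key
# the bytes are opaque to it (and to anyone who compromises it).

# Only Bob, holding his private key and Alice's public key, can decrypt.
receiver_box = Box(bob_key, alice_key.public_key)
assert receiver_box.decrypt(ciphertext) == b"a private message"
```

That opacity is the whole point: the same property that “blindfolds” the platform is what protects kids’ messages from stalkers, abusers, and data thieves.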

The bill would make tech companies responsible for “failing” to police child porn — with “failure” determined by “aggravating factors” like, you guessed it, offering end-to-end encrypted communications.

The following questions will help ascertain key risk aggravating factors (this is not an exhaustive list). Do your services:

This legislation — and the cheerleading from entities like the NSPCC — doesn’t really do anything but turn tech companies into handy villains that are far easier (and far more lucrative) to punish than the actual abusers. There’s not going to be an influx of child porn just because communications are now encrypted. As critics of encryption have pointed out again and again, Facebook reports thousands of illegal images a year. So a lack of encryption wasn’t preventing the distribution of illicit images. Adding encryption to the mix is unlikely to change anything but how much gets reported by Facebook.

We all use encryption. (I mean, hopefully.) Having access to encrypted communications hasn’t nudged most people into engaging in criminal activity. That the same tech that protects innocent people is utilized by criminals doesn’t make the tech inherently evil. It’s the people that are evil.

As for law enforcement, it will still have plenty of options. Plenty of images will still be detected, because lots of people are lazy or ignorant and use whatever’s convenient rather than what’s actually secure. Encryption won’t exponentially increase the amount of illicit content circulating on the internet. If the FBI (and others) can successfully seize and operate dark web child porn sites, it’s safe to say law enforcement will still find ways to arrest suspects and rescue children, even if encryption makes it slightly more difficult to do so.

Filed Under: children's charity, encryption, going dark, think of the children, uk
Companies: nspcc

Biden's Top Tech Advisor Trots Out Dangerous Ideas For 'Reforming' Section 230

from the this-is-a-problem dept

It is now broadly recognized that Joe Biden doesn’t like Section 230 and has repeatedly shown he doesn’t understand what it does. Multiple people keep insisting to me, however, that once he becomes president, his actual tech policy experts will understand the law better, and move Biden away from his nonsensical claim that he wishes to “repeal” the law.

In a move that is not very encouraging, Biden’s top tech policy advisor, Bruce Reed, along with Common Sense Media’s Jim Steyer, has published a bizarre and misleading “but think of the children!” attack on Section 230 that misunderstands the law, misunderstands how it impacts kids, and suggests incredibly dangerous changes to Section 230. If these are the kinds of policy recommendations we’re to expect over the next four years, the need to defend Section 230 is going to remain pretty much the same as it’s been over the last few years.

Let’s break down the piece and its myriad problems.

Mark Zuckerberg makes no apology for being one of the least-responsible chief executives of our time. Yet at the risk of defending the indefensible, as Zuckerberg is wont to do, we must concede that given the way federal courts have interpreted telecommunications law, some of Facebook’s highest crimes are now considered legal.

Uh, wait. No. There’s a very sketchy sleight-of-word right here in the opening, claiming that “Facebook’s highest crimes are now considered legal.” That is wrong. Any law that Facebook violates, it is still held liable for. The point of Section 230 is that Facebook (and any website) should not be held liable for any laws that its users violate. Reed and Steyer seek to elide this very important distinction in a pure “blame the messenger” way.

It may not have been against the law to livestream the massacre of 51 people at mosques in Christchurch, New Zealand or the suicide of a 12-year-old girl in the state of Georgia. Courts have cleared the company of any legal responsibility for violent attacks spawned by Facebook accounts tied to Hamas. It’s not illegal for Facebook posts to foment attacks on refugees in Europe or try to end democracy as we know it in America.

This is more of the same. The Hamas claim is particularly bogus. The lawsuit in that case involved some plaintiffs who were harmed by Hamas… and decided that the right legal remedy was to sue Facebook because some Hamas members used Facebook. There was no attempt to even show that the injuries the plaintiffs faced had anything to do with Hamas using Facebook. The cases were tossed because Section 230 did exactly the right thing: note that the legal liability should be on the parties actually responsible. We don’t blame AT&T when a terrorist makes a phone call. We don’t blame Ford because a terrorist drives a Ford car. We shouldn’t blame Facebook just because a terrorist uses Facebook.

This is fairly basic stuff, and it is shameful for Reed and Steyer to misrepresent things in such a way that is designed to obfuscate the actual details of the legal issues at play, while purely pulling at heartstrings. But the heartstring-pulling was just beginning, because this whole piece shifts into the typical “but think of the children!” pandering quite quickly.

Since Section 230 of the 1996 Communications Decency Act was passed, it has been a get-out-of-jail-free card for companies like Facebook and executives like Zuckerberg. That 26-word provision hurts our kids and is doing possibly irreparable damage to our democracy. Unless we change it, the internet will become an even more dangerous place for young people, while Facebook and other tech platforms will reap ever-greater profits from the blanket immunity that their industry enjoys.

Of course, it hasn’t been a get-out-of-jail-free card for any of those companies. The law has never barred federal criminal prosecutions, as federal crimes are exempt from the statute. Almost every Section 230 case has been about civil disputes. It’s also shameful that Reed and Steyer seem to mix up the differences between civil and criminal law.

Also, I’d contest the argument that it’s Section 230 that has made the internet a dangerous place for kids or democracy. Section 230 has enabled many, many forums and spaces for young people to congregate and communicate — many of which have been incredibly important. It’s where many LGBTQ+ kids have found like minded people to discover they’re not alone. It’s where kids who are interested in niche areas or specific communities have found others with similar views. All of that is possible because of Section 230.

Yes, there is bullying online, and that’s a problem, but Section 230 has also enabled tremendous variation and competition in how different websites respond to that, with many creating quite clever ideas in how to deal with the downsides of purely open communication. Changing Section 230 will likely remove that freedom of experimentation.

It wasn’t supposed to be this way. According to former California Rep. Chris Cox, who wrote Section 230 with Oregon’s Sen. Ron Wyden, “The original purpose of this law was to help clean up the internet, not to facilitate people doing bad things on the internet.” In the 1990s, after a New York court ruled that the online service provider Prodigy could be held liable in the same way as a newspaper publisher because it had established standards for allowable content, Cox and Wyden wrote Section 230 to protect “Good Samaritan” companies like Prodigy that tried to do the right thing by removing content that violated their guidelines.

But through subsequent court rulings, the provision has turned into a bulletproof shield for social media platforms that do little or nothing to enforce established standards.

This is just flat-out wrong, and it’s embarrassing that Reed and Steyer are repeating this out-and-out myth. You will find no sites out there, least of all Facebook (the main bogeyman named in this article), “that do little or nothing to enforce established standards.” Facebook employs tens of thousands of content moderators and has a truly elaborate system for reviewing and modifying its ever-changing standards, which it tries to enforce.

We can agree that the companies may fail to catch everything, but that’s not because they’re not trying. It’s because it’s impossible. That was the very basis of 230: recognizing that an open platform is literally impossible to fully police, and 230 would enable sites to try different systems for policing it. What Reed and Steyer are really saying is that they don’t like how Facebook has chosen to police its platform. Which is a reasonable argument to make, but it’s not because of 230. It seems to be because Steyer and Reed are ignorant of what Facebook has actually done.

Facebook and other platforms have saved countless billions thanks to this free pass. But kids and society are paying the price. Silicon Valley has succeeded in turning the internet into an online Wild West – nasty, brutal, and lawless – where the innocent are most at risk.

Bullshit. Again, Facebook employs tens of thousands of moderators and actually takes a fairly heavy hand in its moderation practices. To say that this is a “Wild West” is to express near total ignorance about how content moderation actually works at Facebook. Facebook spends more on moderation than Twitter makes in revenue. To say that it’s “saving billions” thanks to this “free pass” is to basically say that you don’t know what you’re talking about.

The smartphone and the internet are revolutionary inventions, but in the absence of rules and responsibilities, they threaten the greatest invention of the modern world: a protected childhood.

This is “but think of the children” moral panicking. Yes, we should be concerned about how children use social media, but Facebook, like most other sites, doesn’t allow users to have accounts if they’re under 13 years old, and the problem being discussed is not about 230, but rather about teaching children how to be more discerning digital citizens when they’re online. And this is important, because it’s a skill they’ll need to learn. Trying to shield them from absolutely everything — rather than giving them the skills to navigate it — is a dangerous approach that will leave kids unprepared for life on the internet.

But Reed and Steyer are all in on the “think of the children” moral panic… so much so that they (and I only wish I were joking) compare children using social media… to child labor and child trafficking:

Since the 19th century, economic and technological progress enabled societies to ban child labor and child trafficking, eliminate deadly and debilitating childhood diseases, guarantee universal education and better safeguard young children from exposure to violence and other damaging behaviors. Technology has tremendous potential to continue that progress. But through shrewd use of the irresponsibility cloak of Section 230, some in Big Tech have turned the social media revolution into a decidedly mixed blessing.

Oh, come on. Those things are not the same. This entire piece is a masterclass in extrapolating from a few worst-case scenarios and insisting that they’re happening much more frequently than they really are. Eventually, the piece gets to its suggestion on “what to do about it.” And the answer is… destroy Section 230 in a way that won’t actually help.

But treating platforms as publishers doesn’t undermine the First Amendment. On the contrary, publishers have flourished under the First Amendment. They have centuries of experience in moderating content, and the free press was doing just fine until Facebook came along.

That… completely misses the point. Publishers can review things because they see every bit of content that goes out in their publication. The reason why 230 treats sites that host third-party content differently from publishers publishing their own content is that the two things are not the same. And if websites had to review every bit of user content, like publishers do, then… we’d have many fewer spaces online where people can communicate. It would massively stifle speech online.

The tech industry’s right to do whatever it wants without consequence is its soft underbelly, not its secret sauce.

But it’s NOT a “right to do whatever it wants without consequence.” Not even remotely. The sites themselves cannot break the law. The sites have very, very strong motivations to moderate — including pressure from their own users (because if they don’t do the right thing, their users will go elsewhere), the press, and (especially) from advertisers. We’ve seen just in the past few months that advertisers pulling their ads from Facebook has been an effective tool in getting Facebook to rethink its policies.

The idea that because 230 is there, Facebook and other sites do nothing is a myth. It’s a myth that Reed and Steyer are exploiting to make you think that you have to “save the children.” It’s bullshit and they should be ashamed to peddle myths. But they lean hard into these myths:

Instead of acknowledging Facebook’s role in the 2016 election debacle, he slow-walked and covered it up. Instead of putting up real guardrails against hate speech, violence, and conspiracy videos, he has hired low-wage content moderators by the thousands as human crash dummies to monitor the flow. Without that all-purpose Section 230 shield, Facebook and other platforms would have to take responsibility for the havoc they unleash and learn to fix things, not just break them.

This is… not an accurate portrayal of anything. It’s true that Zuckerberg was initially reluctant to believe that Facebook had a role in 2016 (and there are still legitimate questions as to how much of an impact Facebook actually had, or whether it was just a convenient scapegoat for a poorly run Hillary Clinton campaign). But by 2017, Facebook had found religion and completely revamped its moderation processes regarding election content. Yes, it did hire thousands of content moderators. But it’s bizarre that Reed and Steyer finally admit this way down in the article, after paragraphs upon paragraphs insisting that Facebook does no moderation, doesn’t care, and doesn’t need to do anything.

But more to the point, if they don’t want Facebook to hire all those content moderators, but do want Facebook to stop all the bad stuff online… how the hell do they think Facebook can do that? The answer to them is the same as “wave a magic wand.” They say to take away Facebook’s 230 protections, like that will magically solve stuff. It won’t.

It would mean far more takedowns of content, including content from marginalized voices. It would mean Facebook would likely have to hire many more of those content moderators to review much more content. And, most importantly, it means that no competitor could ever be built to compete with Facebook, because Facebook would be the only company that could afford such compliance costs.

And, the article gets worse. Reed and Steyer point to FOSTA as an example of how to reform 230. Really.

So the simplest way to address unlimited liability is to start limiting it. In 2018, Congress took a small step in that direction by passing the Stop Enabling Sex Traffickers Act and the Allow States and Victims to Fight Online Sex Trafficking Act. Those laws amended Section 230 to take away safe harbor protection from providers that knowingly facilitated sex trafficking.

Right, and what was the result? It certainly didn’t do what the people promoting it expected. Craigslist shut down its personals section, clearing the field for Facebook to launch its own dating service. In other words, it gave more power to Facebook.

More importantly, it has been used to harm sex workers, putting many lives at risk, and to shut down places where adults could discuss sex, all while making it harder for police to find sex traffickers. The end result has actually been an increase, rather than a decrease, in online ads for sex.

In other words, citing FOSTA as a “good example” of how to amend Section 230 suggests whoever is citing it doesn’t know what they’re talking about.

Congress could continue to chip away by denying platform immunity for other specific wrongs like revenge porn. Better yet, it could make platform responsibility a prerequisite for any limits on liability. Boston University law professor Danielle Citron and Brookings Institution scholar Benjamin Wittes have proposed conditioning immunity on whether a platform has taken reasonable efforts to moderate content.

We’ve debunked this silly, silly proposal before. There are almost no sites that don’t do moderation. They all have “taken reasonable efforts” to moderate, except for perhaps the most extreme. Yet this whole article was about Facebook and YouTube — both of which could easily show that they’ve “taken reasonable efforts” to moderate content online.

So, if this is their suggestion… it would literally do nothing to address the “problems” they insisted exist at YouTube and Facebook. Instead, smaller sites would never get a chance to exist, because Facebook and YouTube would set the “standard” for how to deal with content moderation — just as the EU has now set YouTube’s expensive ContentID as “the standard” for any site dealing with copyright-covered content.

So this proposal does nothing to change Facebook or YouTube’s policies, but locks them in as the dominant players. How is that a good idea?

But Reed and Steyer suggest maybe going further:

Washington would be better off throwing out Section 230 and starting over. The Wild West wasn’t tamed by hiring a sheriff and gathering a posse. The internet won’t be either. It will take a sweeping change in ethics and culture, enforced by providers and regulators. Instead of defaulting to shield those who most profit, the United States should shield those most vulnerable to harm, starting with kids. The “polluter pays” principle that we use to mitigate environmental damage can help achieve the same in the online environment. Simply put, platforms should be held accountable for any content that generates revenue. If they sell ads that run alongside harmful content, they should be considered complicit in the harm. Likewise, if their algorithms promote harmful content, they should be held accountable for helping redress the harm. In the long run, the only real way to moderate content is to moderate the business model.

Um. That would kill the open internet. Completely. Dead. And it’s a stupid fucking suggestion. The “pollution” they are discussing here is First Amendment-protected speech. That is why thinking of it as analogous to pollution is so dangerous: they are advocating for government rules that would massively stifle free speech. And, again, the only companies that could absorb that kind of liability are the biggest ones already. It would destroy smaller sites. And it would destroy the ability for you or me to talk online.

There’s more in the article, but it’s all bad. That this is coming from Biden’s top tech advisor is downright scary. It is as destructive as it is ignorant.

Filed Under: bruce reed, content moderation, intermediary liability, jim steyer, joe biden, liability, moral panic, responsibility, section 230, think of the children
Companies: facebook, youtube

Senators Hawley & Feinstein Join Graham & Blumenthal In Announcing Bill To Undermine Both Encryption And Section 230

from the for-the-children dept

In late January, we had an analysis of an absolutely dreadful bill proposed by Senators Lindsey Graham and Richard Blumenthal — both with a long history of attacking the internet — called the EARN IT Act. The crux of the bill was that, in the name of “protecting the children,” it would drastically change Section 230 of the Communications Decency Act, making companies liable for “recklessly” failing to magically stop “child sexual abuse material” — opening them up to civil lawsuits for any such failures. Even worse, it would enable the Attorney General — who has made it quite clear that he hates encryption — to effectively force companies to build in security-destroying backdoors.

On Thursday, the EARN IT Act (Eliminating Abusive and Rampant Neglect of Interactive Technologies Act) was officially introduced with two additional awful Senators: from the Republican side there’s tech-hating Josh Hawley, and on the Democratic side, there’s encryption-hating Dianne Feinstein.

This version of the bill has a few changes from the draft that made the rounds before, but in effect it is trying to accomplish the same basic things: forcing companies to backdoor encryption or lose Section 230 protections, while at the same time opening up platforms to a wide range of lawsuits (a la what we’re seeing with FOSTA suits) from ambulance-chasing tort lawyers trying to shake down internet platforms for money, all while claiming to do so in the name of “protecting the children.”

Senator Ron Wyden, who co-authored Section 230 back in the 1990s, had the most succinct explanation of why the EARN IT Act is bad on multiple levels:

After the federal government spent years ignoring the law and millions of reports of the most heinous crimes against children, William Barr, Lindsey Graham and Richard Blumenthal are offering a deeply flawed and counterproductive bill in response.

This terrible legislation is a Trojan horse to give Attorney General Barr and Donald Trump the power to control online speech and require government access to every aspect of Americans’ lives. It is a desperate attempt to distract from the Justice Department’s failure to request the manpower, funding and resources to combat this scourge, despite clear direction from Congress more than a decade ago.

While Section 230 does nothing to stop the federal government from prosecuting crimes, these senators claim that making it easier to sue websites is somehow going to stop pedophiles.

This bill is a transparent and deeply cynical effort by a few well-connected corporations and the Trump administration to use child sexual abuse to their political advantage, the impact to free speech and the security and privacy of every single American be damned.

There are a number of key points here, starting with the fact that Barr’s DOJ has consistently failed to do what Congress mandated it to do in fighting child sexual exploitation. Any news story that fails to mention this is failing you by not explaining the context. Barr is looking for someone to blame for his own failures, and he’s picked on the politically convenient internet industry — while simultaneously getting to undermine the encryption he hates.

Another key point in the Wyden statement: much of the EARN IT Act is dubious and cynical, and, as Berin Szoka pointed out, it is likely to make stopping actual sexual exploitation that much more difficult:

“Perversely, the EARN IT Act makes it easier to sue websites than people who actually create and disseminate CSAM,” explained Szóka. “Facing potentially staggering civil liability means website providers will have no choice but to comply with the Commission’s nominally voluntary ‘best practices.’”

In that same link, Berin highlights another Constitutional problem with the Act, one that could make it much more difficult for law enforcement to track down those actually responsible for child porn. That would be a perverse end result, and not unlike what we’ve already seen with FOSTA, where police have been saying that the law has made it more difficult for them to investigate sex trafficking.

This is a bad bill, put forth for cynical reasons, wrapped in an insincere “protect the children” blanket — pushed for by a crew of companies that failed to innovate on the internet, and sponsored by Senators with a long history of making it clear that they will beat up on civil liberties and innovation at any opportunity.

Filed Under: cda 230, csam, dianne feinstein, earn it act, encryption, intermediary liability, josh hawley, lindsey graham, richard blumenthal, section 230, think of the children, william barr

Academic Consensus Growing: Phones And Social Media Aren't Damaging Your Kids

from the another-techno-panic dept

We’ve pointed out for a while now that every generation seems to have some sort of moral panic over whatever is popular among kids. You’re probably aware of more recent examples, from rock music to comic books to Dungeons and Dragons to pinball machines (really). Previous generations panicked over other things, like chess and the waltz. Given all that, for years we’ve urged people not to immediately jump on the bandwagon of assuming new technology must also be bad for kids. And, yet, so many people insist it is. Senator Josh Hawley has practically trademarked his claim that social media is bad for kids. Senator Lindsey Graham held a full hearing that was nothing but evidence-free moral panicking about social media and the children — and because of that he’s preparing a new law to completely upend Section 230 in the name of “protecting the children” from social media.

Not that it’s likely to stop grandstanding politicians, but over in academia, where people actually study these things, there’s a growing consensus that social media and smartphones aren’t actually bad for kids. While some academics made claims about potential harm a decade or so ago, none of their predictions have proven accurate. Some of those academics have since revised their earlier research, and in one case even admitted that they caused an unnecessary panic:

The debate about screen time and mental health goes back to the early days of the iPhone. In 2011, the American Academy of Pediatrics published a widely cited paper that warned doctors about “Facebook depression.”

But by 2016, as more research came out, the academy revised that statement, deleting any mention of Facebook depression and emphasizing the conflicting evidence and the potential positive benefits of using social media.

Megan Moreno, one of the lead authors of the revised statement, said the original statement had been a problem “because it created panic without a strong basis of evidence.”

A few different “studies of studies” are showing that there’s little to no evidence to support harm from these popular technologies.

The latest research, published on Friday by two psychology professors, combs through about 40 studies that have examined the link between social media use and both depression and anxiety among adolescents. That link, according to the professors, is small and inconsistent.

“There doesn’t seem to be an evidence base that would explain the level of panic and consternation around these issues,” said Candice L. Odgers, a professor at the University of California, Irvine, and the lead author of the paper, which was published in the Journal of Child Psychology and Psychiatry.

There’s a lot more in that NY Times article, or you can read through pretty much all of the recent academic research on the topic.

Of course, the real question is just how silly Senators Hawley, Graham, and others will look as they continue to insist that social media and phones are harming the children.

Filed Under: democracy, kids, moral panic, social media, techno-panic, think of the children