richard blumenthal – Techdirt

Schumer Advances KOSA: Congress’s Latest ‘But Think Of The Children’ Crusade

from the think-of-the-children...-eventually-electing-better-politicians dept

Apparently, the only time Congress can come together and agree on something is when it involves giving whoever is President the power to censor speech online. That's the only conclusion I can come to regarding the widespread support for KOSA (the Kids Online Safety Act), which Senator Chuck Schumer has announced will be coming to the floor for a vote.

Our elected officials have been told again and again why KOSA is a dangerous bill that will enable targeted censorship of protected speech. They continue to push it forward and insist that it would never be abused. And, yes, the "updated" version of KOSA from earlier this year is better than its predecessors, but it's still a censorship bill.

The bill retains its "duty of care" section, which the FTC can enforce. It requires websites to "exercise reasonable care" in the design of features to avoid harm. But harm remains a risk, often through no fault of any particular platform. We constantly see websites blamed for problematic decisions made by users. But users are always going to make problematic decisions, and under KOSA, whoever is in charge of the FTC can rake a company over the coals by claiming it failed to meet that duty of care.

It seems strange that Republicans, who profess to hate Lina Khan, now want to give her the power to go after Elon Musk's ExTwitter for failing to properly protect users. But that's what they'll do.

On the flip side, why are Democrats giving a potential future Trump FTC the power to go after any website that is too “woke” by enabling LGBTQ content and thus failing its “duty of care” to protect the children?

Like so many powerful would-be censors, they only think about how exciting that censorship power will be in their own hands, and not in the hands of their political opponents.

Schumer is also bringing COPPA 2.0 to the floor. As we explained earlier this year, COPPA 2.0 basically takes the already problematic COPPA and makes it much worse. It might not be as inherently harmful as KOSA, but it’s still pretty harmful.

For one, this is just going to lead to more sites trying to ban teenagers from using their services entirely, since it raises COPPA's age threshold from 13 to 16… and that will just mean more teens being taught to lie about their age.

Second, it effectively mandates privacy-destroying age verification by banning targeted ads to kids. But how do you know they’re kids unless you verify their ages? This idea is so short-sighted. The only way to ban “targeted” ads based on collected data is to first… collect all the same data. That seems like a real issue.

In addition, it will change the important “actual knowledge” standard for covered platforms (which is kinda necessary to keep it constitutional) to a “reasonably likely to be used” standard, meaning that even if websites make every effort to keep kids off their platform, all an enforcer needs to do is argue that they haven’t done enough because the platform was “reasonably likely to be used by” kids.

Both of these are “do something” bills. “Here’s a problem, we should do something, this is something.” They are something. They won’t help solve the problems, and are quite likely to make them worse.

But, politicians want the headlines about how they're "protecting the children," which is exactly the false framing the big news orgs will dutifully repeat. What they should be noting is that these bills are about politicians cynically using children as props to pretend to do something.

Senators Marsha Blackburn (who said quite clearly that she wrote KOSA to “protect children from the transgender”) and Richard Blumenthal (who has made it clear that he’d just as soon kill the internet if it got him headlines) put out an obnoxious, exploitative statement about how this will save the children, when it will actually do tremendous harm to them.

Some questions remain about what will happen on the House side, as Speaker Mike Johnson has said they'll look over whatever the Senate sends. But the existing House version of KOSA, while somewhat different from the Senate version, is equally problematic.

If you'd like to reach out to your elected officials in Congress about these bills, Fight for the Future has the StopKOSA website that includes a way to send emails. And EFF also has its own action center to contact your elected officials regarding KOSA.

Filed Under: censorship, chuck schumer, congress, coppa 2.0, duty of care, ftc, kosa, marsha blackburn, protect the children, richard blumenthal

Senator Durbin Petulantly Promises To Destroy The Open Internet If He Doesn’t Get His Bad ‘Save The Children’ Internet Bill Passed

from the must-we-do-this-again? dept

Last week, we wrote about Senator Dick Durbin going on the floor of the Senate and spreading some absolute nonsense about Section 230 as he continued to push his STOP CSAM Act. His bill has some good ideas mixed in with some absolutely terrible ones. In particular, the current language of the bill is a direct attack on encryption (though we're told that there are other versions floating around). It does this by removing Section 230 protections, enabling people to sue websites if they "intentionally, knowingly, recklessly, or negligently" host CSAM or "promote or facilitate" child sexual exploitation.

Now, sure, go after sites that intentionally and knowingly host CSAM. That seems easy enough (and is already illegal under federal law and not blocked by Section 230). But the fear is that using encryption could be seen as "facilitating" exploitation, and thus the offering of encrypted communications absolutely will be used by plaintiffs to file vexatious lawsuits against websites.

And rather than fixing the bill, Senator Durbin says he'll push for a full repeal of Section 230 if Congress won't pass his problematic legislation (for what it's worth, this is the same thing his colleague Lindsey Graham has been pushing for, and it looks like Graham has looped Durbin into this very dumb plan):

If Congress doesn’t approve kids’ online safety legislation, then it should repeal Communications Decency Act Section 230, Senate Judiciary Committee Chairman Dick Durbin, D-Ill., told us last week.

Ranking member Lindsey Graham, R-S.C., is seeking Durbin’s support for legislation… that would repeal Section 230, the tech industry’s shield against liability for hosting third-party content on platforms. Durbin told us he will see what happens with Judiciary-approved legislation on kids’ safety. “If we can’t make the changes with the bills we’ve already passed, 230 has to go,” Durbin said.

Durbin has already made it clear that he does not understand how Section 230 itself works. Last week, on the floor of the Senate, he ranted misleadingly about it while pushing for unanimous consent for STOP CSAM. He starts off with a tearjerker of a story about parents who lost children to terrible people online. But rather than blaming the terrible people, he seems to think that social media companies should wave a magic wand and stop bad people:

The emotion I witnessed during that hearing in the faces of survivors, parents, and family members were unforgettable. There were parents who lost their children to that little to the telephone that they were watching day in and day out.

They committed suicide at the instruction of some crazy person on the internet.

There were children there that had grown up into adults still haunted by the images that they shared with some stranger on that little telephone years and years ago.

So, first of all, as I’ve noted before, it is beyond cynical and beyond dangerous to blame someone’s death by suicide on any other person when no one knows for sure the real reason for taking that permanent, drastic step except the person who did it.

But, second, if someone is to blame, it is that “crazy person on the internet.” What Durbin leaves out is the most obvious question: was anything done to that “crazy person on the internet”?

And you think to yourself? Well, why didn’t they step up and say something? If those images are coming up on the Internet? Why don’t they do something about it? Why don’t they go to the social media site? And in many and most instances they did. And nothing happened and that’s a reason why we need this legislation.

So, a few things here: first off, his legislation is the STOP CSAM Act, yet he was talking about suicide. Those are… different things with different challenges? Second, the details absolutely matter here. If it is about CSAM, or even non-consensual intimate imagery (in most cases), every major platform already has a program for removing it.

You can find the removal-request pages for Google, Meta, Microsoft, and more. And there are organizations like StopNCII that are very successful in removing such content as well.

If it's actual CSAM, that's already very illegal, and companies will remove it as soon as they find out about it. So Durbin's claims don't pass the sniff test; they suggest something else was going on in the situations he describes, rather than demonstrating any need for his legislation.

We say… STOP CSAM Act says, we’ll allow survivors to child online sexual exploitation to sue the tech companies that have knowingly and intentionally facilitated the exploitation.

Again, which platforms are not actually already doing that?

In other words one young woman told the story. She shared an image of herself an embarrassing image of herself that haunted her for decades afterwards. She went to the website. That was that was displaying this and told them this is something I want to take down. It is embarrassment to me. It happened when I was a little girl and still I’m living with it even today. They knew that it was on this website because this young woman and her family proved it, and yet they did nothing, nothing let him continue to play this exploitation over and over again.

Why how to get away with that they asked, and many people asked, I thought we had laws in this country protecting children what’s going on? Well, there’s a Section 230 which basically absolves these companies these media companies from responsibility for what is displayed on their websites on their social media pages. And that’s exactly what we change here.

Again, none of this makes any sense. If the imagery was actually CSAM, then that’s very much illegal and Section 230 has nothing to do with it. Durbin should then be asking why the DOJ isn’t taking action.

From the vague description, again, it sounds like this wasn't actually CSAM, but rather simply "embarrassing" content. But "embarrassing" content is not against the law, and thus this law still wouldn't make any difference at all, because the content was legal.

So what situation does this law actually solve for? It’s not one involving Section 230 at all.

We say something basic and fundamental. If the social media site knowingly and intentionally continued to display these images, they’re subject to civil liability. They can be sued. Want to change this scene in a hurry? Turn the lawyers loose on them. Let them try to explain why they have no responsibility to that young woman who’s been exploited for decades. That’s what my bill works on. I’m happy to have co-sponsorship with Senator Graham and others. We believe that these bills this package of bill should come to the floor today.

Again, if it's actually CSAM, then it's a criminal issue and the responsibility is on law enforcement. Why isn't Durbin asking why law enforcement did nothing? Furthermore, all the major companies will report actual CSAM to NCMEC's CyberTipline, and most, if not all, of them will use some form of Microsoft's PhotoDNA to identify repeats of the content.
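For those curious how that repeat detection actually works: the general technique is perceptual hashing, where known-bad images get fingerprinted and new uploads are compared against a fingerprint database. Below is a minimal sketch of the idea. PhotoDNA itself is proprietary, so this uses the open-source ImageHash library as a stand-in, and the known-hash list and distance threshold are made up for illustration.

```python
# Illustrative sketch of hash-based re-upload detection, the general
# technique behind tools like PhotoDNA. PhotoDNA itself is proprietary;
# the open-source ImageHash library stands in here, and the known-hash
# list and threshold below are hypothetical.
from PIL import Image
import imagehash

# Perceptual hashes of previously confirmed illegal images
# (hypothetical values for illustration only).
KNOWN_HASHES = [imagehash.hex_to_hash("d1c4f0e8a2b39657")]

MAX_DISTANCE = 5  # Hamming-distance threshold for a "match" (tunable)

def matches_known_image(path: str) -> bool:
    """Return True if an upload perceptually matches a known image."""
    upload_hash = imagehash.phash(Image.open(path))
    # Perceptual hashes survive resizing and re-encoding, so matches
    # are judged by Hamming distance rather than exact equality.
    return any(upload_hash - known <= MAX_DISTANCE for known in KNOWN_HASHES)
```

The point is that detecting re-uploads of known illegal imagery is a solved, widely deployed problem, which is why Durbin's anecdote suggests a failure somewhere other than the law.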

So, if it's true that this young woman had exploitative imagery being passed around, as Durbin claims, it sounds like either (1) it wasn't actually illegal, in which case this bill would do nothing, or (2) there was a real failing by law enforcement and/or NCMEC and PhotoDNA. It's not at all clear how "turning the lawyers loose" for civil lawsuits fixes anything about that.

Again, Durbin seems to wholly misunderstand Section 230, issues related to CSAM, and how modern internet companies work. He switches at times from talk of suicide to embarrassing imagery to CSAM, without noting the fairly big differences between them.

And now he wants to get rid of Section 230 entirely? Why?

The Communications Daily story about Durbin’s plans also has some ridiculous commentary from other senators, including Richard Blumenthal, who never misses an opportunity to be the wrongest senator about the internet.

Passing kids’ online safety legislation is more realistic than obtaining a Section 230 repeal, Senate Privacy Subcommittee Chairman Richard Blumenthal, D-Conn., told us in response to Graham’s plans. Blumenthal introduced the Kids Online Safety Act with Sen. Marsha Blackburn, R-Tenn., …“Passing a repeal of Section 230, which I strongly favor, is far more problematic than passing the Kids Online Safety Act (KOSA), which has almost two-thirds of the Senate sponsoring,” said Blumenthal. “I will support repealing Section 230, but I think the more viable path to protecting children, as a first step, is to pass the Kids Online Safety Act.”

Of course Blumenthal hates 230 and wants it repealed. He’s never understood the internet. This goes all the way back to when he was Attorney General of Connecticut. He thought that he should be able to sue Craigslist for prostitution and blamed Section 230 for not letting him do so.

There are other dumb 230 quotes from others, including Chuck Grassley and Ben Ray Lujan (who is usually better than that), but the dumbest of all goes to Senator Marco Rubio:

Section 230 immunity hinges on the question of how much tech platforms are controlling editorial discretion, Senate Intelligence Committee ranking member Marco Rubio, R-Fla., told us. “Are these people forums or are they exercising editorial controls that would make them publishers?” he said. “I think there are very strong arguments that they’re exercising editorial control.”

I know that a bunch of very silly people are convinced this is how Section 230 works, but it's the exact opposite. The entire point of Section 230 is that it protects websites from liability for their editorial decision making. That's it. That's why 230 was passed. There is no "exercising editorial control" loophole that makes Section 230 not apply, because the entire point of the law was to let websites feel free to exercise editorial control to create the communities they wanted to support.

Rubio should know this, but so should the reporter for Communications Daily, Karl Herchenroeder, who wrote the above paragraph as if it was accurate, rather than completely backwards. Section 230 does not “hinge” on “how much tech platforms are controlling editorial discretion.” It hinges on “is this an interactive computer service or a user of such a service” and “is the content created by someone else.” That’s it. That’s the analysis. Editorial discretion has fuck all to do with it. And we’ve known this for decades. Anyone saying otherwise is ignorant or lying.
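To put a fine point on it, here is that two-part test written out as a predicate. This is a plain-language paraphrase of 47 U.S.C. § 230(c)(1) for illustration, not legal advice:

```python
def section_230_protects(is_provider_or_user: bool,
                         content_from_someone_else: bool) -> bool:
    """Paraphrase of the 47 U.S.C. 230(c)(1) immunity analysis.

    Note what is NOT a parameter: how much editorial control the
    defendant exercised. That factor never enters the analysis.
    """
    return is_provider_or_user and content_from_someone_else

# A platform curating and moderating third-party posts is protected:
assert section_230_protects(True, True)
# A site sued over content it authored itself is not:
assert not section_230_protects(True, False)
```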

In the year 2024, it is beyond ridiculous that so many senators do not understand Section 230 and just keep misrepresenting it, to the point of wishing to repeal it (and with it, the open internet).

Filed Under: ben ray lujan, chuck grassley, dick durbin, lindsey graham, marco rubio, richard blumenthal, section 230, stop csam

Prominent MAGA Supporter Is Worried New KOSA Won’t Suppress Enough LGBTQ Speech

from the thanks-for-reminding-everyone-the-true-purpose-of-kosa dept

By now you know that Senator Richard Blumenthal has released a new version of KOSA, the misleadingly named Kids Online Safety Act, that he pretends fixes all the problems. It doesn’t. It still represents a real threat to speech online, and in particular speech from LGBTQ users. This is why Blumenthal, a prominent Democrat, is putting out press releases including supportive quotes from infamous anti-LGBTQ groups like the Institute for Family Studies and the “American Principles Project” (one of the leading forces behind anti-trans bills across the US). Incredibly, it also has an approving quote from NCOSE, formerly known as “Morality in Media,” a bunch of prudish busybodies who believe all pornography should be banned, and who began life trying to get “salacious” magazines banned.

When a bill is getting supportive quotes from NCOSE, an organization whose entire formation story is based around an attempt to ban books, you know that bill is not good for speech.

Why is a Democratic Senator like Blumenthal lining up with such regressive, censorial, far-right nonsense peddlers? Well, because he doesn't give a shit that KOSA is going to do real harm to LGBTQ kids or violate the Constitution he swore an oath to protect: he just wants to get a headline or two claiming he's protecting children, without a single care about how much damage it will actually do.

Of course, as we noted, the latest bill does make it marginally more difficult to directly suppress LGBTQ content. It removes the ability of state Attorneys General to enforce one provision, the duty of care provision, though it still allows them to enforce other provisions and to sue social media companies if those state AGs feel the companies aren't complying with the law.

Still, at least some of the MAGA crowd feel that this move, making it marginally more difficult for state AGs to try to force LGBTQ content offline, means the bill is no longer worth supporting. Here's Charlie Kirk, a leading MAGA nonsense peddler who founded and runs Turning Point USA, whining that the bill is no longer okay, since it won't be as easy to use to silence LGBTQ folks:

[Screenshot of Charlie Kirk's post]

If you can’t read that, it’s Charlie saying:

The Senate is considering the Kids Online Safety Act (KOSA), a bill that looks to protect underage children from groomers, pornographers, and other predators online.

But the bill ran into trouble because LGBT groups were worried it would make it too easy for red state AGs to target predators who try to groom children into mutilating themselves or destroying themselves with hormones and puberty blockers.

So now, the bill has been overhauled to take away power from state AGs (since some of them might be conservatives who care about children) and instead give almost all power to the FTC, currently run by ultra-left ideologue Lina Khan. Sure enough, LGBT groups have dropped all their concerns.

We’ve seen this pattern before. What are the odds that this bill does zero to protect children but a lot to vaguely enhance the power of Washington bureaucrats to destroy whoever they want, for any reason?

If you can get past his ridiculous language, you can see that he’s (once again, like the Heritage Foundation and KOSA co-sponsor Senator Marsha Blackburn before him) admitting that the reason the MAGA crowd supports KOSA is to silence LGBTQ voices, which he falsely attacks as “groomers, pornographers, and other predators.”

He's wrong that the bill can't still be used for this, but he's correct that the bill now gives tremendous power to whoever is in charge of the FTC, whether it's Lina Khan… or whatever MAGA incel could be put in place if Trump wins.

Meanwhile, if Kirk is so concerned about child predators and groomers, it's odd you never see him call out the Catholic church. Or his former employee, who was recently sentenced to years in jail for his "collection" of child sexual abuse videos. Or the organization that teamed up with Turning Point USA to sponsor an event even though its CEO was convicted of "coercing and enticing" a minor. It's quite interesting that Kirk is so quick to accuse LGBTQ folks of "grooming" and "predation" when actual such people keep turning up around him, and he never says a word.

Either way, I'm curious whether watching groups like TPUSA freak out about this bill not being censorial enough of LGBTQ content will lead Republicans to get cold feet about supporting it.

At the very least, though, it’s a confirmation that Republican support for this bill is based on their strong belief that it will censor and suppress LGBTQ content.

Filed Under: 1st amendment, censorship, charlie kirk, free speech, kosa, lgbtq, richard blumenthal
Companies: american principles project, ncose, turning point usa

Senator Blumenthal Pretends To Fix KOSA; It’s A Lie

from the blumenthal's-lies-will-kill dept

As lots of folks are reporting, Senator Richard Blumenthal, this morning, released an updated version of the Kids Online Safety Act (KOSA). He and co-author Senator Marsha Blackburn are also crowing about how they've now increased the list of co-sponsors to 62 Senators, including Senators Chuck Schumer and Ted Cruz among others.

Blumenthal, as he always does, is claiming that all of the identified problems with KOSA are myths and that there's nothing to worry about with this bill.

He’s wrong.

He’s lying.

Senator Blumenthal has done this before. He did it with FOSTA and people died because of him. Yet, he won’t take responsibility for his bad legislation.

And this is bad legislation that will kill more people. Senator Blumenthal is using children as a prop to further his political career at the expense of actual children.

Blumenthal and his staff know this. There was talk all week that the revised bill was coming out today. Normally, senators share such bills around for analysis. They'll often share a "redline" of the bill so people can analyze what's changed. Blumenthal shared this one only with his closest allies, so they could mount a full-court press this morning claiming the bill is perfect now, while people who actually understand this shit had to spend the morning creating a redline to see what was different from the previous bill and to analyze what problems remain.

The key change was to kill the part that allowed state Attorneys General to be the arbiters of enforcing what was "harmful," which tons of people pointed out would allow Republican state AGs to claim that LGBTQ content was "harmful." Indeed, that provision was a big part of the appeal to Republicans, who publicly admitted it would be used to stifle LGBTQ content.

Now, that "duty of care" section no longer applies to state AGs (who can still enforce other parts of the bill, which remain a problem). Instead, the FTC is given the power regarding this section, but as we explained a few months back, that's still a problem, and it's clear how it can be abused. If Donald Trump wins in the fall and installs a new MAGA FTC boss, does anyone think this new power won't be abused to claim that LGBTQ content is "harmful" and that companies have a "duty of care" to protect kids from it?

It also does not fully remove state AGs. They still have enforcement power over other aspects of the bill, including the requirement that platforms put in place "safeguards for minors" as well as the mandated "disclosures" regarding children.

The new version of the bill also pares back the duty of care section a bit, but not in a useful way. It is now much less certain what websites need to do to "exercise reasonable care," which means that sites will aggressively block content to avoid even the risk of liability.

And, of course, nothing in this bill works unless websites embrace age verification, which has already been repeatedly deemed unconstitutional as an infringement of the rights of kids, adults, and websites. There is some other nonsense about "filter bubbles" that appears to require a chronological feed (even as research has shown chronological feeds lead people to see more false information).

Anyway, the bill is still problematic. If Blumenthal were actually trying to solve the problems of the bill he might have shared it with actual critics, rather than keeping it secret. But, the goal is not to fix it. The goal is to get Blumenthal on TV to talk about how he’s saving kids, even as he’s putting them at risk.

And Blumenthal's "Fact v. Fiction" attempt to pre-bunk the concerns is just full of absolute nonsense. It says that KOSA doesn't give AGs or the FTC "the power to bring lawsuits over content or speech." But that's misleading. As we keep seeing, people are quick to blame platforms as being responsible for "features" or "design choices" that are really about the content found via those features or design choices. It is easy to bring an enforcement action that pretends to be about design but is really about speech.

Also, the bill enables the FTC to designate "best practices" regarding kid safety, and what site is going to risk the liability of not following those "best practices"? And we've already seen the last Trump administration pressure agencies like the FCC and FTC to take on culture war issues. There's no way it won't happen again.

And this one really gets me. Blumenthal claims that no one should be concerned about the duty of care, while giving us all the reasons to be concerned:

The “duty of care” requires social media companies to prevent and mitigate certain harms that they know their platforms and products are causing to young users as a result of their own design choices, such as their recommendation algorithms and addictive product features. The specific covered harms include suicide, eating disorders, substance use disorders, and sexual exploitation.

For example, if an app knows that its constant reminders and nudges are causing its young users to obsessively use their platform, to the detriment of their mental health or to financially exploit them, the duty of care would allow the FTC to bring an enforcement action. This would force app developers to consider ahead of time where theses nudges are causing harms to kids, and potentially avoid using them.

“Theses [sic] nudges” indeed, Senator Finsta.

But, here's the issue: how do you separate out things like "nudges" from the underlying content? Is it a "nudge" or is it a notification that your friend messaged you? As we've detailed specifically in the area of eating disorders, when sites tried to mitigate the harms by limiting access to that content, it made things even worse for people, because (1) it was a demand-side problem from the kids, not a supply-side problem from the sites, and (2) by trying to stifle that kind of content, it took kids away from helping resources and pushed them to riskier content.

This whole thing is based on a myth that social media is the cause of eating disorders, suicides, and other things, when the evidence simply does not support that claim at all.

The “fact vs. fiction” section is just full of fiction. For example:

No, the Kids Online Safety Act does not make online platforms liable for the content they host or choose to remove.

That’s a fun one to say, but it only makes sense if you ignore reality. Again, in this very section (as detailed above), Blumenthal is quick to conflate potential harms from content (i.e., eating disorder, suicide, etc.) with harms of design choices. Given that Blumenthal himself confuses the two, it’s rich that he thinks those things are somehow cabined off from each other within the law.

Indeed, all the FTC or a state AG is going to need to do is claim that an increase in suicides or other problems is caused by "features" on the site. To avoid that risk and liability, sites will face pressure to remove the content, since they know damn well it's the content, not the features, that is the real target.

And, as the eating disorder case study found, because this is a demand-side issue, kids will just find other places to continue discussing this stuff, with less oversight, and much more risk involved. People will die because of this bill.

Another lie:

No, the Kids Online Safety Act does not impose age verification requirements or require platforms to collect more data about users (government IDs or otherwise).

In fact, the bill states explicitly that it does not require age gating, age verification, or the collection of additional data from users.

This is wrong. First of all, the bill does set up a study on age verification, which is a prelude to directly requiring age verification.

But, more importantly, the bill does require “safeguards for minors” and the “duty of care” for minors, and the only way you know if you have “minors” on your site is to age verify them. And the only way to do that is to collect way more information on them, which puts their privacy at risk.

Blumenthal will claim that the bill only requires those things for users who are "known" to be minors, but that's going to lead sites either to put their heads in the sand so they know nothing about anyone (which isn't great) or to face a series of stupid legal fights over what it means to know whether or not a minor is on the site.

There's more, but KOSA is still a mess, and because everyone's asking my opinion of it and Blumenthal only gave early copies to friends, this first pass is what you get. Tragically, Blumenthal has strategically convinced a few LGBTQ groups to drop their opposition to the bill. They're not supporting it (as some have reported); rather, their letter says the groups "will not oppose" the new KOSA.

KOSA is still a bad bill. It will not protect children. It provides no more resources to actually protect children. It is an exercise in self-aggrandizement, feeding Blumenthal's desire to be hailed as a "savior" rather than looking for ways to actually solve real problems.

Filed Under: duty of care, enforcement, ftc, kosa, liability, marsha blackburn, nudges, protect the children, richard blumenthal

Snap Breaks Under Pressure, Supports Dangerous KOSA Bill That Will Put Kids In Danger

from the scabs dept

Over and over again, we see politicians browbeat companies until they agree to support terrible legislation. Back when FOSTA was being debated, there was tremendous pressure from the media and Congress for tech to support it, with false claims that without it the companies were enabling sex trafficking. Eventually, after a ton of pressure was put on the companies, Meta (then still Facebook) broke ranks with the rest of the industry and came out with full-throated support for the law. Congress used that support to claim that the tech industry was on board, and passed FOSTA.

And, of course, if you read Techdirt, you know what has happened since. FOSTA has been an unmitigated disaster. It has literally put lives at risk, has created a bunch of frivolous litigation (including against Meta, the very company that helped pass the law), has been useless in stopping sex trafficking (despite the media and politicians insisting it was necessary), and if anything has likely made the problem way worse.

But, we’re seeing the same playbook being run out with KOSA, the Kids Online Safety Act, which has broad bipartisan support in Congress, even as Republicans have made it clear they view it as a tool to silence LGBTQ+ content.

There’s yet another Congressional moral panic hearing happening this week, where the CEOs of Meta, Discord, Snap, TikTok, and ExTwitter will go to DC to get yelled at by very clueless but grandstandingly angry Senators. The whole point of this dog and pony show is to pretend they’re “protecting the children” online, when it’s been shown time and time again that they don’t actually care about the harm they’re doing, or what’s really happening online.

But, because of this, all the companies are looking for ways to make some sort of public claim about how "safe" they keep kids. ExTwitter made some announcements late last week, but Snap decided to go all in and issue a Facebook-style statement of support for KOSA.

A Snap spokesperson told POLITICO about the company’s support of Kids Online Safety Act. The popular messaging service’s position breaks ranks with its trade group NetChoice, which has opposed KOSA. The bill directs platforms to prevent the recommendation of harmful content to children, like posts on eating disorders or suicide.

Snap has been in a rough spot lately for a variety of reasons, including some very dumb lawsuits. Apparently the company feels it needs to make a splash, even if laws like KOSA will do more to put kids in danger than to help them. But, of course, they felt the need to cave to Congressional pressure. Not surprisingly, the censors-in-chief are thrilled with their first scalp.

KOSA co-sponsors Sens. Richard Blumenthal (D-Conn.) and Marsha Blackburn (R-Tenn.) applauded Snap’s endorsement. “We are pleased that at least some of these companies are seemingly joining the cause to make social media safer for kids, but this is long overdue,” they told POLITICO. “We will continue fighting to pass the Kids Online Safety Act, building on its great momentum to ensure it becomes law.”

Of course these two would cheer about this. Blackburn was the one who told a reporter how KOSA would be useful in silencing "the transgender." And Blumenthal simply hates the internet. He's been pulling exactly this kind of shit since he was Attorney General in Connecticut, when he forced Craigslist to close certain sections by fundamentally misrepresenting what Craigslist did. And that closure of parts of Craigslist has since been shown to have literally resulted in the deaths of women.

But Blumenthal has never expressed any concern or any doubt about his bills, even as he leaves a bloody trail in the wake of his legislating. KOSA will lead to much more harm as well, but its supporters have arm-twisted Snap into supporting it so that they get spared the worst of the nonsense on Wednesday.

Filed Under: ceos, grandstanding, hearing, kosa, marsha blackburn, protect the children, richard blumenthal
Companies: snap, snapchat

Blumenthal Thinks If Only The FTC Can Enforce KOSA It Won’t Be Abused; He’s Wrong

from the stop-regulating-what-you-don't-understand dept

It's pretty amazing to me just how wrong one Senator can be about the internet for years and years and years. But we've been writing about Senator Richard Blumenthal, and how he never, ever lets his own confusion about the internet get in the way of boldly making foolish claims about it, since before he was even Senator Blumenthal. Back in 2008, when he was simply clueless Connecticut Attorney General Richard Blumenthal, we had to try to explain the internet and Section 230 to him, and sometimes I feel like his ongoing vendetta against the internet stems from looking so foolish all the way back then.

I mean, the defining moment of Blumenthal's crusade to regulate the internet remains his "will you commit to ending Finsta" demand, which only cemented just how clueless many of our elected officials are about the internet.

But, really, Blumenthal’s defining moment of internet ignorance should be his role in passing FOSTA, legislation that has been roundly recognized as (1) not even remotely doing what Blumenthal promised us it would do and (2) instead harming many people while simultaneously shutting down speech of marginalized groups.

No one should trust Senator Blumenthal around literally anything having to do with regulating the internet. He is a danger to the public.

And, of course, he’s still pushing his follow up to FOSTA, called KOSA (the Kids Online Safety Act). As with FOSTA, those who don’t understand the internet are doomed to get people killed. KOSA has all sorts of problems, in that its “duty of care” provisions will force websites to remove information that politically motivated enforcers will claim is “harmful.” Republicans have actually been quite upfront about this, publicly saying they support Blumenthal’s KOSA because they want to use it to drive LGBTQ content offline. Senator Marsha Blackburn, Blumenthal’s partner in crime on KOSA, has directly said that KOSA is needed to “protect minor children from the transgender in our culture.”

Yet Blumenthal still refuses to back down. While he agreed to some changes in the law to try to limit its scope to six designated areas of content, it's not difficult to figure out how culture war enforcers could twist those categories to label speech, such as LGBTQ speech, as "harmful" to children. We're already seeing how Republican legislatures are doing exactly that.

The latest is that, in a paywalled article in Politico (thanks to a few of you who sent it over), Blumenthal (who denied there were any problems with the bill last year) says he's open to changing the enforcement mechanism in the bill, potentially removing the provision that allows any state AG to enforce the law (which would open it up to culture warrior AGs) and limiting enforcement to just the FTC or possibly some other federal agency.

In the piece, Blumenthal admits that “as a former AG” himself, he would prefer to keep the AG enforcement mechanism in the bill, but he’s open to some other enforcement authority if it will get the bill over the finish line.

But, of course, this implies that the FTC is somehow not prone to abuse by whoever is in charge at the time, and wouldn’t use this new power as a political weapon. I mean, we already have Republicans constantly whining about Lina Khan’s somewhat rogue leadership and case selection at today’s FTC.

And, then, if Trump were re-elected, does anyone actually think he wouldn’t install some culture war MAGA crony to run the FTC and use it to hammer “big tech” with lawsuits? I mean, of course, he’d use KOSA — pushed by Democrats like Blumenthal — to force companies to remove pro-LGBTQ content as “harmful to kids.” How is that even in question?

Remember, this is the same Trump who tried to get the FCC to do his bidding in removing social media companies' right to moderate. That only failed when the clock ran out on his administration.

So, no, handing authority over to the FTC (or any federal agency) won’t fix the problems of KOSA. The problems of KOSA are inherent to the bill. They’re inherent to Blumenthal’s near complete ignorance of how the internet actually works, and what happens when you create these types of laws.

There are ways the government could help make the internet safer for kids. But it involves the boring, less flashy (but actually effective) things that Blumenthal will never look to do, because they don’t get him headlines or big attention-grabbing hearings.

Filed Under: ftc, kosa, richard blumenthal, state ags

Josh Hawley Back To Try To Hotline His Awful AI/Section 230 Bill

from the this-bill-is-so-bad-i'd-almost-think-an-AI-hallucinated-it dept

Last week, we wrote about the potential for Senator Josh Hawley to “hotline” the bill that he put together with Senator Richard Blumenthal to remove Section 230 from anything touching artificial intelligence. As we noted at the time, even if you hate both generative AI technology and Section 230, the bill was so poorly drafted that it would create all kinds of problems for the internet.

While there were reports that Hawley would try to rush the bill through using the "unanimous consent" hotline process (which can be blocked by a single Senator objecting), it was unclear last week whether anyone would actually do the blocking (to be fair, it was also unclear whether a companion bill would make it through the House, but you don't want to let it get that far).

For whatever reason, we heard that Hawley decided to hold off until today, and there are now reports that he’ll push for the unanimous consent (basically avoiding a full vote and hoping that no one objects) today at 5:30pm ET/2:30pm PT. In other words, soon.

A very diverse group of organizations (who often don't agree with each other on much else), including the ACLU, the Competitive Enterprise Institute, Americans for Prosperity, and the Electronic Frontier Foundation, along with many others, have all signed a letter put together by TechFreedom, detailing the horrors this bill would create (our own Copia Institute also signed on).

We, the undersigned organizations and individuals, write to express serious concerns about the “No Section 230 Immunity for AI Act” (S. 1993). S. 1993 would threaten freedom of expression, content moderation, and innovation. Far from targeting any clear problem, the bill takes a sweeping, overly broad approach, preempting an important public policy debate without sufficient consideration of the complexities at hand.

Section 230 makes it possible for online services to host user-generated content, by ensuring that only users are liable for what they post—not the apps and websites that host the speech. S. 1993 would undo this critical protection, exposing online services to lawsuits for content whenever the service offers or uses any AI tool that is technically capable of generating any kind of new material. The now widespread deployment of AI for content composition, recommendation, and moderation would effectively render any website or app liable for virtually all content posted to them.

As the letter notes, the bill would cut off any debate regarding the proper relationship between generative AI output and Section 230 (something that’s been quite spirited over the last year or so). It would also create a world that greatly benefited vexatious and malicious actors:

A core function of Section 230 is to provide for the early dismissal of claims and avoid the “death by ten thousand duck-bites” of costly, endless litigation. This bill provides an easy end-run around that function: simply by plausibly alleging that GenAI was somehow involved with the content at issue, plaintiffs could force services into protracted litigation in hopes of extracting a settlement for even meritless claims.

And it includes examples of possible abuse that this law would enable:

Consider a musician who utilizes a platform offering a GenAI production tool to compose a song including synthesized vocals with lyrics expressing legally harmful lies (libel) about a person. Even if the lyrics were provided wholly by the musician, the conduct underlying the ensuing libel lawsuit would undoubtedly “involve the use or provision” of GenAI—exposing the tool’s provider to litigation. In fact, the tool’s provider could lose immunity even if it did not synthesize the vocals, simply because the tool is capable of doing so.

Like any tool, GenAI can be misused by malicious actors, and there is no sure way to prevent such uses—every safeguard is ultimately circumventable. Stripping immunity from services that offer those tools irrespective of their relation to the content does not just ignore this reality, it incentivizes it. The ill-intentioned, knowing that the typically deep pockets of GenAI providers are a more attractive target to the plaintiffs’ bar, will only be further encouraged to find ways to misuse GenAI.

Still more perversely, malicious actors may find themselves immunized by the same protection that S. 1993 strips from GenAI providers. Section 230(c)(1) protects both providers of interactive computer services and users from being treated as the publisher of third-party content. But S. 1993 only excludes the former from Section 230 protection. If Section 230 does indeed protect GenAI output to at least some degree as the proponents of this bill fear, the malicious user who manipulates ChatGPT into providing a defamatory response would be immunized for re-posting that content, while OpenAI would face liability.

This is a really important point. As the bill is currently worded, a malicious actor could deliberately use AI to try to defame someone, and they (the malicious actor) might be immune, while the provider of the generative AI tool they coaxed into writing the defamatory statement would be liable. That flips basic concepts of liability on their head.

There’s a lot more in the letter. Hopefully, even those supporting this bill recognize how half-baked it is. However, in the meantime, we still have to hope that at least one Senator out there recognizes its problems as well and stops the bill from moving forward in this manner (I won’t even get into whether or not any reporter is willing to ask either Hawley or Blumenthal why they’re pushing this monstrosity, because both have made it crystal clear that the answer is because they hate the internet and relish any opportunity to break it).

Filed Under: ai, generative ai, josh hawley, liability, richard blumenthal, section 230

Even If You Hate Both AI And Section 230, You Should Be Concerned About The Hawley/Blumenthal Bill To Remove 230 Protections From AI

from the a-blumenthal/hawley-specialty dept

Over the past few days I’ve been hearing lots of buzz claiming that either today or tomorrow Senator Josh Hawley is going to push to “hotline” the bill he and Senator Richard Blumenthal introduced months back to explicitly exempt AI from Section 230. Hotlining a bill is basically an attempt to move the bill quickly by seeking unanimous consent (i.e., no one objecting) to a bill.

Let me be extremely explicit: this bill would be a danger to the internet. And that’s even if you hate both AI and Section 230. We’ve discussed this bill before, and I explained its problems then, but let’s do this again, since there’s a push to sneak it through.

First off, there remains an ongoing debate over whether or not Section 230 actually protects the output of generative AI systems. Many people say it should not, arguing that the results come from the company in question, and are thus not third-party speech. Lawyer Jess Miers made the (to me) extremely convincing case as to why this is wrong.

In short, the argument is that courts have already determined that algorithmic output derived from content provided by others is protected by Section 230. This has been true in cases involving things like automatically generated search snippets and autocomplete. And that's kind of important, or we'd lose algorithmically generated summaries of search results.

From there, you now have to somehow distinguish “generative AI output” from “algorithmically generated summaries” and there’s simply no limiting principle here. You’re just arbitrarily declaring some algorithmically generated content “AI” and some of it… not?

I remain somewhat surprised that Section 230’s authors, Ron Wyden and Chris Cox, have enthusiastically supported the claim that 230 shouldn’t protect AI output. It seems wrong on the law and wrong on the policy as noted above.

Still, Senators Hawley and Blumenthal introduced this bill that would make a mess of everything, because it’s drafted so stupidly and so poorly that it should never have been introduced, let alone be considered for moving forward.

First of all, if Wyden and Cox and those who argue 230 doesn’t apply are right, then this bill isn’t even needed in the first place, because the law already wouldn’t apply.

But, more importantly, the way the law is drafted would basically end Section 230, but in the dumbest way possible. First the bill defines generative AI extremely broadly:

GENERATIVE ARTIFICIAL INTELLIGENCE.—The term ‘generative artificial intelligence’ means an artificial intelligence system that is capable of generating novel text, video, images, audio, and other media based on prompts or other forms of data provided by a person.’

That’s the entirety of the definition. And that could apply to all sorts of technology. Does autocomplete meet that qualification? Probably. Arguably, spellchecking and grammar checking could as well.
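To see just how sweeping that definition is, here is a deliberately trivial sketch: even a few lines of bigram-based autocomplete arguably "generate novel text… based on prompts or other forms of data provided by a person." (A toy illustration, not anyone's actual product.)

```python
import random
from collections import defaultdict

# A toy bigram "autocomplete": it learns word pairs from a corpus and
# continues whatever prompt a person provides.
corpus = "the cat sat on the mat the dog sat on the rug".split()
bigrams = defaultdict(list)
for prev_word, next_word in zip(corpus, corpus[1:]):
    bigrams[prev_word].append(next_word)

def autocomplete(prompt: str, length: int = 5) -> str:
    """Extend a user-provided prompt with statistically likely words."""
    words = prompt.split()
    for _ in range(length):
        options = bigrams.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

# This generates "novel text based on prompts provided by a person,"
# which appears to be all the bill's definition requires.
print(autocomplete("the cat"))
```

If the definition captures a fifteen-line toy like this, it certainly captures spellcheck and autocomplete, which is exactly the problem.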

But, again, even if you could tighten up that definition, you’d still run into problems. Because the bill’s exemption is insanely broad:

‘‘(6) NO EFFECT ON CLAIMS RELATED TO GENERATIVE ARTIFICIAL INTELLIGENCE.—Nothing in this section (other than subsection (c)(2)(A)) shall be construed to impair or limit any claim in a civil action or charge in a criminal prosecution brought under Federal or State law against the provider of an interactive computer service if the conduct underlying the claim or charge involves the use or provision of generative artificial intelligence by the interactive computer service.’’;

We need to break down the many problems with this. Note that the exemption from 230 here is not just on the output of generative AI. It’s if the conduct “involves the use or provision” of generative AI. So, if you write a post, and an AI grammar/spellchecker suggests edits, then the company is no longer protected by Section 230?

Considering that AI is currently being built into basically everything, this “exemption” will basically eat the entire law, because increasingly all content produced online will involve “the use or provision” of generative AI, even if the content itself has nothing to do with the service provider.

In short, this bill doesn’t just strip 230 protections from AI output, in effect it strips 230 from any company that offers AI in its products. Which is basically a set of internet companies rapidly approaching “all of them.” At the very least, plaintiffs will sue and claim that the content had some generative AI component just to avoid a 230 dismissal and drag the case out.

Then, because you can tell an AI-based system to do something that violates the law, you can automatically remove all 230 protections from the company. Over at R Street, they give an example where they deliberately convinced ChatGPT to defame Tony Danza.

And, under this law, doing so would open up OpenAI to liability, even though all it was doing was following the instructions of the users.

Then there’s a separate problem here. It creates a massive state law loophole. As we’ve discussed for years, for very good reasons, Section 230 preempts any state laws that would undermine it. This is to prevent states from burdening the internet with vexatious liability as a punishment (something that is increasingly popular across the political spectrum as both major political parties seek to punish companies for ideological reasons).

But, notice that this exemption deliberately carves out “state law.” That would open the floodgates to terrible state laws that introduce liability for anything related to AI, and again help to effectively strip any protections from companies that offer any product that has AI. It would enable a ton of mischief from politically motivated states.

The end result would harm a ton of internet speech, because when you add liability, you get less of the thing you add liability to. Companies would be way less open to hosting any kind of content, especially content that has any algorithmic component, as it opens them up to liability under this law.

It would also make many tools too risky to offer. Again, this could include things as simple as spelling and grammar checkers, as such tools might strip the companies and the content of any kind of 230 protection.

I mean, you could even see scenarios like this: if someone posted a defamatory post that included an unrelated generative AI image to Facebook, the defamed party could now sue Meta, rather than the person doing the defamation. Because the use of generative AI in the post would strip Meta of its 230 protections.

So, basically, under this law, anyone who wants to get any website in legal trouble just has to post something defamatory and include some generative AI content with it, and the company loses all 230 protections for that content. At the very least, this would lead companies to be quite concerned about allowing any content that is even partially generated by AI on their sites, but it's difficult to see how one would even police that.

Thus, really, you’re just adding liability and stripping 230 from the entire internet.

Again, even if you think AI is problematic and 230 needs major reform, this is not the way to do that. This is not a narrowly targeted piece of legislation. It’s a poorly drafted sledgehammer to the open internet, at least in the US. Section 230 was the key to the US becoming a leader in the original open internet. American companies lead the internet economy, in large part because of Section 230. As we enter the generative AI era, this law would basically be handing the next technology revolution to any other country that wants it, by adding ruinous liability to companies operating in the US.

Filed Under: generative ai, josh hawley, liability, richard blumenthal, section 230

Two Of The Absolute Worst Senators On Tech Policy Team Up To Put Together Terrible Ideas For AI Regulations

from the chatgpt-would-do-a-better-job dept

If asked to name the absolute worst Democratic and Republican Senators when it comes to technology and innovation policy, it would be difficult to come up with any worse than Richard Blumenthal from the Democratic side and Josh Hawley from the GOP side. Both have extremely long histories of absolutely terrible, free-speech-destroying, privacy-destroying ideas about the internet, going back to before either was in the Senate. When both of them were state Attorneys General (Blumenthal in Connecticut, Hawley in Missouri), both used baseless attacks on tech companies as a key way to get headlines and propel themselves into the Senate.

In the Senate, each has been even worse. Blumenthal gave us FOSTA and has been pushing a ton of other bad ideas, including the EARN IT Act, which, after denying it for a while, Blumenthal admitted is his plan to destroy encryption. Hawley, best known for raising a fist in support of insurrectionists he then ran away from, has had less success actually getting bills turned into law, but he has regularly pushed nonsense bills, such as a ban on TikTok. But many of his tech bills show that deep down inside, Hawley is really just jealous he could never make it working in tech as a product manager, so he has to use the power of the state (while decrying the power of the state) to force websites to work the way he would have designed them. Apparently, this abuse of state power to force companies to make design choices he approves of is Hawley's way of pretending he's masculine.

Anyway, Blumenthal and Hawley, who have never had a policy idea for tech that made any sense, think they’ve got AI regulations all figured out. Apparently… it’s an awful lot of government control, state power, and licenses. Very masculine.

The leaders of the Senate judiciary’s subcommittee for privacy, technology and law said in interviews on Thursday that their framework will include requirements for the licensing and auditing of A.I., the creation of an independent federal office to oversee the technology, liability for companies for privacy and civil rights violations, and requirements for data transparency and safety standards.

The full plan is to be revealed on Tuesday, but it’s likely that some of their framework meshes with the bill they already announced a few months ago, which would tie AI and Section 230 (which both of them hate) together, putting liability on AI companies, even for content created at the direction of users, not the companies themselves.

I get that AI tools seem big and scary, especially to folks like Hawley and Blumenthal (who both seem willing to fall for any and all moral panics around technology), but licensing, liability, and audits for a new emerging technology is exactly how you kill all domestic innovation around that technology, and hand it off to China.

This is not to deny that there are real risks associated with AI, because of course there are. But history has shown time and time again that in highly dynamic and emerging areas of innovation, no one is particularly good at accurately weighing the risks and benefits, and being overly proscriptive about a technology often means trying to prevent a harm that was never a real risk while limiting many of the benefits. There are better ways, ones that enable experimentation while being more aware of, and careful about, the consequences of that innovation.

But, doing that doesn’t get Blumenthal and Hawley headlines. Creating a licensing board so that only approved technologies are allowed does.

Filed Under: ai, audits, josh hawley, licensing, regulatory framework, richard blumenthal

Marsha Blackburn Makes It Clear: KOSA Is Designed To Silence Trans People

from the it's-a-fucking-censorship-bill,-wake-up dept

It still amazes me that KOSA has any Democratic co-sponsors, let alone 21 Democratic co-sponsors in the Senate led by lead Democratic sponsor (and embracer of any bill that will undermine the internet if it lets him ignorantly grandstand about how terrible the internet is), Senator Richard Blumenthal. This includes some big names who purport to be more "reasonable" Senators, like Brian Schatz and Chris Murphy.

As we noted, Republicans haven’t been shy that their plan for KOSA is to label all sorts of content they dislike as “harmful” and make sure that KOSA bars it from social media. The Heritage Foundation made it clear that KOSA would be useful in driving LGBTQ content offline.

Of course, it's one thing for a think tank (even one that has become the intellectual underpinning, if there is such a thing, of Trumpism) to say this. It's another for the bill's lead Republican sponsor to say it.

But, she has.

As first pointed out by Erin Reed, Senator Marsha Blackburn, who co-authored the bill with Senator Blumenthal, finally admitted what other GOP Senators had been keeping quiet: that they fully intend to use KOSA in their war against trans people:

In perhaps the clearest example of red flags against this law, one of the biggest sponsors of the bill, Senator Marsha Blackburn, stated that the bill would be used to “protect minor children from the transgender [sic] in our culture.”

She says it in this video, which actually goes further than the quote from Reed:

The full transcript is way worse. When asked by the “Family Policy Alliance” about the “top issue” that “conservatives” should be “taking action” on, Blackburn starts out with the line above, before talking about how social media is “exposing” kids to such content, suggesting that it’s turning the kids trans (which, um, is not how any of this works).

Well, there are a couple of things, of course, protecting minor children from the transgender and this culture and that influence. And I would add to that watching what’s happening on social media.

And I’ve got the kids online safety act that I think we’re going to end up getting through, probably this summer. This would put a duty of care and responsibility on the social media platforms.

And this is where children are being indoctrinated. They’re hearing things at school and then they’re getting onto YouTube to watch a video. And all of a sudden this comes to them, um, and they’re on Snapchat, or they’re on Instagram and they click on something and the next thing you know, they’re being inundated with it.

Parents need to be watching this. Teachers need to be watching and protecting our children and making certain that they are not exposed to things that they are emotionally not mature enough to handle.

The whole statement is a bit of a wandering word salad without complete thoughts, but it seems pretty clear that Senator Marsha Blackburn believes that YouTube, Snapchat, and Instagram are turning the kids trans, and that she's hoping KOSA will be useful in "protecting children" from that.

While most of the media ignored it when folks pointed out the Heritage Foundation saying flat out how KOSA would be used, at least some in the media are finally picking up on this now that Blackburn is being so upfront. NBC News had an article highlighting Blackburn’s comments, as did Mashable.

But will any of the mainstream media, maybe, ask Democratic Senators like Blumenthal and Schatz how they feel about this? Will they ask President Biden, who recently scolded Congress to “pass it, pass it, pass it” when asked about KOSA?

How hard is it for the NY Times or the Washington Post or CNN or whoever else to ask this question: “The sponsor of KOSA says that the bill will be used to ‘protect’ children from ‘the transgender.’ How can you support a bill that will be used to censor and harm transgender individuals?”

While we’re at it, can someone ask that of other supporters of the bill, like Dove Soap and Lizzo? Or how about Common Sense Media, who says they “strongly support” KOSA? I didn’t realize that Common Sense Media is anti-trans. Or that Senators Blumenthal, Schatz, Murphy, Durbin and others are. But if they’re supporting this bill, their actions make it clear that they have no problem passing bills that will be used to harm trans people.

Filed Under: kosa, lgbtq, marsha blackburn, richard blumenthal, transgender