dick durbin – Techdirt
Senator Durbin Petulantly Promises To Destroy The Open Internet If He Doesn’t Get His Bad ‘Save The Children’ Internet Bill Passed
from the must-we-do-this-again? dept
Last week, we wrote about Senator Dick Durbin going on the floor of the Senate and spreading some absolute nonsense about Section 230 as he continued to push his STOP CSAM Act. His bill has some good ideas mixed in with some absolutely terrible ideas. In particular, the current language of the bill is a direct attack on encryption (though we’re told that there are other versions floating around). The method by which it does so is removing Section 230 protections, enabling people to sue websites if they “intentionally, knowingly, recklessly, or negligently” host CSAM or “promote or facilitate” child sexual exploitation.
Now, sure, go after sites that intentionally and knowingly host CSAM. That seems easy enough (and is already illegal under federal law and not blocked by Section 230). But the fear is that using encryption could be seen as “facilitating” exploitation, and thus the offering of encrypted communications will absolutely be used by plaintiffs to file vexatious lawsuits against websites.
And rather than fixing the bill, Senator Durbin says he’ll push for a full repeal of Section 230 if Congress won’t pass his problematic bill (for what it’s worth, this is the same thing his colleague Lindsey Graham has been pushing for, and it looks like Graham has looped Durbin into this very dumb plan):
If Congress doesn’t approve kids’ online safety legislation, then it should repeal Communications Decency Act Section 230, Senate Judiciary Committee Chairman Dick Durbin, D-Ill., told us last week.
Ranking member Lindsey Graham, R-S.C., is seeking Durbin’s support for legislation… that would repeal Section 230, the tech industry’s shield against liability for hosting third-party content on platforms. Durbin told us he will see what happens with Judiciary-approved legislation on kids’ safety. “If we can’t make the changes with the bills we’ve already passed, 230 has to go,” Durbin said.
Durbin has already made it clear that he does not understand how Section 230 itself works. Last week, on the floor of the Senate, he ranted misleadingly about it while pushing for unanimous consent for STOP CSAM. He starts off with a tearjerker of a story about parents who lost children to terrible people online. But rather than blaming the terrible people, he seems to think that social media companies should wave a magic wand and magically stop bad people:
The emotion I witnessed during that hearing in the faces of survivors, parents, and family members were unforgettable. There were parents who lost their children to that little to the telephone that they were watching day in and day out.
They committed suicide at the instruction of some crazy person on the internet.
There were children there that had grown up into adults still haunted by the images that they shared with some stranger on that little telephone years and years ago.
So, first of all, as I’ve noted before, it is beyond cynical and beyond dangerous to blame someone’s death by suicide on any other person when no one knows for sure the real reason for taking that permanent, drastic step except the person who did it.
But, second, if someone is to blame, it is that “crazy person on the internet.” What Durbin leaves out is the most obvious question: was anything done to that “crazy person on the internet”?
And you think to yourself? Well, why didn’t they step up and say something? If those images are coming up on the Internet? Why don’t they do something about it? Why don’t they go to the social media site? And in many and most instances they did. And nothing happened and that’s a reason why we need this legislation.
So, a few things here: first off, his legislation is about stopping CSAM, yet he was talking about suicide. Those are… different things with different challenges? Second, the details absolutely matter here. If it is about CSAM, or even non-consensual intimate imagery (in most cases), every major platform already has a program in place to remove it.
You can find the pages for Google, Meta, Microsoft and more to remove such content. And there are organizations like StopNCII that are very successful in removing such content as well.
If it’s actual CSAM, that’s already very illegal, and companies will remove it as soon as they find out about it. So Durbin’s claims don’t pass the sniff test, and suggest something else was going on in the situations he’s describing, rather than demonstrating any need for his legislation.
We say… STOP CSAM Act says, we’ll allow survivors to child online sexual exploitation to sue the tech companies that have knowingly and intentionally facilitated the exploitation.
Again, which platforms are not actually already doing that?
In other words one young woman told the story. She shared an image of herself an embarrassing image of herself that haunted her for decades afterwards. She went to the website. That was that was displaying this and told them this is something I want to take down. It is embarrassment to me. It happened when I was a little girl and still I’m living with it even today. They knew that it was on this website because this young woman and her family proved it, and yet they did nothing, nothing let him continue to play this exploitation over and over again.
Why how to get away with that they asked, and many people asked, I thought we had laws in this country protecting children what’s going on? Well, there’s a Section 230 which basically absolves these companies these media companies from responsibility for what is displayed on their websites on their social media pages. And that’s exactly what we change here.
Again, none of this makes any sense. If the imagery was actually CSAM, then that’s very much illegal and Section 230 has nothing to do with it. Durbin should then be asking why the DOJ isn’t taking action.
From the vague and non-specific description again, it sounds like this wasn’t actually CSAM, but rather simply “embarrassing” content. But “embarrassing” content is not against the law, and thus, this law still wouldn’t make any difference at all, because the content was legal.
So what situation does this law actually solve for? It’s not one involving Section 230 at all.
We say something basic and fundamental. If the social media site knowingly and intentionally continued to display these images, they’re subject to civil liability. They can be sued. Want to change this scene in a hurry? Turn the lawyers loose on them. Let them try to explain why they have no responsibility to that young woman who’s been exploited for decades. That’s what my bill works on. I’m happy to have co-sponsorship with Senator Graham and others. We believe that these bills this package of bill should come to the floor today.
Again, if it’s actually CSAM then it’s a criminal issue and the responsibility is on law enforcement. Why isn’t Durbin asking why law enforcement did nothing? Furthermore, all the major companies will report actual CSAM to NCMEC’s cybertip line, and most, if not all, of them will use some form of Microsoft’s PhotoDNA to identify repeats of the content.
So, if it’s true that this young woman had exploitative imagery being passed around, as Durbin claims, it sounds like either (1) it wasn’t actually illegal, in which case this bill would do nothing, or (2) there was a real failure by law enforcement and/or NCMEC and PhotoDNA. It’s not at all clear how “turning the lawyers loose” for civil lawsuits fixes anything about that issue.
Again, Durbin seems to wholly misunderstand Section 230, issues related to CSAM, and how modern internet companies work. It’s not even clear from his speech that he understands the various issues. He switches at times from talk of suicide to embarrassing imagery to CSAM, without noting the fairly big differences between them all.
And now he wants to get rid of Section 230 entirely? Why?
The Communications Daily story about Durbin’s plans also has some ridiculous commentary from other senators, including Richard Blumenthal, who never misses an opportunity to be the wrongest senator about the internet.
Passing kids’ online safety legislation is more realistic than obtaining a Section 230 repeal, Senate Privacy Subcommittee Chairman Richard Blumenthal, D-Conn., told us in response to Graham’s plans. Blumenthal introduced the Kids Online Safety Act with Sen. Marsha Blackburn, R-Tenn., …“Passing a repeal of Section 230, which I strongly favor, is far more problematic than passing the Kids Online Safety Act (KOSA), which has almost two-thirds of the Senate sponsoring,” said Blumenthal. “I will support repealing Section 230, but I think the more viable path to protecting children, as a first step, is to pass the Kids Online Safety Act.”
Of course Blumenthal hates 230 and wants it repealed. He’s never understood the internet. This goes all the way back to when he was Attorney General of Connecticut. He thought that he should be able to sue Craigslist for prostitution and blamed Section 230 for not letting him do so.
There are other dumb 230 quotes from others, including Chuck Grassley and Ben Ray Lujan (who is usually better than that), but the dumbest of all goes to Senator Marco Rubio:
Section 230 immunity hinges on the question of how much tech platforms are controlling editorial discretion, Senate Intelligence Committee ranking member Marco Rubio, R-Fla., told us. “Are these people forums or are they exercising editorial controls that would make them publishers?” he said. “I think there are very strong arguments that they’re exercising editorial control.”
I know that a bunch of very silly people are convinced this is how Section 230 works, but it’s the opposite of this. The entire point of Section 230 is that it protects websites from liability for their editorial decision making. That’s it. That’s why 230 was passed. There is no “exercising editorial control” loophole that makes Section 230 not apply because the entire point of the law was to enable websites to feel free to exercise editorial control to create communities they wanted to support.
Rubio should know this, but so should the reporter for Communications Daily, Karl Herchenroeder, who wrote the above paragraph as if it was accurate, rather than completely backwards. Section 230 does not “hinge” on “how much tech platforms are controlling editorial discretion.” It hinges on “is this an interactive computer service or a user of such a service” and “is the content created by someone else.” That’s it. That’s the analysis. Editorial discretion has fuck all to do with it. And we’ve known this for decades. Anyone saying otherwise is ignorant or lying.
In the year 2024, it is beyond ridiculous that so many senators do not understand Section 230 and just keep misrepresenting it, to the point of wishing to repeal it (and with it, the open internet).
Filed Under: ben ray lujan, chuck grassley, dick durbin, lindsey graham, marco rubio, richard blumenthal, section 230, stop csam
Once Again, Google Caves To Political Pressure And Supports Questionable STOP CSAM Law
from the playing-political-games dept
It’s not surprising, but still disappointing, to see companies like Google and Meta, which used to take strong stands against bad laws, now showing a repeated willingness to cave on such principles in the interests of appeasing policymakers. It’s been happening a lot in the last few years and it’s happened again as Google has come out (on ExTwitter of all places) to express support for a mixed batch of “child safety” bills.
If you can’t see that screenshot, they are tweets from the Google Public Policy team, stating:
Protecting kids online is a top priority—and demands both strong legislation and responsible corporate practices to make sure we get it right.
We support several important bipartisan bills focused on online child safety, including the Invest in Child Safety Act, the Project Safe Childhood Act, the Report Act, the Shield Act, and the STOP CSAM Act.
We’ve talked about a couple of these bills. The Invest in Child Safety Act seems like a good one, from Senator Ron Wyden, as it focuses the issue where it belongs: on law enforcement. That is, rather than blaming internet companies for not magically stopping criminals, it equips law enforcement to better do its job.
The Shield Act is about stopping the sharing of nonconsensual sexual images and seems mostly fine, though I’ve seen a few concerns raised on the margins about how some of the language might go too far in criminalizing activities that shouldn’t be criminal. According to Senator Cory Booker last week, he’s been working with Senator Klobuchar on fixing those problematic parts.
And the Project Safe Childhood Act also seems perfectly fine. In many ways it complements the Invest in Child Safety Act in that it’s directed at law enforcement and focused on getting law enforcement to be better about dealing with child sexual abuse material, coordinating with other parts of law enforcement, and submitting seized imagery to NCMEC’s cybertip line.
But, then there’s the STOP CSAM bill. As we’ve discussed, there are some good ideas in that bill, but they’re mixed with some problematic ones. And, some of the problematic ones are a backdoor attack on encryption. Senator Dick Durbin, the author of the bill, went on a rant about Section 230 last week while trying to get the bill through on unanimous consent, which isn’t a great sign either, and suggests there are still issues with the bill.
In that rant, he talks about how cell phones are killing kids because of “some crazy person on the internet.” But, um, if that’s true, it’s a law enforcement issue and “the crazy person on the internet” should face consequences. But Durbin insists that websites should somehow magically stop the “crazy person on the internet” from saying stuff. That’s a silly and mistargeted demand.
In that rant, he also talked about the importance of “turning the lawyers loose” on the big tech companies to sue them for what their users posted.
You’d think that that would be a reason for a company like Google to resist STOP CSAM, knowing it’ll face vexatious litigation. But, for some reason, it is now supporting the bill.
Lots of people have been saying that Durbin has a new, better version of STOP CSAM, and I’ve seen a couple drafts that are being passed around. But the current version of the bill still has many problems. Maybe Google is endorsing a fixed version of the bill, but if so, it sure would be nice if the rest of us could see it.
In the meantime, Durbin put out a gloating press release about Google’s support.
“For too long, Big Tech used every trick in the book to halt legislation holding social media companies accountable, while still trying to win the PR game. I’m glad to see that some tech companies are beginning to make good on their word to work with Congress on meaningful solutions to keep children safe online. I encourage other tech companies to follow Google’s move by recognizing that the time for Big Tech to police itself is over and work with Congress to better protect kids.”
Can’t say I understand Google’s reasons for caving here. I’m sure there’s some political calculus in doing so. And maybe they have the inside scoop on a fixed version of Durbin’s bill. But to do so the day after he talks about “turning the lawyers loose” on websites for failing to magically stop people from saying stuff… seems really strange.
It seems increasingly clear that both Meta and Google, with their buildings full of lawyers, have decided that the strategic political move is to embrace some of these laws, even as they know they’ll get hit with dumb lawsuits over them. They feel they can handle the lawsuits and, as a bonus, they know that smaller upstart competitors will probably have a harder time.
Still, there was a time when Google stood on principle and fought bad bills. That time seems to have passed.
Filed Under: dick durbin, encryption, liability, section 230, stop csam
Companies: google
Once Again, Ron Wyden Had To Stop Bad “Protect The Children” Internet Bills From Moving Forward
from the saving-the-internet dept
Senator Ron Wyden is a one-man defense against horrible bills moving forward in the Senate. Last month, he stopped Josh Hawley from moving a very problematic STOP CSAM bill forward, and now he’s had to do it again.
A (bipartisan) group of senators traipsed to the Senate floor Wednesday evening. They tried to skip the line and quickly move some bad bills forward by asking for unanimous consent. Unless someone’s there to object, it effectively moves the bill forward, ending committee debate about it. Traditionally, this process is used for moving non-controversial bills, but lately it’s been used to grandstand about stupid bills.
Senator Lindsey Graham announced his intention to pull this kind of stunt on bills that he pretends are about “protecting the children” but which do no such thing in reality. Instead of it being just him, he rounded up a bunch of senators and they all pulled out the usual moral panic lines about two terrible bills: EARN IT and STOP CSAM. Both bills are designed to sound like good ideas about protecting children, but the devil is very much in the details, as both bills undermine end-to-end encryption while assuming that if you just put liability on websites, they’ll magically make child predators disappear.
And while both bills pretend not to attack encryption — and include some language about how they’re not intended to do so — both of them leave open the possibility that the use of end-to-end encryption will be used as evidence against websites for bad things done on those websites.
But, of course, as is the standard for the group of grandstanding senators, they present these bills as (1) perfect and (2) necessary to “protect the children.” The problem is that the bills are actually (1) ridiculously problematic and (2) will actually help bad people online in making end-to-end encryption a liability.
The bit of political theater kicked off with Graham having Senators Grassley, Cornyn, Durbin, Klobuchar, and Hawley talk on and on about the poor kids online. Notably, none of them really talked about how their bills worked (because that would reveal how the bills don’t really do what they pretend they do). Durbin whined about Section 230, misleadingly and mistakenly blaming it for the fact that bad people exist. Hawley did the thing that he loves doing, in which he does his mock “I’m a big bad Senator taking on those evil tech companies” schtick, while flat out lying about reality.
But Graham closed it out with the most misleading bit of all:
In 2024, here’s the state of play: the largest companies in America — social media outlets that make hundreds of billions of dollars a year — you can’t sue if they do damage to your family by using their product because of Section 230
This is a lie. It’s a flat out lie and Senator Graham and his staffers know this. All Section 230 says is that if there is content on these sites that violates the law, the liability falls on whoever created the content. If the features of the site itself “do damage,” then you can absolutely sue the company. But no one is actually complaining about the features. They’re complaining about content. And the liability for the content has to go to whoever created it.
The problem here is that Graham and all the other senators want to hold companies liable for the speech of users. And that is a very, very bad idea.
Now these platforms enrich our lives, but they destroy our lives.
These platforms are being used to bully children to death.
They’re being used to take sexual images and voluntarily and voluntarily obtain and sending them to the entire world. And there’s not a damn thing you can do about it. We had a lady come before the committee, a mother saying that her daughter was on a social media site that had an anti-bullying provisions. They complained three times about what was happening to her daughter. She killed herself. They went to court. They got kicked out by section 230.
I don’t know the details of this particular case, but first off, the platforms didn’t bully anyone. Other people did. Put the blame on the people actually causing the harm. Separately, and importantly, you can’t blame someone’s suicide on someone else when no one knows the real reasons. Otherwise, you actually encourage increased suicides, as it gives people an ultimate way to “get back” at someone.
Senator Wyden got up and, as he did last month, made it quite clear that we need to stop child sexual abuse and predators. He talked about his bill, which would actually help on these issues by giving law enforcement the resources it needs to go after the criminals, rather than the idea of the bills being pushed that simply blame social media companies for not magically making bad people disappear.
We’re talking about criminal issues, and Senator Wyden is looking to handle it by empowering law enforcement to deal with the criminals. Senators Graham, Durbin, Grassley, Cornyn, Klobuchar, and Hawley are looking to sue tech companies for not magically stopping criminals. One of those approaches makes sense for dealing with criminal activity. And yet it’s the other one that a bunch of senators have lined up behind.
And, of course, beyond the dangerous approach of EARN IT, it inherently undermines encryption, which makes kids (and everyone) less safe, as Wyden also pointed out.
Now, the specific reason I oppose EARN It is it will weaken the single strongest technology that protects children and families online. Something known as strong encryption.
It’s going to make it easier to punish sites that use encryption to secure private conversations and personal devices. This bill is designed to pressure communications and technology companies to scan users messages.
I, for one, don’t find that a particularly comforting idea.
Now, the sponsors of the bill have argued — and Senator Graham’s right, we’ve been talking about this a while — that their bills don’t harm encryption. And yet the bills allow courts to punish companies that offer strong encryption.
In fact, while it includes some they language about protecting encryption, it explicitly allows encryption to be used as evidence for various forms of liability. Prosecutors are going to be quick to argue that deploying encryption was evidence of a company’s negligence preventing the distribution of CSAM, for example.
The bill is also designed to encourage scanning of content on users phones or computers before information is sent over the Internet which has the same consequences as breaking encryption. That’s why a hundred civil society groups including the American Library Association — people then I think all of us have worked for — Human Rights Campaign, the list goes… Restore the Fourth. All of them oppose this bill because of its impact on essential security.
Weakening encryption is the single biggest gift you can give to these predators and these god-awful people who want to stalk and spy on kids. Sexual predators are gonna have a far easier time stealing photographs of kids, tracking their phones, and spying on their private messages once encryption is breached. It is very ironic that a bill that’s supposed to make kids safer would have the effect of threatening the privacy and security of all law-abiding Americans.
My alternative — and I want to be clear about this because I think Senator Graham has been sincere about saying that this is a horrible problem involving kids. We have a disagreement on the remedy. That’s what is at issue.
And what I want us to do is to focus our energy on giving law enforcement officials the tools they need to find and prosecute these monstrous criminals responsible for exploiting kids and spreading vile abuse materials online.
That can help prevent kids from becoming victims in the first place. So I have introduced to do this: the Invest in Child Safety Act to direct five billion dollars to do three specific things to deal with this very urgent problem.
Graham then gets up to respond and lies through his teeth:
There’s nothing in this bill about encryption. We say that this is not an encryption bill. The bill as written explicitly prohibits courts from treating encryption as an independent basis for liability.
We’re agnostic about that.
That’s not true. As Wyden said, the bill has some hand-wavey language about not treating encryption as an independent basis for liability, but it does explicitly allow for encryption to be one of the factors that can be used to show negligence by a platform, as long as you combine it with other factors.
Section (7)(A) is the hand-wavey bit saying you can’t use encryption as “an independent basis” to determine liability, but (7)(B) effectively wipes that out by saying nothing in that section about encryption “shall be construed to prohibit a court from considering evidence of actions or circumstances described in that subparagraph.” In other words, you just have to add a bit more, and then can say “and also, look, they use encryption!”
And another author of the bill, Senator Blumenthal, has flat out said that EARN IT is deliberately written to target encryption. He falsely claims that companies would “use encryption… as a ‘get out of jail free’ card.” So, Graham is lying when he says encryption isn’t a target of the bill. One of his co-authors on the bill admits otherwise.
Graham went on:
What we’re trying to do is hold these companies accountable by making sure they engage in best business practices. The EARN IT acts simply says for you to have liability protections, you have to prove that you’ve tried to protect children. You have to earn it. You’re just not given to you. You have to have the best business practices in place that voluntary commissions that lay out what would be the best way to harden these sites against sexually exploitation. If you do those things you get liability, it’s just not given to you forever. So this is not about encryption.
As to your idea. I’d love to talk to you about it. Let’s vote on both, but the bottom line here is there’s always a reason not to do anything that holds these people liable. That’s the bottom line. They’ll never agree to any bill that allows you to get them in court ever. If you’re waiting on these companies to give this body permission for the average person to sue you. It ain’t never going to happen.
So… all of that is wrong. First of all, the very original version of the EARN IT Act did have provisions to make companies “earn” 230 protections by following best practices, but that’s been out of the bill for ages. The current version has no such thing.
The bill does set up a commission to create best practices, but (unlike the earlier versions of the bill) those best practice recommendations have no legal force or requirements. And there’s nothing in the bill that says if you follow them you get 230 protections, and if you don’t, you don’t.
Does Senator Graham even know which version of the bill he’s talking about?
Instead, the bill outright modifies Section 230 (before the Commission even researches best practices) and says that people can sue tech companies for the distribution of CSAM. This includes using the offering of encryption as evidence to support the claims that CSAM distribution was done because of “reckless” behavior by a platform.
Either Senator Graham doesn’t know what bill he’s talking about (even though it’s his own bill) or he doesn’t remember that he changed the bill to do something different than it used to try to do.
It’s ridiculous that Senator Wyden remains the only senator who sees this issue clearly and is willing to stand up and say so. He’s the only one who seems willing to block the bad bills while at the same time offering a bill that actually targets the criminals.
Filed Under: amy klobuchar, chuck grassley, csam, dick durbin, earn it, encryption, invest in child safety, john cornyn, josh hawley, lindsey graham, ron wyden, shield act, stop csam, unanimous consent
The STOP CSAM Act Is An Anti-Encryption Stalking Horse
from the durbin,-this-is-distrubin dept
Recently, I wrote for Lawfare about Sen. Dick Durbin’s new STOP CSAM Act bill, S.1199. The bill text is available here. There are a lot of moving parts in this bill, which is 133 pages long. (Mike valiantly tries to cover them here.) I am far from done with reading and analyzing the bill language, but already I can spot a couple of places where the bill would threaten encryption, so those are what I’ll discuss today.
According to Durbin, online service providers covered by the bill would have “to produce annual reports detailing their efforts to keep children safe from online sex predators, and any company that promotes or facilitates online child exploitation could face new criminal and civil penalties.” Child safety online is a worthy goal, as is improving public understanding of how influential tech companies operate. But portions of the STOP CSAM bill pose risks to online service providers’ ability to use end-to-end encryption (E2EE) in their service offerings.
E2EE is a widely used technology that protects everyone’s privacy and security by encoding the contents of digital communications and files so that they’re decipherable only by the sender and intended recipients. Not even the provider of the E2EE service can read or hear its users’ conversations. E2EE is built in by default to popular apps such as WhatsApp, iMessage, FaceTime, and Signal, thereby securing billions of people’s messages and calls for free. Default E2EE is also set to expand to Meta’s Messenger app and Instagram direct messages later this year.
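To make the “not even the provider can read it” point concrete, here is a minimal sketch of the core idea using Python and the PyNaCl library. It is purely illustrative: real E2EE messengers layer far more on top (key verification, forward secrecy, group messaging), and nothing here reflects any particular app’s implementation. The point is simply that the relay server only ever handles ciphertext, because the private keys never leave the endpoints.

```python
from nacl.public import PrivateKey, Box

# Each endpoint generates its own keypair; private keys stay on-device.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts directly to Bob's public key.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"meet at 6?")

# The provider's server relays `ciphertext`, but it holds no key that can open it.

# Only Bob, holding his private key, can decrypt.
receiving_box = Box(bob_private, alice_private.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at 6?"
```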
E2EE’s growing ubiquity seems like a clear win for personal privacy, security, and safety, as well as national security and the economy. And yet E2EE’s popularity has its critics – including, unfortunately, Sen. Durbin. Because it’s harder for providers and law enforcement to detect malicious activity in encrypted environments than unencrypted ones (albeit not impossible, as I’ll discuss), law enforcement officials and lawmakers often demonize E2EE. But E2EE is a vital protection against crime and abuse, because it helps to protect people (children included) from the harms that happen when their personal information and private conversations fall into the wrong hands: data breaches, hacking, cybercrime, snooping by hostile foreign governments, stalkers and domestic abusers, and so on.
That’s why it’s so important that national policy promote rather than dissuade the use of E2EE – and why it’s so disappointing that STOP CSAM has turned out to be just the opposite: yet another misguided effort by lawmakers in the name of online safety that would only make us all less safe.
First, STOP CSAM’s new criminal and civil liability provisions could be used to hold E2EE services liable for CSAM and other child sex offenses that happen in encrypted environments. Second, the reporting requirements look like a sneaky attempt to tee up future legislation to ban E2EE outright.
STOP CSAM’s New Civil and Criminal Liability for Online Service Providers
Among the many, many things it does in 133 pages, STOP CSAM creates a new federal crime, “liability for certain child exploitation offenses.” It also creates new civil liability by making a carve-out from Section 230 immunity to allow child exploitation victims to bring lawsuits against the providers of online services, as well as the app stores that make those services available. Both of these new forms of liability, criminal and civil, could be used to punish encrypted services in court.
The new federal crime is for a provider of an interactive computer service (an ICS provider, as defined in Section 230) “to knowingly (1) host or store child pornography or make child pornography available to any person; or (2) otherwise knowingly promote or facilitate a violation of” certain federal criminal statutes that prohibit CSAM and child sexual exploitation (18 U.S.C. §§ 2251, 2251A, 2252, 2252A, or 2422(b)).
This is rather duplicative: It’s already illegal under those laws to knowingly possess CSAM or knowingly transmit it over the Internet. That goes for online service providers, too. So if there’s an online service that “knowingly hosts or stores” or transmits or “makes available” CSAM (whether on its own or by knowingly letting its users do so), that’s already a federal crime under existing law, and the service can be fined.
So why propose a new law that says “this means you, online services”? It’s the huge size of the fines that could be imposed on providers: up to $1 million, or $5 million if the provider’s conduct either causes someone to be harmed or “involves a conscious or reckless risk of serious personal injury.” Punishing online service providers specifically with enormous fines, for their users’ child sex offenses, is the point of re-criminalizing something that’s already a crime.
The new civil liability for providers comes from removing Section 230’s applicability to civil lawsuits by the victims of CSAM and other child sexual exploitation crimes. There’s a federal statute, 18 U.S.C. § 2255, that lets those victims sue the perpetrator(s). Section 230 currently bars those lawsuits from being brought against providers. That is, Congress has heretofore decided that if online services commit the aforementioned child sex offenses, the sole enforcer should be the Department of Justice, not civil plaintiffs. STOP CSAM would change that. (More about that issue here.)
Providers would now be fair game for 2255 lawsuits by child exploitation victims. Victims could sue for “child exploitation violations” under an enumerated list of federal statutes. They could also sue for “conduct relating to child exploitation.” That phrase is defined with respect to two groups: ICS providers (as defined by Section 230), and “software distribution services” (think: app stores, although the definition is way broader than that).
Both ICS providers and software distribution services could be sued for one type of “conduct relating to child exploitation”: “the intentional, knowing, reckless, or negligent promotion or facilitation of conduct that violates” an enumerated list of federal child exploitation statutes. And, ICS providers alone (but not software distribution services) could be sued for a different type of conduct: “the intentional, knowing, reckless, or negligent hosting or storing of child pornography or making child pornography available to any person.”
So, to sum up: STOP CSAM
(1) creates a new crime when ICS providers knowingly promote or facilitate CSAM and child exploitation crimes, and
(2) exposes ICS providers to civil lawsuits by child exploitation victims if they intentionally, knowingly, recklessly, or negligently (a) host/store/make CSAM available, or (b) promote or facilitate child exploitation conduct (for which app stores can be liable too).
Does E2EE “Promote or Facilitate” Child Exploitation Offenses?
Here, then, is the literally million-dollar question: Do E2EE service providers “promote or facilitate” CSAM and other child exploitation crimes, by making their users’ communications unreadable by the provider and law enforcement?
It’s not clear what “promote or facilitate” even means! That same phrase is also found in a 2018 law, SESTA/FOSTA, that carved out sex trafficking offenses from providers’ general immunity against civil lawsuits and state criminal charges under Section 230. And that same phrase is currently being challenged in court as unconstitutionally vague and overbroad. Earlier this year, a panel of federal appeals judges appeared skeptical of its constitutionality at oral argument, but they haven’t issued their written opinion yet. Why Senator Durbin thinks it’s a great idea to copy language that’s on the verge of being held unconstitutional, I have no clue.
If a court were to hold that E2EE services “promote or facilitate” child sex offenses (whatever that means), then the E2EE service provider’s liability would turn on whether the case was criminal or civil. If it’s criminal, then federal prosecutors would have to prove the service knowingly promoted or facilitated the crime by being E2EE. “Knowing” is a pretty high bar to prove, which is appropriate for a crime.
In a civil lawsuit, however, there are four different mental states the plaintiff could choose from. Two of them – recklessness or negligence – are easier to prove than the other two (knowledge or intent). They impose a lower bar to establishing the defendant’s liability in a civil case than the DOJ would have to meet in a federal criminal prosecution. (See here for a discussion of these varying mental-state standards, with helpful charts.)
Is WhatsApp negligently facilitating child exploitation because it’s E2EE by default? Is Zoom negligently facilitating child exploitation because users can choose to make a Zoom meeting E2EE? Are Apple and Google negligently facilitating child exploitation by including WhatsApp, Zoom, and other encrypted apps in their app stores? If STOP CSAM passes, we could expect plaintiffs to immediately sue all of those companies and argue exactly that in court.
That’s why STOP CSAM creates a huge disincentive against offering E2EE. It would open up E2EE services to a tidal wave of litigation by child exploitation victims for giving all their users a technology that is indispensable to modern cybersecurity and data privacy. The clear incentive would be for E2EE services to remove or weaken their end-to-end encryption, so as to make it easier to detect child exploitation conduct by their users, in the hopes that they could then avoid being deemed “negligent” on child safety because, ironically, they used a bog-standard cybersecurity technology to protect their users.
It is no accident that STOP CSAM would open the door to punishing E2EE service providers. Durbin’s February press release announcing his STOP CSAM bill paints E2EE as antithetical to child safety. The very first paragraph predicts that providers’ continued adoption of E2EE will cause a steep reduction in the volume of (already mandated) reports of CSAM they find on their services. It goes on to suggest that deploying E2EE treats children as “collateral damage,” framing personal privacy and child safety as flatly incompatible.
The kicker is that STOP CSAM never even mentions the word “encryption.” Even the EARN IT Act – a terrible bill that I’ve decried at great length, which was reintroduced in the Senate on the same day as STOP CSAM – has a weak-sauce provision that at least kinda tries halfheartedly to protect encryption from being the basis for provider liability. STOP CSAM doesn’t even have that!
Teeing Up a Future E2EE Ban
Even leaving aside the “promote or facilitate” provisions that would open the door to an onslaught of litigation against the providers of E2EE services, there’s another way in which STOP CSAM is sneakily anti-encryption: by trying to get encrypted services to rat themselves out to the government.
The STOP CSAM bill contains mandatory transparency reporting provisions, which, as my Lawfare piece noted, have become commonplace in the recent bumper crop of online safety bills. The transparency reporting requirements apply to a subset of the online service providers that are required to report CSAM they find under an existing federal law, 18 U.S.C. § 2258A. (That law’s definition of covered providers has a lot of overlap, in practice, with Section 230’s “ICS provider” definition. Both of these definitions plainly cover apps for messaging, voice, and video calls, whether they’re E2EE or not.) In addition to reporting the CSAM they find, those covered providers would also separately have to file annual reports about their efforts to protect children.
Not every provider that has to report CSAM would have to file these annual reports, just the larger ones: specifically, those with at least one million unique monthly visitors/users and over $50 million in annual revenue. That’s a distinction from the “promote or facilitate” liability discussed above, which doesn’t just apply to the big guys.
Covered providers must file an annual report with the Attorney General and the Federal Trade Commission that provides information about (among other things) the provider’s “culture of safety.” This means the provider must describe and assess the “measures and technologies” it employs for protecting child users and keeping its service from being used to sexually abuse or exploit children.
In addition, the “culture of safety” report must also list “[f]actors that interfere with the provider’s ability to detect or evaluate instances of child sexual exploitation and abuse,” and assess those factors’ impact.
That provision set off alarm bells in my head. I believe this reporting requirement is intended to force providers to cough up internal data and create impact assessments, so that the federal government can then turn around and use that information as ammunition to justify a subsequent legislative proposal to ban E2EE.
This hunch arises from Sen. Durbin’s own framing of the bill. As I noted above, his February press release about STOP CSAM spends its first two paragraphs claiming that E2EE would “turn off the lights” on detecting child sex abuse online. Given this framing, it’s pretty straightforward to conclude that the bill’s “interfering factors” report requirement has E2EE in mind.
So: In addition to opening the door to civil and/or criminal liability for E2EE services without ever mentioning the word “encryption” (as explained above), STOP CSAM is trying to lay the groundwork for justifying a later bill to more overtly ban providers from offering E2EE at all.
But It’s Not That Simple, Durbin
There’s no guarantee this plan will succeed, though. If this bill passes, I’m skeptical that its ploy to fish for evidence against E2EE will play out as intended, because it rests on a faulty assumption. The policy case for outlawing or weakening E2EE rests on the oft-repeated premise that online service providers can’t fight abuse unless they can access the contents of users’ files and communications at will, a capability E2EE impedes. However, my own research has proved this assumption untrue.
Last year, I published a peer-reviewed article analyzing the results of a survey I conducted of online service providers, including some encrypted messaging services. Many of the participating providers would likely be covered by the STOP CSAM bill. The survey asked participants to describe their trust and safety practices and rank how useful they were against twelve different categories of online abuse. Two categories pertained to child safety: CSAM and child sexual exploitation (CSE) such as grooming and enticement.
My findings show that CSAM is distinct from other kinds of online abuse. What currently works best to detect CSAM isn’t what works best against other abuse types, and vice versa. For CSAM, survey participants considered scanning for abusive content to be more useful than other techniques (user reporting and metadata analysis) that — unlike scanning — don’t rely on at-will provider access to user content. However, that wasn’t true of any other category of abuse — not even other child safety offenses.
For detecting CSE, user reporting and content scanning were considered equally useful for abuse detection. In most of the remaining 10 abuse categories, user reporting was deemed more useful than any other technique. Many of those categories (e.g., self-harm and harassment) affect children as well as adults online. In short, user reports are a critically important tool in providers’ trust and safety toolbox.
Here’s the thing: User reporting — the best weapon against most kinds of abuse, according to providers themselves — can be, and is, done in E2EE environments. That means rolling out E2EE doesn’t nuke a provider’s abuse-fighting capabilities. My research debunks that myth.
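As a concrete illustration of that point, here is a hedged sketch of how an abuse report can work in an E2EE messenger: the person reporting is a legitimate endpoint of the conversation and already holds the plaintext, so their client can forward it to the provider’s trust and safety team even though the provider could never read it in transit. (Some real E2EE messengers, such as WhatsApp, reportedly take a similar forward-the-recent-messages approach; the function and field names below are invented for illustration and are not any provider’s actual API.)

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DecryptedMessage:
    sender_id: str
    timestamp: float
    plaintext: bytes   # already decrypted locally on the reporter's device

def build_abuse_report(reporter_id: str, conversation: List[DecryptedMessage],
                       reason: str) -> dict:
    """Assemble a report on the reporter's own device.

    The provider never saw the plaintext in transit; the reporting user,
    who is a legitimate endpoint of the E2EE conversation, chooses to hand
    over the messages they can already read.
    """
    return {
        "reporter": reporter_id,
        "reason": reason,
        "messages": [
            {
                "from": m.sender_id,
                "at": m.timestamp,
                "content": m.plaintext.decode("utf-8", "replace"),
            }
            # e.g., forward only the most recent messages, not the whole history
            for m in conversation[-5:]
        ],
    }

# The client would then send this report to the provider's trust & safety
# endpoint over an ordinary authenticated channel (e.g., an HTTPS POST).
```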
My findings show that E2EE does not affect a provider’s trust and safety efforts uniformly; rather, E2EE’s impact will likely vary depending on the type of abuse in question. Even online child safety is not a monolithic problem (as was cogently explained in another recent report by Laura Draper of American University). There’s simply no one-size-fits-all answer to solving online abuse.
From these findings, I conclude that policymakers should not pass laws regulating encryption and the Internet based on the example of CSAM alone, because CSAM poses such a unique challenge.
And yet that’s just what I suspect Sen. Durbin has in mind: to collect data about one type of abusive content as grounds to justify a subsequent law banning providers from offering E2EE to their users. Never mind that such a ban would affect all content and all users, whether abusive or not.
That’s an outcome we can’t afford. Legally barring providers from offering strong cybersecurity and privacy protections to their users wouldn’t keep children safe; it would just make everybody less safe, children included. As a recent report from the Child Rights International Network and DefendDigitalMe describes, while E2EE can be misused, it is nevertheless a vital tool for protecting the full range of children’s rights, from privacy to free expression to protection from violence (including state violence and abusive parents). That’s in addition to the role strong encryption plays in protecting the personal data, financial information, sensitive secrets, and even bodily safety of domestic violence victims, military servicemembers, journalists, government officials, and everyone in between.
Legislators’ tunnel-vision view of E2EE as nothing but a threat requires casting all those considerations aside — treating them as “collateral damage,” to borrow Sen. Durbin’s phrase. But the reality is that billions of people use E2EE services every day, of whom only a tiny sliver use them for harm — and my research shows that providers have other ways to deal with those bad actors. As I conclude in my article, anti-E2EE legislation just makes no sense.
Given the crucial importance of strong encryption to modern life, Sen. Durbin shouldn’t expect the providers of popular encrypted services to make it easy for him to ban it. Those major players covered by the STOP CSAM bill? They have PR departments, lawyers, and lobbyists. Those people weren’t born yesterday. If I can spot a trap, so can they. The “culture of safety” reporting requirements are meant to give providers enough rope to hang themselves. That’s like a job interviewer asking a candidate what their greatest weakness is and expecting a raw and damning response. The STOP CSAM bill may have been crafted as a ticking time bomb for blowing up encryption, but E2EE service providers won’t be rushing to light the fuse.
From my research, I know that providers’ internal child-safety efforts are too complex to be reduced to a laundry list of positives and negatives. If forced to submit the STOP CSAM bill’s mandated reports, providers will seize upon the opportunity to highlight how their E2EE services help protect children and describe how their panoply of abuse-detection measures (such as user reporting) help to mitigate any adverse impact of E2EE. While its opponents try to caricature E2EE as a bogeyman, the providers that actually offer E2EE will be able to paint a fuller picture.
Will It Even Matter What Providers’ “Culture of Safety” Reports Say?
Unfortunately, given how the encryption debate has played out in recent years, we can expect Congress and the Attorney General (a role recently held by vehemently anti-encryption individuals) to accuse providers of cherry-picking the truth in their reports. And they’ll do so even while they themselves cherry-pick statistics and anecdotes that favor their pre-existing agenda.
I’m basing that prediction on my own experience of watching my research, which shows that online trust and safety is compatible with E2EE, get repeatedly cherry-picked by those trying to outlaw E2EE. They invariably highlight my anomalous findings regarding CSAM while leaving out all the other findings and conclusions that are inconvenient to their false narrative that E2EE wholly precludes trust and safety enforcement. As an academic, I know I can’t control how my work product gets used. But that doesn’t mean I don’t keep notes on who’s misusing it and why.
Providers can offer E2EE and still effectively combat the misuse of their services. Users do not have to accept intrusive surveillance as the price of avoiding untrammeled abuse, contrary to what anti-encryption officials like Sen. Durbin would have us believe.
If the STOP CSAM bill passes and its transparency reporting provisions go into effect, providers will use them to highlight the complexity of their ongoing efforts against online child sex abuse, a problem that is as old as the Internet. The question is whether that will matter to congressmembers who have already made up their minds about the supposed evils of encryption and the tech companies that offer it — or whether those annual reports were always intended as an exercise in futility.
What’s Next for the STOP CSAM Bill?
It took two months after that February press release for Durbin to actually introduce the bill in mid-April, and it took even longer for the bill text to actually appear on the congressional bill tracker. Durbin chairs the Senate Judiciary Committee, where the bill was supposed to be considered in committee meetings during each of the last two weeks, but it got punted out both times. Now, the best guess is that it will be discussed and marked up this coming Thursday, May 4. However, it’s quite possible it will get delayed yet again. On the one hand, Durbin as the committee chair has a lot of power to move his own bill along; on the other hand, he hasn’t garnered a single co-sponsor yet, and might take more time to get other Senators on board before bringing it to markup.
I’m heartened that Durbin hasn’t gotten any co-sponsors and has had to slow-roll the bill. STOP CSAM is very dense, it’s very complicated, and in its current form, it poses a huge threat to the security and privacy of the Internet by dissuading E2EE. There may be some good things in the bill, as Mike wrote, but at 133 pages long, it’s hard to figure out what the bill actually does and whether those would be good or bad outcomes. I’m sure I’ll be writing more about STOP CSAM as I continue to read and digest it. Meanwhile, if you have any luck making sense of the bill yourself, and your Senator is on the Judiciary Committee, contact their office and let them know what you think.
Riana Pfefferkorn is a Research Scholar at the Stanford Internet Observatory. A version of this piece originally appeared on the Stanford CIS blog.
Filed Under: dick durbin, encryption, end to end encryption, liability, section 230, stop csam act
Senator Durbin’s ‘STOP CSAM Act’ Has Some Good Ideas… Mixed In With Some Very Bad Ideas That Will Do More Harm Than Good
from the taking-away-230-doesn't-protect-kids dept
It’s “protect the children” season in Congress with the return of KOSA and EARN IT, two terrible bills that attack the internet, and rely on people’s ignorance of how things actually work to pretend they’re making the internet safer, when they’re not. Added to this is Senator Dick Durbin’s STOP CSAM Act, which he’s been touting since February, but only now has officially put out a press release announcing the bill (though, he hasn’t released the actual language of the bill, because that would actually be helpful to people analyzing it).
CSAM is “child sexual abuse material,” though because every bill needs a dumb acronym, in this case it’s the Strengthening Transparency and Obligation to Protect Children Suffering from Abuse and Mistreatment Act of 2023.
There is a section by section breakdown of the bill, though, along with a one pager summary. And, given how bad so many of the other internet “protect the children” bills are, this one is… not nearly as bad. It actually has a few good ideas, but also a few really questionable bits. Also, the framing of the whole thing is a bit weird:
From March 2009 to February 2022, the number of victims identified in child sexual abuse material (CSAM) rose almost ten-fold, from 2,172 victims to over 21,413 victims. From 2012 to 2022, the volume of reports to the National Center for Missing & Exploited Children’s CyberTipline concerning child sexual exploitation increased by a factor of 77 (415,650 reports to over 32 million reports).
Clearly, any child sexual abuse material is too much, but it’s not at all clear to me whether the numbers presented here show an actual increase in victims of child sexual abuse material, or merely reflect a much bigger infrastructure and setup for reporting CSAM. I mean, from March of 2009 to February of 2022 is basically the period in which social media went mainstream, and with it, much better tracking and reporting of such material.
I mean, back in March of 2009, the tools to track, find and report CSAM were in their infancy. Facebook didn’t start using PhotoDNA (which was only developed in 2009) until the middle of 2011. It’s unclear when Google started using it as well, but this announcement suggests it was around 2013 — noting that “recently” the company started using “encrypted “fingerprints” of child sexual abuse images into a cross-industry database” (which describes PhotoDNA).
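For readers unfamiliar with how this kind of fingerprint matching works in practice: PhotoDNA’s own algorithm is proprietary, but the general hash-and-match workflow can be sketched with the open-source imagehash library standing in for PhotoDNA and an in-memory set standing in for the cross-industry hash database. Everything here, including the example hash value, is illustrative only.

```python
from PIL import Image
import imagehash  # open-source perceptual hashing; a stand-in for PhotoDNA

# Hypothetical database of hashes of known, previously verified images.
# In the real systems described above, this is coordinated through NCMEC
# and shared across providers; here it's just an in-memory set.
known_hashes = {
    imagehash.hex_to_hash("d1d1b1a1c1e1f101"),  # made-up example value
}

def matches_known_image(path: str, max_distance: int = 5) -> bool:
    """Return True if the uploaded image is a near-duplicate of a known one.

    Perceptual hashes change only slightly under resizing, re-compression,
    or small edits, so matching uses a Hamming-distance threshold rather
    than exact equality.
    """
    uploaded_hash = imagehash.phash(Image.open(path))
    return any(uploaded_hash - known < max_distance for known in known_hashes)
```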
This is what’s frustrating in all of this. For years, there were complaints that these companies didn’t report enough CSAM, so they built better tools that found more… and now the media and politicians are assuming that the increase in reporting means an increase in actual victimization. Yet, it’s unclear if that’s actually the case. It’s just as (if not more) likely that, since the companies are getting better at finding and reporting, this is just presenting a more accurate picture of what’s out there, and not any indication of whether or not the problem has grown.
Notice what’s not talked about? It’s not mentioned how much law enforcement has done to actually track down, arrest, and prosecute the perpetrators. That’s the stat that matters. But it’s missing.
Anyway, again, stopping CSAM remains important, and there are some good things in Durbin’s outline (though, again, specific language matters). It will make reporting mandatory for youth athletic programs, which is a response to a few recent scandals (though, might also lead to an increase in false reports). It increases protections for child victims and witnesses. Another good thing it does is make it easier for states to set up Internet Crimes Against Children (ICAC) task forces, which specialize in fighting child abuse, and which can be helpful for local law enforcement who are often less experienced in how to deal with such crimes.
The law also expands the reporting requirements for online providers, who are already required to report any CSAM they come across, but this expands that coverage by a bit, and increases the amount of information the sites need to provide. It makes at least some move towards making those reports more useful to law enforcement by authorizing NCMEC to share a copy of an image with local law enforcement from its database.
Considering that, as we keep reporting, the biggest issue with CSAM these days is that law enforcement does so little with the information reported to NCMEC’s CyberTipline, hopefully these moves actually help on the one key important area: having law enforcement bring the actual perpetrators to justice and stop them from victimizing children.
But… there remain some pretty serious concerns with the bill. It appears to crack open Section 230, allowing “victims” to sue social media companies:
The legislation expands 18 U.S.C. § 2255, which currently provides a civil cause of action for victims who suffered sexual abuse or sexual exploitation as children, to enable such victims of to file suit against online platforms and app stores that intentionally, knowingly, recklessly, or negligently promote or facilitate online child sexual exploitation. Victims are able to recover actual damages or liquidated damages in the amount of $150,000, as well as punitive damages and equitable relief. This provision does not apply to actions taken by online platforms to comply with valid legal process or statutory requirements. The legislation specifies that such causes of action are not barred by section 230 of the Communications Act of 1934 (47 U.S.C. 230).
Now, some will argue this shouldn’t have a huge impact on big companies that do the right thing because it’s only for those that “intentionally, knowingly, recklessly, or negligently promote or facilitate” but that’s actually a much, much bigger loophole than it might sound at first glance.
First, we’ve already seen companies that take reporting seriously, such as Reddit and Twitter, get hit with lawsuits making these kinds of allegations. So, plaintiffs’ lawyers are going to pile on lawsuits even against the companies that are trying to do their best on this stuff.
Second, even if the sites were doing everything right, now they have to go through the long and arduous process of proving that in every one of these lawsuits. The benefit of Section 230 is to get cases like this kicked out early. Without 230, you have to go through a long and involved process just to prove that you didn’t “intentionally, knowingly, recklessly, or negligently” do any of those things.
Third, while “intentionally” and “knowingly” are perhaps more defensible, adding in “recklessly” and (even worse) “negligently” makes every lawsuit a massive crapshoot, because every lawyer is going to argue that any site that doesn’t catch and stop every bit of CSAM acted “negligently.” And the negligence lawsuits are going to be massive, ridiculous, and expensive.
So, if you’re a social media site — say a mid-sized Mastodon instance — and it’s discovered that someone posted CSAM to your site, the victimized individual can sue, and insist that you were negligent in not catching it, even if you were actually good about reporting and removing CSAM.
Basically, this opens up a flood of litigation.
There may also be some concerns about the new reporting requirements. I fear that, just as this very bill misuses the “reported” stats as proof that the problem is growing, the new reports will be used down the line to justify more draconian interventions simply because “the numbers” are going up, when that may just be an artifact of the reporting itself. I also worry that some of the reporting requirements will lead to further (sketchy) justifications for future attacks on encryption.
Again, this bill has elements that seem good and would be useful contributions. But the Section 230 carveout is extremely problematic, and it’s not at all clear that it would actually help anyone other than plaintiffs’ lawyers filing a ton of vexatious lawsuits.
On top of all that, Durbin’s floor speech introducing the bill was, well, problematic: full of moral panic nonsense mostly disconnected from reality. He goes hard against Section 230, though it’s not clear he understands it at all. Even worse, he talks about how EARN IT and STOP CSAM together would lead to a bunch of frivolous lawsuits, which he seems to think is a good thing.
How can this be, you ask? Here is how. The Communications Decency Act of 1996—remember that year—contains a section, section 230, that offers near-total immunity to Big Tech. As a result, victims like Charlotte have no way to force tech companies to remove content posted on their sites—not even these child sexual abuse horrible images.
My bill, the Stop CSAM Act, is going to change that. It would protect victims and promote accountability within the tech industry. Companies that fail to remove CSAM and related imagery after being notified about them would face significant fines. They would also be required to produce annual reports detailing their efforts to keep children safe from online sex predators, and any company that promotes or facilitates online child exploitation could face new criminal and civil penalties.
When section 230 was created in 1996, Mark Zuckerberg was in the sixth grade. Facebook and social media sites didn’t even exist. It is time that we rewrite the law to reflect the reality of today’s world.
A bipartisan bill sponsored by Senators Graham and Blumenthal would also help to do that. It is called the EARN IT Act, and it would let CSAM victims—these child sexual abuse victims—have their day in court by amending section 230 to eliminate Big Tech’s near-total immunity from liability and responsibility.
There are serious ways to fight CSAM. But creating massive liability risks and inviting frivolous lawsuits that misunderstand the problem, while ignoring the fact that sites already report all this content only to see it disappear into a black hole without law enforcement doing anything, does not help solve the problem at all.
Filed Under: csam, cybertipline, dick durbin, reporting, section 230, stop csam act, transparency
Pretty Much Every Expert Agrees That Elon Has Made Twitter’s Child Sexual Abuse Problem Worse
from the not-great,-bob dept
About a month ago, we wrote an article pulling together a variety of sources, including an NBC News investigation, suggesting that Elon Musk’s Twitter was doing a terrible job dealing with child sexual abuse material (CSAM) on the platform. This was contrary to the claims of a few very vocal Elon supporters, including one who somehow got an evidence-free article published in a major news publication insisting that he had magically “solved” the CSAM issue, despite Musk firing most of the people who worked on it. As we noted last month, Elon’s Twitter appeared to be not just failing to deal with CSAM (which is a massive challenge on any platform), but actually going backwards, making the issue much, much worse.
Last week, Senator Dick Durbin released a letter he sent to the Attorney General, asking the DOJ to investigate Twitter’s failures at stopping CSAM.
I write to express my grave concern that Twitter is failing to prevent the selling and trading of child sexual abuse material (CSAM) on its platform and to urge the Department of Justice (DOJ) to take all appropriate actions to investigate, deter, and stop this activity, which has no protections under the First Amendment, and violates federal criminal law.
The last two points are important: CSAM is (obviously) not protected speech, and as it violates federal criminal law, Section 230 is not relevant (lots of 230 haters seem to forget this important point). Of course, there is still the issue of knowledge. You still can’t hold a platform liable for things it didn’t know about. But, deliberately turning a blind eye to CSAM (while stating publicly that it was the number one priority) is still really bad.
Now, a NY Times investigation has gone much, much further into this issue and found, as NBC News did, that Twitter isn’t just failing to deal with CSAM; it has made a ton of really, really questionable decisions regarding how it handles the problem. The NY Times report notes that the paper used some tools to investigate CSAM on Twitter without looking at the material itself. While it doesn’t go into detail, from what’s stated it sounds like the Times wrote some software to identify potential CSAM without viewing it, and then forwarded the accounts to the Canadian Center for Child Protection and also to Microsoft, which created and runs PhotoDNA, the tool that many large companies use to identify CSAM on their platforms and to report that content to NCMEC (the National Center for Missing and Exploited Children) in the US, the Canadian Center for Child Protection in Canada, and similar organizations elsewhere. (A rough sketch of what that kind of scan-without-viewing workflow can look like follows the quoted passage below.) And what they found is not great:
To assess the company’s claims of progress, The Times created an individual Twitter account and wrote an automated computer program that could scour the platform for the content without displaying the actual images, which are illegal to view. The material wasn’t difficult to find. In fact, Twitter helped promote it through its recommendation algorithm — a feature that suggests accounts to follow based on user activity.
Among the recommendations was an account that featured a profile picture of a shirtless boy. The child in the photo is a known victim of sexual abuse, according to the Canadian Center for Child Protection, which helped identify exploitative material on the platform for The Times by matching it against a database of previously identified imagery.
That same user followed other suspicious accounts, including one that had “liked” a video of boys sexually assaulting another boy. By Jan. 19, the video, which had been on Twitter for more than a month, had gotten more than 122,000 views, nearly 300 retweets and more than 2,600 likes. Twitter later removed the video after the Canadian center flagged it for the company.
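To make that scan-without-viewing approach a bit more concrete, here is a minimal, purely hypothetical sketch (not the Times’ actual tooling): media bytes are fingerprinted and checked against a list of known-abuse hashes supplied by a child-protection organization, and only the account identifiers of matches are referred onward; nothing is ever decoded for display. The function names, the hash list, and the referral callback are all placeholders.

```python
# Hypothetical sketch of a "scan without viewing" pass. Nothing here is the
# NY Times' actual code; the fingerprint and referral callables are placeholders.
from typing import Callable, Iterable, Set, Tuple


def refer_matching_accounts(
    candidate_media: Iterable[Tuple[str, bytes]],  # (account_id, raw media bytes)
    known_fingerprints: Set[str],                  # hash list from a child-protection org
    fingerprint: Callable[[bytes], str],           # e.g. an exact or robust hash of the bytes
    refer: Callable[[str], None],                  # e.g. notify the relevant hotline
) -> Set[str]:
    """Flag accounts whose media matches known fingerprints, without rendering anything."""
    flagged: Set[str] = set()
    for account_id, media_bytes in candidate_media:
        # Only a hash of the bytes is computed; the media itself is never displayed.
        if fingerprint(media_bytes) in known_fingerprints and account_id not in flagged:
            flagged.add(account_id)
            refer(account_id)
    return flagged
```

In practice the matching would more likely use a robust hash with a similarity threshold (as in the earlier sketch) rather than exact membership, and the referral would go to an organization like the Canadian Center or NCMEC rather than a local callback.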
Even Twitter’s responses to requests from the government agencies dealing with this stuff did not go well:
One account in late December offered a discounted “Christmas pack” of photos and videos. That user tweeted a partly obscured image of a child who had been abused from about age 8 through adolescence. Twitter took down the post five days later, but only after the Canadian center sent the company repeated notices.
As an aside, I’m curious how all the people insisting that no government agency should ever alert Twitter to content that might be illegal or violate its policies feel about the Canadian Center alerting Twitter to CSAM on its platform.
And the article notes that Twitter seems to be ignoring a lot of the content that is more easily findable by organizations that have access to these types of tools:
The center also did a broader scan against the most explicit videos in their database. There were more than 260 hits, with more than 174,000 likes and 63,000 retweets.
“The volume we’re able to find with a minimal amount of effort is quite significant,” said Lloyd Richardson, the technology director at the Canadian center. “It shouldn’t be the job of external people to find this sort of content sitting on their system.”
Even more worrisome: the NY Times report notes that Twitter uses a tool from Thorn, the well-known organization that uses technology to fight child sex trafficking. Except the report notes that, for all of Musk’s claims about how fighting this stuff is job number one… he stopped paying Thorn. And, even more damning, Twitter has stopped working with Thorn to provide information back to the organization to improve its tool and to help it find and stop more CSAM:
To find the material, Twitter relies on software created by an anti-trafficking organization called Thorn. Twitter has not paid the organization since Mr. Musk took over, according to people familiar with the relationship, presumably part of his larger effort to cut costs. Twitter has also stopped working with Thorn to improve the technology. The collaboration had industrywide benefits because other companies use the software.
Also eye-opening in the article is that, while Twitter is claiming that it is removing more such content than ever, its reports to NCMEC do not match that and have dropped massively, raising serious concerns at NCMEC:
The company has not reported to the national center the hundreds of thousands of accounts it has suspended because the rules require that they “have high confidence that the person is knowingly transmitting” the illegal imagery and those accounts did not meet that threshold, Ms. Irwin said.
Mr. Shehan of the national center disputed that interpretation of the rules, noting that tech companies are also legally required to report users even if they only claim to sell or solicit the material. So far, the national center’s data show, Twitter has made about 8,000 reports monthly, a small fraction of the accounts it has suspended.
Also, NCMEC saw Twitter’s responsiveness dwindle (though in January it seemed to pick back up a bit):
After the transition to Mr. Musk’s ownership, Twitter initially reacted more slowly to the center’s notifications of sexual abuse content, according to data from the center, a delay of great importance to abuse survivors, who are revictimized with every new post. Twitter, like other social media sites, has a two-way relationship with the center. The site notifies the center (which can then notify law enforcement) when it is made aware of illegal content. And when the center learns of illegal content on Twitter, it alerts the site so the images and accounts can be removed.
Late last year, the company’s response time was more than double what it had been during the same period a year earlier under the prior ownership, even though the center sent it fewer alerts. In December 2021, Twitter took an average of 1.6 days to respond to 98 notices; last December, after Mr. Musk took over the company, it took 3.5 days to respond to 55. By January, it had greatly improved, taking 1.3 days to respond to 82.
The Canadian center, which serves the same function in that country, said it had seen delays as long as a week. In one instance, the Canadian center detected a video on Jan. 6 depicting the abuse of a naked girl, age 8 to 10. The organization said it sent out daily notices for about a week before Twitter removed the video.
None of this is particularly encouraging, especially on a topic so important.
It also appears that foreign regulators may be taking notice as well:
Ms. Inman Grant, the Australian regulator, said she had been unable to communicate with local representatives of the company because her agency’s contacts in Australia had quit or been fired since Mr. Musk took over. She feared that the staff reductions could lead to more trafficking in exploitative imagery.
“These local contacts play a vital role in addressing time-sensitive matters,” said Ms. Inman Grant, who was previously a safety executive at both Twitter and Microsoft.
Again, dealing with CSAM is one of the most critical, and most challenging, parts of the job for any trust & safety team at any website that allows user content. There is no “perfect” solution, and there will always be scenarios where some content is missed. So, in general, I’ve been hesitant to highlight articles (which come along with some frequency) insisting that because reporters or researchers were able to find some CSAM, the site “isn’t doing enough.” That’s rarely an accurate portrayal.
However, this NY Times piece goes way beyond that. It didn’t just find content; it found empirical evidence of Twitter being slower to react than in the past, failing to report material it should be reporting to the agencies set up for that purpose, cutting Thorn off from both payment and collaboration, and many other things.
All of which adds up to pretty compelling evidence that for all of Musk’s lofty talk of fighting CSAM being job number one, the company has actually gone not just a little backwards on this issue, but dangerously so.
Filed Under: csam, dick durbin, ella irwin, elon musk
Companies: thorn, twitter
Sen. Dick Durbin: Journalists Deserve Protection But We'll Decide Who's Actually A Journalist
from the trade-your-laptop-in-for-a-notepad-for-extra-journo-cred dept
Illinois Senator Dick Durbin has penned an editorial for the Chicago Sun-Times in which he argues that journalists need some form of government-granted protection, but that the government should decide who is a real journalist and who isn’t.
As he points out, there is currently no national “shield” law that protects journalists and their sources, although a bill along those lines is slowly making its way through the system. Durbin seems to feel a great many people should be excluded from this protection, though — possibly for no other reason than the platform used.
The media informs the public and holds government accountable. Journalists should have reasonable legal protections to do their important work. But not every blogger, tweeter or Facebook user is a “journalist.” While social media allows tens of millions of people to share information publicly, it does not entitle them to special legal protections to ignore requests for documents or information from grand juries, judges or other law enforcement personnel.
There’s your new have-nots, if Durbin’s deciding. Here’s the list of who Durbin feels actually deserves the “journalist” label and its associated protections.
A journalist gathers information for a media outlet that disseminates the information through a broadly defined “medium” — including newspaper, nonfiction book, wire service, magazine, news website, television, radio or motion picture — for public use. This broad definition covers every form of legitimate journalism.
The internet: illegitimate journalism. Journalism isn’t a static object with a single definition; it’s something people do, with or without the title, and the dissemination of those efforts spans many platforms. While there are a lot of old-school journalism outlets listed, Durbin also includes “news website,” which covers a whole lot of gray area (Buzzfeed? TMZ? Vice?). Without further details, it would appear a “news website” would probably have to be anchored by one of the other “time-honored” journalism outlets.
If a newspaper journalist writes a blog on the side or maintains a Twitter account, are those sidelines protected because of his or her position, or is it only what appears on the printed page/associated news website? Or conversely, if someone’s journalism efforts are mainly relegated to platforms not covered by Durbin’s list but occasionally contribute to “legitimate journalism,” does that cover the non-associated online work as well? No matter how these instances play out, “journalism” is being defined by media form rather than by the activity itself. While the government should recognize freedom of the press and grant protection to journalists, it becomes problematic when the definition is narrowed to pre-existing forms that don’t truly reflect journalism as it exists today.
Durbin says that those who think the government shouldn’t be able to define journalism need to be reminded that 49 states already do just that. That doesn’t make these definitions better or more acceptable and certainly shouldn’t be taken as some sort of tacit permission for the federal government to define what media forms it will protect and which it won’t.
He goes on to cite recent events as evidence this protection is needed.
The leaks of classified information about the NSA’s surveillance operations and an ongoing Justice Department investigation into who disclosed secret documents to the Associated Press have brought this issue back to the forefront and raised important questions about the freedom of speech, freedom of the press and how our nation defines journalism.
Journalists should certainly be shielded from those who think they should be prosecuted for exposing leaked documents. But this administration isn’t interested in protecting whistleblowers and, if it wasn’t running up against existing “freedom of the press protections,” would probably be punishing journalists as well. Allowing the government to pick and choose who is protected will likely result in a large number of unprotected journalists, thanks to an inadequate definition. And even this additional protection is unlikely to prevent entities like the DOJ from violating the Fourth Amendment in a search for sources and whistleblowers. If you’re already violating civil liberties, breaking a law isn’t much of a concern.
Filed Under: dick durbin, first amendment, journalism, shield laws
US Senators Propose Bill To Censor Any Sites The Justice Department Declares 'Pirate' Sites, Worldwide
from the like-youtube-and-scribd? dept
The entertainment industry’s favorite two Senators, Patrick Leahy (who keeps proposing stronger copyright laws) and Orrin Hatch (who once proposed automatically destroying the computers of anyone caught file sharing), have now proposed a new law that would give the Justice Department the power to shut down websites declared to be “dedicated to illegal file sharing.” Other Senators who have signed on to sponsor the bill are: Sens. Herb Kohl, Arlen Specter, Charles Schumer, Dick Durbin, Sheldon Whitehouse, Amy Klobuchar, Evan Bayh and George Voinovich. Perhaps these Senators should brush up on their history.
They do realize, of course, that Hollywood (which is pushing them for this law) was originally established as a “pirate” venture to get away from Thomas Edison and his patents, right? Things change over time. Remember that YouTube, which Hollywood now considers mostly “legit,” had been derided as a “site dedicated” to “piracy” in the past. It’s no surprise that the Justice Department (with a bunch of former RIAA/MPAA lawyers on staff) would love to have such powers, but it’s difficult to see how such a law would be Constitutional, let alone reasonable. And finally, we must ask, yet again, why the US federal government is getting involved in what is clearly a civil business model issue. The Senators cite the already debunked US Chamber of Commerce reports on the “harm” of intellectual property infringement, which just shows how intellectually dishonest they’re being. They’re willing to base a censorship law on debunked data.
Oh, and even worse, this proposed law is supposed to cover sites worldwide, not just in the US. For a country that just passed a libel tourism law to protect Americans from foreign judgments, it’s a bit ridiculous that we’re now trying to reach beyond our borders to shut down sites that may be perfectly legal elsewhere. The way the law, called the “Combating Online Infringement and Counterfeits Act,” would work is that the Justice Department could ask a court to declare a site a “pirate” site and then get an injunction forcing the domain registrar or registry to stop resolving that domain name.
It’s difficult to see how this is anything other than a blatant censorship law. I can’t see how it passes even a simple First Amendment sniff test. It’s really quite sickening to see US Senators propose a law that is nothing less than censorship, designed to favor some of their biggest donors in the entertainment industry, who refuse to update their own business models.
Filed Under: amy klobuchar, arlen specter, censorship, chuck schumer, copyright, dick durbin, evan bayh, file sharing, free speech, george voinovich, orin hatch, patrick leahy, sheldon whitehouse