stop csam – Techdirt
Our Online Child Abuse Reporting System Is Overwhelmed, Because The Incentives Are Screwed Up & No One Seems To Be Able To Fix Them
from the mismatched-incentives-are-the-root-of-all-problems dept
The system meant to stop online child exploitation is failing — and misaligned incentives are to blame. Unfortunately, today’s political solutions, like KOSA and STOP CSAM, don’t even begin to grapple with any of this. Instead, they would put in place measures that could make the incentives even worse.
The Stanford Internet Observatory has spent the last few months doing a very deep dive on how the CyberTipline works (and where it struggles). It has released a big and important report detailing its findings. In writing up this post about it, I kept adding more and more, to the point that I finally decided it made sense to split it up into two separate posts to keep things manageable.
This first post covers the higher-level issue: what the system is, why it works the way it does, how the incentive structure of the system is completely messed up (even if it was built with good intentions), and how that has contributed to the problem. A follow-up post will cover the more specific challenges facing NCMEC itself, law enforcement, and the internet platforms themselves (who often take the blame for CSAM, when that seems extremely misguided).
There is a lot of misinformation out there about the best way to fight and stop the creation and spread of child sexual abuse material (CSAM). It’s unfortunate because it’s a very real and very serious problem. Yet the discussion about it is often so disconnected from reality as to be not just unhelpful, but potentially harmful.
In the US, the system that was set up is the CyberTipline, which is run by NCMEC, the National Center for Missing & Exploited Children. It’s a private non-profit; however, it has a close connection with the US government, which helped create it. At times, there has been some confusion about whether or not NCMEC is a government agent. The entire setup was designed to keep it non-governmental, to avoid any 4th Amendment issues with the information it collects, but courts haven’t always seen it that way, which makes things tricky (even as the 4th Amendment is important).
And while the system was designed for the US, it has become a de facto global system, since so many of the companies are US-based, and NCMEC will, when it can, send relevant details to foreign law enforcement as well (though, as the report details, that doesn’t always work well).
The main role CyberTipline has taken on is coordination. It takes in reports of CSAM (mostly, but not entirely, from internet platforms) and then, when relevant, hands off the necessary details to the (hopefully) correct law enforcement agency to handle things.
Companies that host user-generated content have certain legal requirements to report CSAM to the CyberTipline. As we discussed in a recent podcast, this role as a “mandatory reporter” is important in providing useful information to allow law enforcement to step in and actually stop abusive behavior. Because of the “government agent” issue, it would be unconstitutional to require social media platforms to proactively search for and identify CSAM (though many do use tools to do this). However, if they do find some, they must report it.
Unfortunately, the mandatory reporting has also allowed the media and politicians to use the number of reports sent in by social media companies in a misleading manner, suggesting that the mere fact that these companies find and report to NCMEC means that they’re not doing enough to stop CSAM on their platforms.
This is problematic because it creates a dangerous incentive, suggesting that internet services should actually not report CSAM they find, as politicians and the media will falsely portray a large number of reports as a sign that the platforms aren’t taking this seriously. The reality is that the failure to take things seriously comes from the small number of platforms (Hi Telegram!) that don’t report CSAM at all.
Some of us on the outside have thought that the real issue was on the receiving end: that NCMEC and law enforcement haven’t been able to do enough productive with the reports they receive. It seemed convenient for the media and politicians to just blame social media companies for doing what they’re supposed to do (reporting CSAM), ignoring that what happens on the back end of the system might be the real problem. That’s why things like Senator Ron Wyden’s Invest in Child Safety Act seemed like a better approach than things like KOSA or the STOP CSAM Act.
That’s because the approach of KOSA/STOP CSAM and some other bills is basically to add liability to social media companies. (These companies already do a ton to prevent CSAM from appearing on the platform and alert law enforcement via the CyberTipline when they do find stuff.) But that’s useless if those receiving the reports aren’t able to do much with them.
What becomes clear from this report is that while there are absolutely failures on the law enforcement side, some of that is effectively baked into the incentive structure of the system.
In short, the report shows that the CyberTipline is very helpful in engaging law enforcement to stop some child sexual abuse, but it’s not as helpful as it might otherwise be:
Estimates of how many CyberTipline reports lead to arrests in the U.S. range from 5% to 7.6%
This number may sound low, but I’ve been told it’s not as bad as it sounds. First of all, when a large number of the reports are for content that is overseas and not in the US, it’s more difficult for law enforcement here to do much about it (though, again, the report details some suggestions on how to improve this). Second, some of the content may be very old, where the victim was identified years (or even decades) ago, and where there’s less that law enforcement can do today. Third, there is a question of prioritization, with it being a higher priority to target those directly abusing children. But, still, as the report notes, almost everyone thinks that the arrest number could go higher if there were more resources in place:
Empirically, it is unknown what percent of reports, if fully investigated, would lead to the discovery of a person conducting hands-on abuse of a child. On the one hand, as an employee of a U.S. federal department said, “Not all tips need to lead to prosecution […] it’s like a 911 system.” On the other hand, there is a sense from our respondents—who hold a wide array of beliefs about law enforcement—that this number should be higher. There is a perception that more than 5% of reports, if fully investigated, would lead to the discovery of hands-on abuse.
The report definitely suggests that if NCMEC had more resources dedicated to the CyberTipline, it could be more effective:
NCMEC has faced challenges in rapidly implementing technological improvements that would aid law enforcement in triage. NCMEC faces resource constraints that impact salaries, leading to difficulties in retaining personnel who are often poached by industry trust and safety teams.
There appear to be opportunities to enrich CyberTipline reports with external data that could help law enforcement more accurately triage tips, but NCMEC lacks sufficient technical staff to implement these infrastructure improvements in a timely manner. Data privacy concerns also affect the speed of this work.
But, before we get into the specific areas where things can be improved in the follow-up post, I thought it was important to highlight how the incentives of this system contribute to the problem, where there isn’t necessarily an easy solution.
While companies (Meta, mainly, since it represents, by a very wide margin, the largest number of reports to the CyberTipline) keep getting blamed for failing to stop CSAM because of their large number of reports, most companies have very strong incentives to report anything they find. This is because the cost of not reporting something they should have reported is massive (criminal penalties), whereas the cost of over-reporting is nothing to the companies. That means there’s an issue with over-reporting.
Of course, there is a real cost here. CyberTipline employees get overwhelmed, and that can mean that reports that should get prioritized and passed on to law enforcement don’t. So you can argue that while the cost of over-reporting is “nothing” to the companies, the cost to victims and to society at large can be quite significant.
That’s an important mismatch.
But the broken incentives go further. When NCMEC hands off reports to law enforcement, they often go through a local ICAC (Internet Crimes Against Children) task force, which helps triage the report and find the right state or local law enforcement agency to handle it. Law enforcement agencies that are “affiliated” with ICACs receive special training on how to handle reports from the CyberTipline. But, apparently, at least some of them feel the reports are just too much work or too burdensome to investigate. That means some law enforcement agencies are choosing not to affiliate with their local ICACs to avoid the added work. Even worse, some agencies have “unaffiliated” themselves from their local ICAC because they just don’t want to deal with it.
In some cases, there are even reports of law enforcement unaffiliating with an ICAC out of a fear of facing liability for not investigating an abused child quickly enough.
A former Task Force officer described the barriers to training more local Task Force affiliates. In some cases local law enforcement perceive that becoming a Task Force affiliate is expensive, but in fact the training is free. In other cases local law enforcement are hesitant to become a Task Force affiliate because they will be sent CyberTipline reports to investigate, and they may already feel like they have enough on their plate. Still other Task Force affiliates may choose to unaffiliate, perceiving that the CyberTipline reports they were previously investigating will still get investigated at the Task Force, which further burdens the Task Force. Unaffiliating may also reduce fear of liability for failing to promptly investigate a report that would have led to the discovery of a child actively being abused, but the alternative is that the report may never be investigated at all.
[….]
This liability fear stems from a case where six months lapsed between the regional Task Force receiving NCMEC’s report and the city’s police department arresting a suspect (the abused children’s foster parent). In the interim, neither of the law enforcement agencies notified child protective services about the abuse as required by state law. The resulting lawsuit against the two police departments and the state was settled for $10.5 million. Rather than face expensive liability for failing to prioritize CyberTipline reports ahead of all other open cases, even homicide or missing children, the agency might instead opt to unaffiliate from the ICAC Task Force.
This is… infuriating. Cops declining to affiliate (i.e., to get the training needed to help) or removing themselves from an ICAC task force because they’re afraid they might get sued if they don’t save abused kids fast enough is ridiculous. It’s yet another example of cops running away, rather than doing the job they’re supposed to be doing, but which they claim they have no obligation to do.
That’s just one problem of many in the report, which we’ll get into in the second post. But, on the whole, it seems pretty clear that with the incentives this far out of whack, something like KOSA or STOP CSAM isn’t going to be of much help. Actually tackling the underlying issues (the funding, the technology, and, most of all, the incentive structures) is what’s necessary.
Filed Under: csam, cybertipline, icac, incentives, kosa, law enforcement, liability, stop csam
Companies: ncmec
Senator Durbin Petulantly Promises To Destroy The Open Internet If He Doesn’t Get His Bad ‘Save The Children’ Internet Bill Passed
from the must-we-do-this-again? dept
Last week, we wrote about Senator Dick Durbin going on the floor of the Senate and spreading some absolute nonsense about Section 230 as he continued to push his STOP CSAM Act. His bill has some good ideas mixed in with some absolutely terrible ideas. In particular, the current language of the bill is a direct attack on encryption (though we’re told that there are other versions floating around). The method by which it does so is removing Section 230 protections, enabling people to sue websites if they “intentionally, knowingly, recklessly, or negligently” host CSAM or “promote or facilitate” child sexual exploitation.
Now, sure, go after sites that intentionally and knowingly host CSAM. That seems easy enough (and is already illegal under federal law and not blocked by Section 230). But the fear is that using encryption could be seen as “facilitating” exploitation, and thus the offering of encrypted communications absolutely will be used by plaintiffs to file vexatious lawsuits against websites.
And rather than fixing the bill, Senator Durbin says he’ll push for a full repeal of Section 230 if Congress won’t pass his problematic bill (for what it’s worth, this is the same thing his colleague Lindsey Graham has been pushing for, and it looks like Graham has looped Durbin into this very dumb plan):
If Congress doesn’t approve kids’ online safety legislation, then it should repeal Communications Decency Act Section 230, Senate Judiciary Committee Chairman Dick Durbin, D-Ill., told us last week.
Ranking member Lindsey Graham, R-S.C., is seeking Durbin’s support for legislation… that would repeal Section 230, the tech industry’s shield against liability for hosting third-party content on platforms. Durbin told us he will see what happens with Judiciary-approved legislation on kids’ safety. “If we can’t make the changes with the bills we’ve already passed, 230 has to go,” Durbin said.
Durbin has already made it clear that he does not understand how Section 230 itself works. Last week, on the floor of the Senate, he ranted misleadingly about it while pushing for unanimous consent for STOP CSAM. He starts off with a tearjerker of a story about parents who lost children to terrible people online. But rather than blaming the terrible people, he seems to think that social media companies should wave a magic wand and magically stop bad people:
The emotion I witnessed during that hearing in the faces of survivors, parents, and family members were unforgettable. There were parents who lost their children to that little to the telephone that they were watching day in and day out.
They committed suicide at the instruction of some crazy person on the internet.
There were children there that had grown up into adults still haunted by the images that they shared with some stranger on that little telephone years and years ago.
So, first of all, as I’ve noted before, it is beyond cynical and beyond dangerous to blame someone’s death by suicide on any other person when no one knows for sure the real reason for taking that permanent, drastic step except the person who did it.
But, second, if someone is to blame, it is that “crazy person on the internet.” What Durbin leaves out is the most obvious question: was anything done to that “crazy person on the internet”?
And you think to yourself? Well, why didn’t they step up and say something? If those images are coming up on the Internet? Why don’t they do something about it? Why don’t they go to the social media site? And in many and most instances they did. And nothing happened and that’s a reason why we need this legislation.
So, a few things here: first off, his legislation is the STOP CSAM Act, yet he was talking about suicide. Those are… different things with different challenges? Second, the details absolutely matter here. If it is about CSAM, or even non-consensual intimate imagery, every major platform (in most cases) already has a program to remove it.
You can find the removal request pages for Google, Meta, Microsoft, and more. And there are organizations like StopNCII that are very successful in getting such content removed as well.
If it’s actual CSAM, that’s already very illegal, and companies will remove it as soon as they find out about it. So Durbin’s claims don’t pass the sniff test, and suggest something else was going on in the situations he’s talking about, not evidence of the need for his legislation.
We say… STOP CSAM Act says, we’ll allow survivors to child online sexual exploitation to sue the tech companies that have knowingly and intentionally facilitated the exploitation.
Again, which platforms are not actually already doing that?
In other words one young woman told the story. She shared an image of herself an embarrassing image of herself that haunted her for decades afterwards. She went to the website. That was that was displaying this and told them this is something I want to take down. It is embarrassment to me. It happened when I was a little girl and still I’m living with it even today. They knew that it was on this website because this young woman and her family proved it, and yet they did nothing, nothing let him continue to play this exploitation over and over again.
Why how to get away with that they asked, and many people asked, I thought we had laws in this country protecting children what’s going on? Well, there’s a Section 230 which basically absolves these companies these media companies from responsibility for what is displayed on their websites on their social media pages. And that’s exactly what we change here.
Again, none of this makes any sense. If the imagery was actually CSAM, then that’s very much illegal and Section 230 has nothing to do with it. Durbin should then be asking why the DOJ isn’t taking action.
Again, from the vague description, it sounds like this wasn’t actually CSAM, but rather simply “embarrassing” content. But “embarrassing” content is not against the law, and thus this law still wouldn’t make any difference at all, because the content was legal.
So what situation does this law actually solve for? It’s not one involving Section 230 at all.
We say something basic and fundamental. If the social media site knowingly and intentionally continued to display these images, they’re subject to civil liability. They can be sued. Want to change this scene in a hurry? Turn the lawyers loose on them. Let them try to explain why they have no responsibility to that young woman who’s been exploited for decades. That’s what my bill works on. I’m happy to have co-sponsorship with Senator Graham and others. We believe that these bills this package of bill should come to the floor today.
Again, if it’s actually CSAM, then it’s a criminal issue and the responsibility is on law enforcement. Why isn’t Durbin asking why law enforcement did nothing? Furthermore, all the major companies will report actual CSAM to NCMEC’s CyberTipline, and most, if not all, of them use some form of Microsoft’s PhotoDNA to identify repeats of the content.
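To make that repeat-detection step concrete, here is a minimal sketch of how hash-based matching works in principle. It is illustrative only: PhotoDNA itself is a proprietary perceptual-hashing system that can match images even after resizing or re-encoding, whereas this stand-in uses a plain cryptographic hash (which only catches exact byte-for-byte copies), and the hash value, function names, and actions are hypothetical.

```python
import hashlib

# Hypothetical database of hashes for previously identified images.
# Real systems (e.g., PhotoDNA) use perceptual hashes that tolerate
# resizing and re-encoding; SHA-256 here only matches exact copies.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",  # placeholder
}

def is_known_repeat(image_bytes: bytes) -> bool:
    """Return True if this upload matches a previously hashed image."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES

def handle_upload(image_bytes: bytes) -> str:
    # A platform that finds a match would typically block the upload and
    # file a CyberTipline report, rather than waiting for a complaint.
    if is_known_repeat(image_bytes):
        return "block_and_report"
    return "allow"
```

The point is simply that matching known, already-hashed material is largely automated on the platform side; it is the follow-through by NCMEC and law enforcement where the reports pile up.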
So, if it’s true that this young woman had exploitative imagery being passed around, as Durbin claims, it sounds like either (1) it wasn’t actually illegal, in which case this bill would do nothing, or (2) there was a real failure by law enforcement and/or by NCMEC and PhotoDNA. It’s not at all clear how “turning the lawyers loose” for civil lawsuits fixes anything about that.
Again, Durbin seems to wholly misunderstand Section 230, issues related to CSAM, and how modern internet companies work. It’s not even clear from his speech that he understands the various issues. He switches at times from talk of suicide to embarrassing imagery to CSAM, without noting the fairly big differences between them all.
And now he wants to get rid of Section 230 entirely? Why?
The Communications Daily story about Durbin’s plans also has some ridiculous commentary from other senators, including Richard Blumenthal, who never misses an opportunity to be the wrongest senator about the internet.
Passing kids’ online safety legislation is more realistic than obtaining a Section 230 repeal, Senate Privacy Subcommittee Chairman Richard Blumenthal, D-Conn., told us in response to Graham’s plans. Blumenthal introduced the Kids Online Safety Act with Sen. Marsha Blackburn, R-Tenn., …“Passing a repeal of Section 230, which I strongly favor, is far more problematic than passing the Kids Online Safety Act (KOSA), which has almost two-thirds of the Senate sponsoring,” said Blumenthal. “I will support repealing Section 230, but I think the more viable path to protecting children, as a first step, is to pass the Kids Online Safety Act.”
Of course Blumenthal hates 230 and wants it repealed. He’s never understood the internet. This goes all the way back to when he was Attorney General of Connecticut. He thought that he should be able to sue Craigslist for prostitution and blamed Section 230 for not letting him do so.
There are other dumb 230 quotes from others, including Chuck Grassley and Ben Ray Lujan (who is usually better than that), but the dumbest of all goes to Senator Marco Rubio:
Section 230 immunity hinges on the question of how much tech platforms are controlling editorial discretion, Senate Intelligence Committee ranking member Marco Rubio, R-Fla., told us. “Are these people forums or are they exercising editorial controls that would make them publishers?” he said. “I think there are very strong arguments that they’re exercising editorial control.”
I know that a bunch of very silly people are convinced this is how Section 230 works, but it’s the opposite of this. The entire point of Section 230 is that it protects websites from liability for their editorial decision making. That’s it. That’s why 230 was passed. There is no “exercising editorial control” loophole that makes Section 230 not apply because the entire point of the law was to enable websites to feel free to exercise editorial control to create communities they wanted to support.
Rubio should know this, but so should the reporter for Communications Daily, Karl Herchenroeder, who wrote the above paragraph as if it was accurate, rather than completely backwards. Section 230 does not “hinge” on “how much tech platforms are controlling editorial discretion.” It hinges on “is this an interactive computer service or a user of such a service” and “is the content created by someone else.” That’s it. That’s the analysis. Editorial discretion has fuck all to do with it. And we’ve known this for decades. Anyone saying otherwise is ignorant or lying.
In the year 2024, it is beyond ridiculous that so many senators do not understand Section 230 and just keep misrepresenting it, to the point of wishing to repeal it (and with it, the open internet).
Filed Under: ben ray lujan, chuck grassley, dick durbin, lindsey graham, marco rubio, richard blumenthal, section 230, stop csam
Once Again, Google Caves To Political Pressure And Supports Questionable STOP CSAM Law
from the playing-political-games dept
It’s not surprising, but still disappointing, to see companies like Google and Meta, which used to take strong stands against bad laws, now showing a repeated willingness to cave on such principles in the interests of appeasing policymakers. It’s been happening a lot in the last few years and it’s happened again as Google has come out (on ExTwitter of all places) to express support for a mixed batch of “child safety” bills.
If you can’t see that screenshot, they are tweets from the Google Public Policy team, stating:
Protecting kids online is a top priority—and demands both strong legislation and responsible corporate practices to make sure we get it right.
We support several important bipartisan bills focused on online child safety, including the Invest in Child Safety Act, the Project Safe Childhood Act, the Report Act, the Shield Act, and the STOP CSAM Act.
We’ve talked about a couple of these bills. The Invest in Child Safety Act seems like a good one, from Senator Ron Wyden, as it focuses the issue where it belongs: on law enforcement. That is, rather than blaming internet companies for not magically stopping criminals, it equips law enforcement to better do its job.
The Shield Act is about stopping the sharing of nonconsensual sexual images and seems mostly fine, though I’ve seen a few concerns raised on the margins about how some of the language might go too far in criminalizing activities that shouldn’t be criminal. According to Senator Cory Booker last week, he’s been working with Senator Klobuchar on fixing those problematic parts.
And the Project Safe Childhood Act also seems perfectly fine. In many ways it complements the Invest in Child Safety Act, in that it’s directed at law enforcement and focused on getting law enforcement to be better about dealing with child sexual abuse material, coordinating with other parts of law enforcement, and submitting seized imagery to NCMEC’s CyberTipline.
But, then there’s the STOP CSAM bill. As we’ve discussed, there are some good ideas in that bill, but they’re mixed with some problematic ones. And, some of the problematic ones are a backdoor attack on encryption. Senator Dick Durbin, the author of the bill, went on a rant about Section 230 last week in trying to get the bill through on unanimous consent, which isn’t great either, and suggests some issues with the bill.
In that rant, he talks about how cell phones are killing kids because of “some crazy person on the internet.” But, um, if that’s true, it’s a law enforcement issue and “the crazy person on the internet” should face consequences. But Durbin insists that websites should somehow magically stop the “crazy person on the internet” from saying stuff. That’s a silly and mistargeted demand.
In that rant, he also talked about the importance of “turning the lawyers loose” on the big tech companies to sue them for what their users posted.
You’d think that would be a reason for a company like Google to resist STOP CSAM, knowing it’ll face vexatious litigation. But, for some reason, it is now supporting the bill.
Lots of people have been saying that Durbin has a new, better version of STOP CSAM, and I’ve seen a couple drafts that are being passed around. But the current version of the bill still has many problems. Maybe Google is endorsing a fixed version of the bill, but if so, it sure would be nice if the rest of us could see it.
In the meantime, Durbin put out a gloating press release about Google’s support.
“For too long, Big Tech used every trick in the book to halt legislation holding social media companies accountable, while still trying to win the PR game. I’m glad to see that some tech companies are beginning to make good on their word to work with Congress on meaningful solutions to keep children safe online. I encourage other tech companies to follow Google’s move by recognizing that the time for Big Tech to police itself is over and work with Congress to better protect kids.”
Can’t say I understand Google’s reasons for caving here. I’m sure there’s some political calculus in doing so. And maybe they have the inside scoop on a fixed version of Durbin’s bill. But to do so the day after he talks about “turning the lawyers loose” on websites for failing to magically stop people from saying stuff… seems really strange.
It seems increasingly clear that both Meta and Google, with their buildings full of lawyers, have decided that the strategic political move is to embrace some of these laws, even as they know they’ll get hit with dumb lawsuits over them. They feel they can handle the lawsuits and, as a bonus, they know that smaller upstart competitors will probably have a harder time.
Still, there was a time when Google stood on principle and fought bad bills. That time seems to have passed.
Filed Under: dick durbin, encryption, liability, section 230, stop csam
Companies: google
Once Again, Ron Wyden Had To Stop Bad “Protect The Children” Internet Bills From Moving Forward
from the saving-the-internet dept
Senator Ron Wyden is a one-man defense against horrible bills moving forward in the Senate. Last month, he stopped Josh Hawley from moving a very problematic STOP CSAM bill forward, and now he’s had to do it again.
A (bipartisan) group of senators traipsed to the Senate floor Wednesday evening. They tried to skip the line and quickly move some bad bills forward by asking for unanimous consent. Unless someone is there to object, a unanimous consent request effectively moves the bill forward, skipping the normal debate. Traditionally, this process is used for moving non-controversial bills, but lately it’s been used to grandstand about stupid bills.
Senator Lindsey Graham announced his intention to pull this kind of stunt on bills that he pretends are about “protecting the children” but which do no such thing in reality. Instead of it being just him, he rounded up a bunch of senators and they all pulled out the usual moral panic lines about two terrible bills: EARN IT and STOP CSAM. Both bills are designed to sound like good ideas about protecting children, but the devil is very much in the details, as both undermine end-to-end encryption while assuming that if you just put liability on websites, they’ll magically make child predators disappear.
And while both bills pretend not to attack encryption — and include some language about how they’re not intended to do so — both of them leave open the possibility that the use of end-to-end encryption will be used as evidence against websites for bad things done on those websites.
But, of course, as is the standard for the group of grandstanding senators, they present these bills as (1) perfect and (2) necessary to “protect the children.” The problem is that the bills are actually (1) ridiculously problematic and (2) will actually help bad people online in making end-to-end encryption a liability.
The bit of political theater kicked off with Graham having Senators Grassley, Cornyn, Durbin, Klobuchar, and Hawley talk on and on about the poor kids online. Notably, none of them really talked about how their bills worked (because that would reveal how the bills don’t really do what they pretend they do). Durbin whined about Section 230, misleadingly and mistakenly blaming it for the fact that bad people exist. Hawley did the thing that he loves doing, in which he does his mock “I’m a big bad Senator taking on those evil tech companies” schtick, while flat out lying about reality.
But Graham closed it out with the most misleading bit of all:
In 2024, here’s the state of play: the largest companies in America — social media outlets that make hundreds of billions of dollars a year — you can’t sue if they do damage to your family by using their product because of Section 230
This is a lie. It’s a flat out lie and Senator Graham and his staffers know this. All Section 230 says is that if there is content on these sites that violates the law, the liability goes to whoever created the content. If the features of the site itself “do damage,” then you can absolutely sue the company. But no one is actually complaining about the features. They’re complaining about content. And the liability for the content has to go to whoever created it.
The problem here is that Graham and all the other senators want to hold companies liable for the speech of users. And that is a very, very bad idea.
Now these platforms enrich our lives, but they destroy our lives.
These platforms are being used to bully children to death.
They’re being used to take sexual images and voluntarily and voluntarily obtain and sending them to the entire world. And there’s not a damn thing you can do about it. We had a lady come before the committee, a mother saying that her daughter was on a social media site that had an anti-bullying provisions. They complained three times about what was happening to her daughter. She killed herself. They went to court. They got kicked out by section 230.
I don’t know the details of this particular case, but first off, the platforms didn’t bully anyone. Other people did. Put the blame on the people actually causing the harm. Separately, and importantly, you can’t blame someone’s suicide on someone else when no one knows the real reasons. Otherwise, you actually encourage increased suicides, as it gives people an ultimate way to “get back” at someone.
Senator Wyden got up and, as he did last month, made it quite clear that we need to stop child sexual abuse and predators. He talked about his bill, which would actually help on these issues by giving law enforcement the resources it needs to go after the criminals, rather than the idea of the bills being pushed that simply blame social media companies for not magically making bad people disappear.
We’re talking about criminal issues, and Senator Wyden is looking to handle it by empowering law enforcement to deal with the criminals. Senators Graham, Durbin, Grassley, Cornyn, Klobuchar, and Hawley are looking to sue tech companies for not magically stopping criminals. One of those approaches makes sense for dealing with criminal activity. And yet it’s the other one that a bunch of senators have lined up behind.
And, of course, beyond the dangerous approach of EARN IT, it inherently undermines encryption, which makes kids (and everyone) less safe, as Wyden also pointed out.
Now, the specific reason I oppose EARN It is it will weaken the single strongest technology that protects children and families online. Something known as strong encryption.
It’s going to make it easier to punish sites that use encryption to secure private conversations and personal devices. This bill is designed to pressure communications and technology companies to scan users messages.
I, for one, don’t find that a particularly comforting idea.
Now, the sponsors of the bill have argued — and Senator Graham’s right, we’ve been talking about this a while — that their bills don’t harm encryption. And yet the bills allow courts to punish companies that offer strong encryption.
In fact, while it includes some they language about protecting encryption, it explicitly allows encryption to be used as evidence for various forms of liability. Prosecutors are going to be quick to argue that deploying encryption was evidence of a company’s negligence preventing the distribution of CSAM, for example.
The bill is also designed to encourage scanning of content on users phones or computers before information is sent over the Internet which has the same consequences as breaking encryption. That’s why a hundred civil society groups including the American Library Association — people then I think all of us have worked for — Human Rights Campaign, the list goes… Restore the Fourth. All of them oppose this bill because of its impact on essential security.
Weakening encryption is the single biggest gift you can give to these predators and these god-awful people who want to stalk and spy on kids. Sexual predators are gonna have a far easier time stealing photographs of kids, tracking their phones, and spying on their private messages once encryption is breached. It is very ironic that a bill that’s supposed to make kids safer would have the effect of threatening the privacy and security of all law-abiding Americans.
My alternative — and I want to be clear about this because I think Senator Graham has been sincere about saying that this is a horrible problem involving kids. We have a disagreement on the remedy. That’s what is at issue.
And what I want us to do is to focus our energy on giving law enforcement officials the tools they need to find and prosecute these monstrous criminals responsible for exploiting kids and spreading vile abuse materials online.
That can help prevent kids from becoming victims in the first place. So I have introduced to do this: the Invest in Child Safety Act to direct five billion dollars to do three specific things to deal with this very urgent problem.
Graham then gets up to respond and lies through his teeth:
There’s nothing in this bill about encryption. We say that this is not an encryption bill. The bill as written explicitly prohibits courts from treating encryption as an independent basis for liability.
We’re agnostic about that.
That’s not true. As Wyden said, the bill has some hand-wavey language about not treating encryption as an independent basis for liability, but it does explicitly allow for encryption to be one of the factors that can be used to show negligence by a platform, as long as you combine it with other factors.
Section (7)(A) is the hand-wavey bit saying you can’t use encryption as “an independent basis” to determine liability, but (7)(B) effectively wipes that out by saying nothing in that section about encryption “shall be construed to prohibit a court from considering evidence of actions or circumstances described in that subparagraph.” In other words, you just have to add a bit more, and then can say “and also, look, they use encryption!”
And another author of the bill, Senator Blumenthal, has flat out said that EARN IT is deliberately written to target encryption. He falsely claims that companies would “use encryption… as a ‘get out of jail free’ card.” So, Graham is lying when he says encryption isn’t a target of the bill. One of his co-authors on the bill admits otherwise.
Graham went on:
What we’re trying to do is hold these companies accountable by making sure they engage in best business practices. The EARN IT acts simply says for you to have liability protections, you have to prove that you’ve tried to protect children. You have to earn it. You’re just not given to you. You have to have the best business practices in place that voluntary commissions that lay out what would be the best way to harden these sites against sexually exploitation. If you do those things you get liability, it’s just not given to you forever. So this is not about encryption.
As to your idea. I’d love to talk to you about it. Let’s vote on both, but the bottom line here is there’s always a reason not to do anything that holds these people liable. That’s the bottom line. They’ll never agree to any bill that allows you to get them in court ever. If you’re waiting on these companies to give this body permission for the average person to sue you. It ain’t never going to happen.
So… all of that is wrong. First of all, the very original version of the EARN IT Act did have provisions to make companies “earn” 230 protections by following best practices, but that’s been out of the bill for ages. The current version has no such thing.
The bill does set up a commission to create best practices, but (unlike the earlier versions of the bill) those best practice recommendations have no legal force or requirements. And there’s nothing in the bill that says if you follow them you get 230 protections, and if you don’t, you don’t.
Does Senator Graham even know which version of the bill he’s talking about?
Instead, the bill outright modifies Section 230 (before the Commission even researches best practices) and says that people can sue tech companies for the distribution of CSAM. This includes using the offering of encryption as evidence to support the claims that CSAM distribution was done because of “reckless” behavior by a platform.
Either Senator Graham doesn’t know what bill he’s talking about (even though it’s his own bill) or he doesn’t remember that he changed the bill to do something different than it used to try to do.
It’s ridiculous that Senator Wyden remains the only senator who sees this issue clearly and is willing to stand up and say so. He’s the only one who seems willing to block the bad bills while at the same time offering a bill that actually targets the criminals.
Filed Under: amy klobuchar, chuck grassley, csam, dick durbin, earn it, encryption, invest in child safety, john cornyn, josh hawley, lindsey graham, ron wyden, shield act, stop csam, unanimous consent
Josh Hawley Rages Ignorantly And Misleadingly In Trying To Push Encryption-Destroying STOP CSAM Bill
from the where-is-the-STOP-HAWLEY-act? dept
Every week it’s some other dumb thing going on in the Senate. On Tuesday Senator Josh Hawley went to the (mostly empty) Senate floor to “seek unanimous consent” for the STOP CSAM bill. That’s basically a process to rush the bill forward before it’s ready.
We’ve written about STOP CSAM before. Despite its name, it won’t actually “stop CSAM,” but it could do a lot of other harm, including effectively undermining end-to-end encryption by allowing plaintiffs to argue that companies that employ encryption are “intentionally, knowingly, recklessly, or negligently” enabling the sharing of child sexual abuse material.
Of course Hawley didn’t actually engage with any of the underlying bits about STOP CSAM. He just wanted a stage in which he could ignorantly bleat on about the evils of “big tech” and how Section 230 is a problem. You can see the entire thing here. Almost everything Hawley says is either wrong or misleading.
Hawley starts out, as is standard operating procedure for grandstanding politicians, by demanding we all just “think about the children” he is using as props for political gain. Is he looking out for their health and safety with better schooling and better healthcare? Is he looking to help protect them from the threat of school shootings? Of course not. He’s mad that the internet exists.
He claims that at last week’s hearing Mark Zuckerberg felt “forced to apologize to the parents there in the room” because of what he heard at that hearing. Of course, anyone who actually watched the hearing knows that the only thing that forced Zuck to apologize was Josh Hawley demanding he apologize (and also demanding Zuck give money to the people in the room, which was just weird).
Hawley also showed a chart to claim it shows just how bad the tech companies are… but the underlying report shows the opposite, as anyone with any knowledge in this space would know. It shows how the companies have gotten much better at finding and reporting CSAM on their platforms to the CyberTipline run by NCMEC. To use that chart to say it proves the companies are a problem is just flat out stupid. The companies are following the law, reporting the CSAM they find, and they’ve gotten better (through hash matching and other technologies) at finding, blocking, and reporting this material. That should be a cause for celebration. Instead, people like Hawley misrepresent it as showing the companies aren’t doing enough.
What it really shows, though, is that the DOJ isn’t doing enough. NCMEC then reports this information to the DOJ who can go after the traffickers, but the DOJ generally ignores much of this. This is why we think Senator Ron Wyden’s bill to actually get the DOJ to do its job and to give NCMEC more resources makes much more sense. But Hawley isn’t pushing that bill. He’s pushing this terrible one.
Why? Because he hates Section 230 and is deliberately misrepresenting it, pretending it’s what allows companies to avoid being held liable.
Hawley then goes on to complain that Section 230 is the problem. But… that’s just factually false. Section 230 directly and explicitly exempts federal criminal law, including laws relating to the sexual exploitation of children. And every platform knows full well that if it becomes aware of CSAM, it is in legal deep shit if it does not report it to the CyberTipline as soon as possible. That’s got literally nothing to do with Section 230. And changing Section 230 won’t change any of that.
Hawley’s rant about 230 is just fundamentally stupid:
Oh and the tech executives, they know all about it, Mr. President, and they’re not doing a thing about it. Why? Because they are not accountable. Here’s the bottom line. This is the only industry in the country that can make a product that will literally kill you and, if it does, you cannot do anything about it. If it kills your child, you can’t do anything about it. If it harms you, you can’t do anything about it. Just think about it for a second.
In this country, if a Coca Cola manufacturer makes a bottle that explodes in your hands, you can sue them. If a drug company makes drugs that are full of adulterated products that cause harms that are not disclosed that kill people, you can sue them. If an automobile maker makes cars that explode, you can sue ‘em. Not these companies. No (obnoxious fake chuckle), not these companies. These companies have a special immunity from suit.
So, basically all of that is bullshit. 100% bullshit. Unadulterated, harmful bullshit. Can we sue Hawley over that? Of course not. Because he has immunity. Why? Well, first, as an elected official he has a special immunity, reserved just for Congress, under the Speech or Debate Clause.
But also, because of the 1st Amendment. He is free to mislead the public with impunity because the 1st Amendment allows it.
To respond to the specifics here: (1) Section 230’s immunity does not apply to CSAM. (2) The immunity that is provided to all internet users and websites (not just some special industry) only applies to holding one party liable for a third party’s speech. (3) The examples of physical harm are totally inapplicable here. If Facebook literally exploded and killed someone you could still sue. The problem is that Facebook doesn’t explode. It’s not “Facebook” that is causing the harm here, it’s users on Facebook and their speech. And that’s why the 1st Amendment issue comes back up again. Which Hawley pretends not to understand. Oh, and also the claim that the tech industry isn’t “doing anything” is proven directly and obviously false by his chart in the image above, which shows they are reporting tons of accounts to NCMEC.
Thankfully, Senator Ron Wyden stood up to object to Hawley, and did so passionately. He highlighted how terrible CSAM is and the “monsters” behind it. But noted that the STOP CSAM bill does not actually help. Indeed, the attack on encryption in STOP CSAM would put people at much greater risk by removing important protections from everyone.
He also highlighted his own bill (which again, everyone is ignoring) which actually would help protect kids.
Hawley then got back up and claimed that the bill explicitly says it doesn’t outlaw encryption, but that’s incredibly misleading. The bill pretends to protect encryption, just like similar language in the EARN IT Act did. It says that the bill shouldn’t be interpreted to impact encryption, but it still allows plaintiffs to point to encryption as evidence of negligence, thereby making it a liability to offer encrypted communications.
Wyden followed up by pointing out that it’s weird that Hawley is even pushing this bill right now, when Senator Durbin (who is the author/sponsor of the bill) is currently going around shopping a greatly amended version of the bill (Hawley was pushing for unanimous consent on the old version).
Of course, this was all theater. Hawley knew it wasn’t going to happen. He just wanted airtime to lie about the tech industry and about Section 230. Because better to do that kind of grandstanding than to deal with his own home state press calling out his support for insurrectionists, or how he’s making the problem at the border worse himself because he thinks it will harm President Biden.
Hawley, of course, is not a real leader. He needs to deflect and distract. And that’s all this little show was. That he’s using children as props and lying about the law is a small consequence for him as he tries to lead a populist charge to hide his own failings.
Filed Under: encryption, grandstanding, josh hawley, protect the children, ron wyden, section 230, stop csam
As Congress Rushes To Force Websites To Age Verify Users, Its Own Think Tank Warns There Are Serious Pitfalls
from the a-moral-panic-for-the-ages dept
We’re in the midst of a full-blown moral panic claiming that the internet is “dangerous” for children, despite little evidence actually supporting that hypothesis, and a ton arguing the opposite is true. States are passing bad laws, and Congress has a whole stack of dangerous “for the children” laws, from KOSA to the RESTRICT Act to STOP CSAM to EARN IT to the Cooper Davis Act to the “Protecting Kids on Social Media Act” to COPPA 2.0. There are more as well, but these are the big ones that all seem to be moving through Congress.
The NY Times has a good article reminding everyone that we’ve been through this before, specifically with Reno v. ACLU, a case we’ve covered many times before. In the 1990s, a similar evidence-free moral panic about the internet and kids was making the rounds, much of it driven by sensational headlines and stories that were later debunked. But Congress, always happy to announce they’ve “protected the children,” passed Senator James Exon’s Communications Decency Act, which he claimed would clean up all the smut he insisted was corrupting children (he famously carried around a binder full of porn that he claimed was from the internet to convince other Senators).
You know what happened next: the Supreme Court (thankfully) remembered that the 1st Amendment existed, and noted that it also applied to the internet, and Exon’s Communications Decency Act (everything except for the Cox/Wyden bit which is now known as Section 230) got tossed out as unconstitutional.
It remains bizarre to me that all these members of Congress today don’t seem to recognize that the ruling in Reno v. ACLU exists, and how all their laws seem to ignore it. But perhaps that’s because it happened 25 years ago and their memories don’t stretch back that far.
But, the NY Times piece ends with something a bit more recent: it points to an interesting Congressional Research Service report that basically tells Congress that any attempt to pass a law targeting minors online will have massive consequences beyond what these elected officials intend.
As we’ve discussed many times in the past, the Congressional Research Service (CRS) is Congress’ in-house think tank, which is well known for producing non-political, very credible research, which is supposed to better inform Congress, and perhaps stop them from passing obviously problematic bills that they don’t understand.
The report focuses on age verification techniques, which most of these laws will require (even though some of them pretend not to: the liability for failure will drive many sites to adopt verification anyway). But as the CRS notes, it’s just not that easy. Almost every solution out there has real (and serious) problems, either in how well it works or in what it means for user privacy:
Providers of online services may face different challenges using photo ID to verify users’ ages, depending on the type of ID used. For example, requiring a government-issued ID might not be feasible for certain age groups, such as those younger than 13. In 2020, approximately 25% and 68% of individuals who were ages 16 and 19, respectively, had a driver’s license. This suggests that most 16 year olds would not be able to use an online platform that required a driver’s license. Other forms of photo ID, such as student IDs, could expand age verification options. However, it may be easier to falsify a student ID than a driver’s license. Schools do not have a uniform ID system, and there were 128,961 public and private schools—including prekindergarten through high school—during the 2019-2020 school year, suggesting there could be various forms of IDs that could make it difficult to determine which ones are fake.
Another option could be creating a national digital ID for all individuals that includes age. Multiple states are exploring digital IDs for individuals. Some firms are using blockchain technologies to identify users, such as for digital wallets and for individuals’ health credentials. However, a uniform national digital ID system does not exist in the United States. Creating such a system could raise privacy and security concerns, and policymakers would need to determine who would be responsible for creating and maintaining the system, and verifying the information on it—responsibilities currently reserved to the states.
Several online service providers are relying on AI to identify users’ ages, such as the services offered by Yoti, prompting firms to offer AI age verification services. For example, Intellicheck uses facial biometric data to validate an ID by matching it to the individual. However, AI technologies have raised concerns about potential biases and a lack of transparency. For example, the accuracy of facial analysis software can depend on the individual’s gender, skin color, and other factors. Some have also questioned the ability of AI software to distinguish between small differences in age, particularly when individuals can use make-up and props to appear older.
Companies can also rely on data obtained directly from users or from other sources, such as data brokers. For example, a company could check a mobile phone’s registration information or analyze information on the user’s social media account. However, this could heighten data privacy concerns regarding online consumer data collection.
In other words, just as the French data protection agency found, there is no age verification solution out there that is actually safe for people to rely on. Of course, that hasn’t stopped moral panicky French lawmakers from pushing forward with a requirement for one anyway, and it looks like the US Congress will similarly ignore its own think tank, and Supreme Court precedent, and push forward with their own versions as well.
Hopefully, the Supreme Court actually remembers how all this works.
Filed Under: age verification, children, crs, free speech, kosa, protect the children, restrict act, stop csam