csam – Techdirt

Durov’s Arrest Details Released, Leaving More Questions Than Answers

from the still-concerning dept

Is the arrest of Pavel Durov, founder of Telegram, a justified move to combat illegal activities, or is it a case of dangerous overreach that threatens privacy and free speech online? We had hoped that when French law enforcement released the details of the charges we’d have a better picture of what happened. Instead, we’re actually just left with more questions and concerns.

Earlier today we wrote about the arrest and how it already raised a lot of questions that didn’t have easy answers. Soon after that post went out, the Tribunal Judiciaire de Paris released a press release with some more details about the investigation (in both French and English). All it does is leave most of the questions open, which might suggest they don’t have very good answers.

First, the press release describes these as “the context of the judicial investigation,” which may differ from what Durov is eventually charged with, even though the issues are listed as “charges.”

I would bucket the list of charges into four categories, each of which raises concerns. If I had to put them in order of greatest concern to least, it would be as follows:

  1. Stuff about encryption. The last three charges are all variations on “providing a cryptology service/tool” without some sort of “prior declaration” or “certified declaration.” Apparently, France (like some other countries) has certain import/export controls on encryption. It appears they’re accusing Durov of violating those by not going through the official registration process. But, here, it’s hard not to see that as totally pretextual: an excuse to arrest Durov over other stuff they don’t like him doing.
  2. “Complicity” around a failure to moderate illegal materials. There are a number of charges around this: complicity in enabling “illegal transactions,” in “possessing” and “distributing” CSAM, in selling illegal drugs and hacking tools, and in organized fraud. But what is the standard for “complicity” here? This is where it gets worrisome. If it’s just a failure to proactively moderate, that seems very problematic. If it’s ignoring direct reports of illegal behavior, then it may be understandable. If it’s more directly and knowingly assisting criminal behavior, then things get more serious. But the lack of details here makes me worry it’s one of the earlier options.
  3. Refusal to cooperate with law enforcement demands for info: This follows on from my final point in number two. There’s a suggestion in the charges (the second one) that Telegram potentially ignored demands from law enforcement. It says there was a “refusal to communicate, at the request of competent authorities, information or documents necessary for carrying out and operating interceptions allowed by law.” This could be about encryption, and a refusal to provide info they didn’t have, or about not putting in a backdoor. If it’s either of those, that would be very concerning. However, if it’s just “they didn’t respond to lawful subpoenas/warrants/etc.” that… could be something that’s more legitimate.
  4. Finally, money laundering. Again, this one is a bit unclear, but it says “laundering of the proceeds derived from organized group’s offences and crimes.” It’s difficult to know how serious any of this is, as that could represent something legitimate, or it could be French law enforcement saying “and they profited off all of this!” We’ve seen charges in other contexts where the laundering claims are kind of thrown in. Details could really matter here.

In the end, though, a lot of this does seem potentially very problematic. So far, there’s been no revelation of anything that makes me say “oh, well, that seems obviously illegal.” A lot of the things listed in the charge sheet are things that lots of websites and communications providers could be said to have done themselves, though perhaps to a different degree.

So we still don’t really have enough details to know if this is a ridiculous arrest, but it does seem to be trending towards that so far. Yes, some will argue that Durov somehow “deserves” this for hosting bad content, but it’s way more complicated than that.

I know from the report that Stanford put out earlier this year that Telegram does not report CSAM to NCMEC at all. That is very stupid. I would imagine Telegram would argue that as a non-US company it doesn’t have to abide by such laws. These charges are in France rather than the US, but it still seems bad that the company does not report any CSAM to the generally agreed-upon organization that handles such reports, and to which companies operating in the US have a legal requirement to report.

But, again, there are serious questions about where you draw these lines. CSAM is content that is outright illegal. But some other stuff may just be material that some people dislike. If the investigation is focused just on the outright illegal content that’s one thing. If it’s not, then this starts to look worse.

On top of that, as always, are the intermediary liability questions, where the question should be how much responsibility a platform has for its users’ use of the system. The list of “complicity” in various bad things worries me because every platform has some element of that kind of content going on, in part because it’s impossible to stop entirely.

And, finally, as I mentioned earlier today, it still feels like many of these issues would normally be handled through a civil proceeding, perhaps brought by the EU, rather than a criminal prosecution by a court in France.

So in the end, while it’s useful to see the details of this investigation, and it makes me lean ever so slightly in the direction of thinking these potential charges go too far, we’re still really missing many of the details. Nothing released today has calmed the concerns that this is overreach, but nothing has made it clear that it definitely is overreach either.

Filed Under: complicity, content moderation, csam, encryption, france, law enforcement, pavel durov
Companies: telegram

Suing Apple To Force It To Scan iCloud For CSAM Is A Catastrophically Bad Idea

from the this-would-make-it-harder-to-catch-criminals dept

There’s a new lawsuit in Northern California federal court that seeks to improve child safety online but could end up backfiring badly if it gets the remedy it seeks. While the plaintiff’s attorneys surely mean well, they don’t seem to understand that they’re playing with fire.

The complaint in the putative class action asserts that Apple has chosen not to invest in preventive measures to keep its iCloud service from being used to store child sex abuse material (CSAM) while cynically rationalizing the choice as pro-privacy. This decision allegedly harmed the Jane Doe plaintiff, a child whom two unknown users contacted on Snapchat to ask for her iCloud ID. They then sent her CSAM over iMessage and got her to create and send them back CSAM of herself. Those iMessage exchanges went undetected, the lawsuit says, because Apple elected not to employ available CSAM detection tools, thus knowingly letting iCloud become “a safe haven for CSAM offenders.” The complaint asserts claims for violations of federal sex trafficking law, two states’ consumer protection laws, and various torts including negligence and products liability.

Here are key passages from the complaint:

[Apple] opts not to adopt industry standards for CSAM detection… [T]his lawsuit … demands that Apple invest in and deploy means to comprehensively … guarantee the safety of children users. … [D]espite knowing that CSAM is proliferating on iCloud, Apple has “chosen not to know” that this is happening … [Apple] does not … scan for CSAM in iCloud. … Even when CSAM solutions … like PhotoDNA[] exist, Apple has chosen not to adopt them. … Apple does not proactively scan its products or services, including storages [sic] or communications, to assist law enforcement to stop child exploitation. …

According to [its] privacy policy, Apple had stated to users that it would screen and scan content to root out child sexual exploitation material. … Apple announced a CSAM scanning tool, dubbed NeuralHash, that would scan images stored on users’ iCloud accounts for CSAM … [but soon] Apple abandoned its CSAM scanning project … it chose to abandon the development of the iCloud CSAM scanning feature … Apple’s Choice Not to Employ CSAM Detection … Is a Business Choice that Apple Made. … Apple … can easily scan for illegal content like CSAM, but Apple chooses not to do so. … Upon information and belief, Apple … allows itself permission to screen or scan content for CSAM content, but has failed to take action to detect and report CSAM on iCloud. …

[Questions presented by this case] include: … whether Defendant has performed its duty to detect and report CSAM to NCMEC [the National Center for Missing and Exploited Children]. … Apple … knew or should have known that it did not have safeguards in place to protect children and minors from CSAM. … Due to Apple’s business and design choices with respect to iCloud, the service has become a go-to destination for … CSAM, resulting in harm for many minors and children [for which Apple should be held strictly liable] … Apple is also liable … for selling defectively designed services. … Apple owed a duty of care … to not violate laws prohibiting the distribution of CSAM and to exercise reasonable care to prevent foreseeable and known harms from CSAM distribution. Apple breached this duty by providing defective[ly] designed services … that render minimal protection from the known harms of CSAM distribution. …

Plaintiff [and the putative class] … pray for judgment against the Defendant as follows: … For [an order] granting declaratory and injunctive relief to Plaintiff as permitted by law or equity, including: Enjoining Defendant from continuing the unlawful practices as set forth herein, until Apple consents under this court’s order to … [a]dopt measures to protect children against the storage and distribution of CSAM on the iCloud … [and] [c]omply with quarterly third-party monitoring to ensure that the iCloud product has reasonably safe and easily accessible mechanisms to combat CSAM….”

What this boils down to: Apple could scan iCloud for CSAM, and has said in the past that it would and that it does, but in reality it chooses not to. The failure to scan is a wrongful act for which Apple should be held liable. Apple has a legal duty to scan iCloud for CSAM, and the court should make Apple start doing so.

This theory is perilously wrong.

The Doe plaintiff’s story is heartbreaking, and it’s true that Apple has long drawn criticism for its approach to balancing multiple values such as privacy, security, child safety, and usability. It is understandable to assume that the answer is for the government, in the form of a court order, to force Apple to strike that balance differently. After all, that is how American society frequently remedies alleged shortcomings in corporate practices.

But this isn’t a case about antitrust, or faulty smartphone audio, or virtual casino apps (as in other recent Apple class actions). Demanding that a court force Apple to change its practices is uniquely infeasible, indeed dangerous, when it comes to detecting illegal material its users store on its services. That’s because this demand presents constitutional issues that other consumer protection matters don’t. Thanks to the Fourth Amendment, the courts cannot force Apple to start scanning iCloud for CSAM; even pressuring it to do so is risky. Compelling the scans would, perversely, make it way harder to convict whoever the scans caught. That’s what makes this lawsuit a catastrophically bad idea.

(The unconstitutional remedy it requests isn’t all that’s wrong with this complaint, mind. Let’s not get into the Section 230 issues it waves away in two conclusory sentences. Or how it mistakes language in Apple’s privacy policy that it “may” use users’ personal information for purposes including CSAM scanning, for an enforceable promise that Apple would do that. Or its disingenuous claim that this isn’t an attack on end-to-end encryption. Or the factually incorrect allegation that “Apple does not proactively scan its products or services” for CSAM at all, when in fact it does for some products. Let’s set all of that aside. For now.)

The Fourth Amendment to the U.S. Constitution protects Americans from unreasonable searches and seizures of our stuff, including our digital devices and files. “Reasonable” generally means there’s a warrant for the search. If a search is unreasonable, the usual remedy is what’s called the exclusionary rule: any evidence turned up through the unconstitutional search can’t be used in court against the person whose rights were violated.

While the Fourth Amendment applies only to the government and not to private actors, the government can’t use a private actor to carry out a search it couldn’t constitutionally do itself. If the government compels or pressures a private actor to search, or the private actor searches primarily to serve the government’s interests rather than its own, then the private actor counts as a government agent for purposes of the search, which must then abide by the Fourth Amendment; otherwise, the remedy is exclusion.

If the government – legislative, executive, or judiciary – forces a cloud storage provider to scan users’ files for CSAM, that makes the provider a government agent, meaning the scans require a warrant, which a cloud services company has no power to get, making those scans unconstitutional searches. Any CSAM they find (plus any other downstream evidence stemming from the initial unlawful scan) will probably get excluded, but it’s hard to convict people for CSAM without using the CSAM as evidence, making acquittals likelier. Which defeats the purpose of compelling the scans in the first place.

Congress knows this. That’s why, in the federal statute requiring providers to report CSAM to NCMEC when they find it on their services, there’s an express disclaimer that the law does not mean they must affirmatively search for CSAM. Providers of online services may choose to look for CSAM, and if they find it, they have to report it – but they cannot be forced to look.

Now do you see the problem with the Jane Doe lawsuit against Apple?

This isn’t a novel issue. Techdirt has covered it before. It’s all laid out in a terrific 2021 paper by Jeff Kosseff. I have also discussed this exact topic over and over and over and over and over and over again. As my latest publication (based on interviews with dozens of people) describes, all the stakeholders involved in combating online CSAM – tech companies, law enforcement, prosecutors, NCMEC, etc. – are excruciatingly aware of the “government agent” dilemma, and they all take great care to stay very far away from potentially crossing that constitutional line. Everyone scrupulously preserves the voluntary, independent nature of online platforms’ decisions about whether and how to search for CSAM.

And now here comes this lawsuit like the proverbial bull in a china shop, inviting a federal court to destroy that carefully maintained and exceedingly fragile dynamic. The complaint sneers at Apple’s “business choice” as a wrongful act to be judicially reversed rather than something absolutely crucial to respect.

Fourth Amendment government agency doctrine is well-established, and there are numerous cases applying it in the context of platforms’ CSAM detection practices. Yet Jane Doe’s counsel don’t appear to know the law. For one, their complaint claims that “Apple does not proactively scan its products or services … to assist law enforcement to stop child exploitation.” Scanning to serve law enforcement’s interests would make Apple a government agent. Similarly, the complaint claims Apple “has failed to take action to detect and report CSAM on iCloud,” and asks “whether Defendant has performed its duty to detect and report CSAM to NCMEC.” This conflates two critically distinct actions. Apple does not and cannot have any duty to detect CSAM, as expressly stated in the statute imposing a duty to report CSAM. It’s like these lawyers didn’t even read the entire statute, much less any of the Fourth Amendment jurisprudence that squarely applies to their case.

Any competent plaintiff’s counsel should have figured this out before filing a lawsuit asking a federal court to make Apple start scanning iCloud for CSAM, thereby making Apple a government agent, thereby turning the compelled iCloud scans into unconstitutional searches, thereby making it likelier for any iCloud user who gets caught to walk free, thereby shooting themselves in the foot, doing a disservice to their client, making the situation worse than the status quo, and causing a major setback in the fight for child safety online.

The reason nobody’s filed a lawsuit like this against Apple to date, despite years of complaints from left, right, and center about Apple’s ostensibly lackadaisical approach to CSAM detection in iCloud, isn’t because nobody’s thought of it before. It’s because they thought of it and they did their fucking legal research first. And then they backed away slowly from the computer, grateful to have narrowly avoided turning themselves into useful idiots for pedophiles. But now these lawyers have apparently decided to volunteer as tribute. If their gambit backfires, they’ll be the ones responsible for the consequences.

Riana Pfefferkorn is a policy fellow at Stanford HAI who has written extensively about the Fourth Amendment’s application to online child safety efforts.

Filed Under: 4th amendment, class action, csam, evidence, proactive scanning, scanning
Companies: apple

Techdirt Podcast Episode 390: The Challenges Facing NCMEC’s CyberTipline

from the looking-closer dept

The National Center for Missing & Exploited Children‘s CyberTipline is a central component of the fight against child sexual abuse material (CSAM) online, but there have been a lot of questions about how well it truly works. A recent report from the Stanford Internet Observatory, which we’ve published two recent posts about, provides an extremely useful window into the system. This week, we’re joined by two of the report’s authors, Shelby Grossman and Riana Pfefferkorn, to dig into the content of the report and the light it sheds on the challenges faced by the CyberTipline.

Follow the Techdirt Podcast on Soundcloud, subscribe via Apple Podcasts or Spotify, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.

Filed Under: csam, cybertipline, riana pfefferkorn, shelby grossman
Companies: ncmec

The REPORT Act: Enhancing Online Child Safety Without the Usual Congressional Nonsense

from the a-good-bill?-for-the-children? dept

For years and years, Congress has been pushing a parade of horrible “protect the children online” bills that seem to somehow get progressively worse each time. I’m not going through the entire list of them, because it’s virtually endless.

One of the most frustrating things about those bills, and the pomp and circumstance around them, is that they ignore the simpler, more direct things that Congress could do that would actually help.

Just last week, we wrote about the Stanford Internet Observatory’s big report on the challenges facing the CyberTipline, run by the National Center for Missing & Exploited Children (NCMEC). We wrote two separate posts about the report (and also discussed it on the latest episode of our new podcast, Ctrl-Alt-Speech) because there was so much useful information in there. As we noted, there are real challenges in making the reporting of child sexual abuse material (CSAM) work better, and it’s not because people don’t want to help. It’s actually because of a set of complex issues that are not easily solvable (read the report or my articles for more details).

But there were still a few clear steps that could be taken by Congress to help.

This week, the REPORT Act passed Congress, and it includes… a bunch of those straightforward, common sense things that should help improve the CyberTipline process. The key bit is allowing the CyberTipline to modernize a bit, including letting it use cloud storage. Until now, no cloud storage vendor could work with NCMEC, out of fear that it would face criminal liability for “hosting CSAM.”

This bill fixes that, and should enable NCMEC to make use of some better tools and systems, including better classifiers, which are becoming increasingly important.

There are also provisions letting victims, and parents of victims, report CSAM involving the child directly to NCMEC, which can be immensely helpful in trying to stop the spread of some content (and in focusing some law enforcement responses).

There are also some technical fixes that require platforms to retain certain records for a longer period of time. This was another important point that was highlighted in the Stanford report. Given the flow of information and prioritization, sometimes by the time law enforcement realized it should get a warrant to get more info from a platform, the platform would have already deleted it as required under existing law. Now that time period is extended to give law enforcement a bit more time.

The one bit that we’ll have to see how it works is that it extends the reporting requirements for social media to include violations of 18 USC 1591, which is the law against sex trafficking. Senator Marsha Blackburn, who is the co-author of the bill, is claiming that this means that “big tech companies will now be required to report when children are being trafficked, groomed or enticed by predators.”


So, it’s possible I’m misreading the law (and how it works with existing laws…) but I see nothing limiting this to “big tech.” It appears to apply to any “electronic communication service provider or remote computing service.”

Also, given that Marsha Blackburn appears to consider “grooming” to include things like LGBTQ content in schools, I worried that this would become a backdoor for making all internet websites “report” such content to NCMEC, which would flood the reporting system with utter nonsense. Thankfully, 1591 includes some pretty specific definitions of sex trafficking that do not match up with Blackburn’s definition. So she’ll get the PR victory among nonsense peddlers for pretending that the law will lead to the reporting of the non-grooming that she insists is grooming.

And, of course, while this bill was actually good (and it’s surprising to see Blackburn on a good internet bill!) it’s not going to stop her from continuing to push KOSA and other nonsense moral panic “protect the children” bills that will actually do real harm.

Filed Under: csam, cybertipline, jon ossoff, marsha blackburn, modernization, report act, sex trafficking
Companies: ncmec

The Problems Of The NCMEC CyberTipline Apply To All Stakeholders

from the no-easy-answers dept

The failures of the NCMEC CyberTipline to combat child sexual abuse material (CSAM) as well as it could are extremely frustrating. But as you look at the details, you realize there just aren’t any particularly easy fixes. While there are a few areas that could improve things at the margin, the deeper you look, the more challenging the whole setup is. There aren’t any easy answers.

And that sucks, because Congress and the media often expect easy answers to complex problems. And that might not be possible.

This is the second post about the Stanford Internet Observatory’s report on the NCMEC CyberTipline, which is the somewhat useful, but tragically limited, main way that investigations of child sexual abuse material (CSAM) online get started. In the first post, we discussed the structure of the system, and how the incentive structure regarding law enforcement is a big part of what’s making the system less impactful than it otherwise might be.

In this post, I want to dig in a little more about the specific challenges in making the CyberTipline work better.

The Constitution

I’m not saying that the Constitution is a problem, but it represents a challenge here. In the first post, I briefly mentioned Jeff Kosseff’s important article about how the Fourth Amendment and the structure of NCMEC makes things tricky, but it’s worth digging in a bit here to understand the details.

The US government set up NCMEC as a private non-profit in part because, if a government agency were doing this work, there would be significant Fourth Amendment concerns about whether the evidence it receives was collected without a warrant. If it’s a government agency, then the law cannot require companies to hand over the info without a warrant.

So, Congress did a kind of two-step dance here: they set up this “private” non-profit, and then created a law that requires companies that come across CSAM online to report it to the organization. And all of this seems to rely on a kind of fiction that if we pretend NCMEC isn’t a government agent, then there’s no 4th Amendment issue.

From the Stanford report:

The government agent doctrine explains why Section 2258A allows, but does not require, online platforms to search for CSAM. Indeed, the statute includes an express disclaimer that it does not require any affirmative searching or monitoring. Many U.S. platforms nevertheless proactively monitor their services for CSAM, yielding millions of CyberTipline reports per year. Those searches’ legality hinges on their voluntariness. The Fourth Amendment prohibits unreasonable searches and seizures by the government; warrantless searches are typically considered unreasonable. The Fourth Amendment doesn’t generally bind private parties, however the government may not sidestep the Fourth Amendment by making a private entity conduct a search that it could not constitutionally do itself. If a private party acts as the government’s “instrument or agent” rather than “on his own initiative” in conducting a search, then the Fourth Amendment does apply to the search. That’s the case where a statute either mandates a private party to search or “so strongly encourages a private party to conduct a search that the search is not primarily the result of private initiative.” And it’s also true in situations where, with the government’s knowledge or acquiescence, a private actor carries out a search primarily to assist the government rather than to further its own purposes, though this is a case-by-case analysis for which the factors evaluated vary by court.

Without a warrant, searches by government agents are generally unconstitutional. The usual remedy for an unconstitutional search is for a court to throw out all evidence obtained as a result of it (the so-called “exclusionary rule”). If a platform acts as a government agent when searching a user’s account for CSAM, there is a risk that the resulting evidence could not be introduced against the user in court, making a conviction (or plea bargain) harder for the prosecution to obtain. This is why Section 2258A does not and could not require online platforms to search for CSAM: it would be unconstitutional and self-defeating.

In CSAM cases involving CyberTipline reports, defendants have tried unsuccessfully to characterize platforms as government agents whose searches were compelled by Section 2258A and/or by particular government agencies or investigators. But courts, pointing to the statute’s express disclaimer language (and, often, the testimony of investigators and platform employees), have repeatedly held that platforms are not government agents and their CSAM searches were voluntary choices motivated mainly by their own business interests in keeping such repellent material off their services.

So, it’s quite important that the service providers that are finding and reporting CSAM are not seen as agents of the government. It would destroy the ability to use that evidence in prosecuting cases. That’s important. And, as the report notes, it’s also why it would be a terrible idea to require social media to proactively try to hunt down CSAM. If the government required it, it would effectively light all that evidence on fire and prevent using it for prosecution.

That said, the courts (including in a ruling by Neil Gorsuch while he was on the appeals court) have made it clear that, while platforms may not be government agents, it’s pretty damn clear that NCMEC and the CyberTipline are. And that creates some difficulties.

In a landmark case called Ackerman, one federal appeals court held that NCMEC is a “governmental entity or agent.” Writing for the Tenth Circuit panel, then-judge Neil Gorsuch concluded that NCMEC counts as a government entity in light of NCMEC’s authorizing statutes and the functions Congress gave it to perform, particularly its CyberTipline functions. Even if NCMEC isn’t itself a governmental entity, the court continued, it acted as an agent of the government in opening and viewing the defendant’s email and four attached images that the online platform had (as required) reported to NCMEC. The court ruled that those actions by NCMEC were a warrantless search that rendered the images inadmissible as evidence. Ackerman followed a trial court-level decision, Keith, which had also deemed NCMEC a government agent: its review of reported images served law enforcement interests, it operated the CyberTipline for public not private interests, and the government exerts control over NCMEC including its funding and legal obligations. As an appellate-level decision, Ackerman carries more weight than Keith, but both have proved influential.

The private search doctrine is the other Fourth Amendment doctrine commonly raised in CSAM cases. It determines what the government or its agents may view without a warrant upon receiving a CyberTipline report from a platform. As said, the Fourth Amendment generally does not apply to searches by private parties. “If a private party conducted an initial search independent of any agency relationship with the government,” the private search doctrine allows law enforcement (or NCMEC) to repeat the same search so long as they do not exceed the original private search’s scope. Thus, if a platform reports CSAM that its searches had flagged, NCMEC and law enforcement may open and view the files without a warrant so long as someone at the platform had done so already. The CyberTipline form lets the reporting platform indicate which attached files it has reviewed, if any, and which files were publicly available.

For files that were not opened by the platform (such as where a CyberTipline submission is automated without any human review), Ackerman and a 2021 Ninth Circuit case called Wilson hold that the private search exception does not apply, meaning the government or its agents (i.e., NCMEC) may not open the unopened files without a warrant. Wilson disagreed with the position, adopted by two other appeals-court decisions, that investigators’ warrantless opening of unopened files is permissible if the files are hash matches for files that had previously been viewed and confirmed as CSAM by platform personnel. Ackerman concluded by predicting that law enforcement “will struggle not at all to obtain warrants to open emails when the facts in hand suggest, as they surely did here, that a crime against a child has taken place.”

To sum up: Online platforms’ compliance with their CyberTipline reporting obligations does not convert them into government agents so long as they act voluntarily in searching their platforms for CSAM. That voluntariness is crucial to maintaining the legal viability of the millions of reports platforms make to the CyberTipline each year. This imperative shapes the interactions between platforms and U.S.-based legislatures, law enforcement, and NCMEC. Government authorities must avoid crossing the line into telling or impermissibly pressuring platforms to search for CSAM or what to search for and report. Similarly, platforms have an incentive to maintain their CSAM searches’ independence from government influence and to justify those searches on rationales “separate from assisting law enforcement.” When platforms (voluntarily) report suspected CSAM to the CyberTipline, Ackerman and Wilson interpret the private search doctrine to let law enforcement and NCMEC warrantlessly open and view only user files that had first been opened by platform personnel before submitting the tip or were publicly available.

This is all pretty important in making sure that the whole system stays on the right side of the 4th Amendment. As much as some people really want to force social media companies to proactively search for and report CSAM, mandating that creates real problems under the 4th Amendment.

As for the NCMEC and law enforcement side of things, the requirement to get a warrant for unopened communications remains important. But, as noted below, sometimes law enforcement doesn’t want to get a warrant. If you’ve been reading Techdirt for any length of time, this shouldn’t surprise you. We see all sorts of areas where law enforcement refuses to take that basic step of getting a warrant.

Understanding that framing is important to understanding the rest of this, including exploring where each of the stakeholders fall down. Let’s start with the biggest problem of all: where law enforcement fails.

Law Enforcement

In the first article on this report, we noted that the incentive structure has made it such that law enforcement often tries to evade this entire process. It doesn’t want to go through the process of getting warrants some of the time. It doesn’t want to associate with the ICAC task forces because they feel like it puts too much of a burden on them, and if they don’t take care of it, someone else on the task force will. And sometimes they don’t want to deal with CyberTipline reports because they’re afraid that if they’re too slow after getting a report, they might face liability.

Most of these issues seem to boil down to law enforcement not wanting to do its job.

But the report details some of the other challenges for law enforcement. And it starts with just how many reports are coming in:

Almost across the board law enforcement expressed stress over their inability to fully investigate all CyberTipline reports due to constraints in time and resources. An ICAC Task Force officer said “You have a stack [of CyberTipline reports] on your desk and you have to be ok with not getting to it all today. There is a kid in there, it’s really quite horrible.” A single Task Force detective focused on internet crimes against children may be personally responsible for 2,000 CyberTipline reports each year. That detective is responsible for working through all of their tips and either sending them out to affiliates or investigating them personally. This process involves reading the tip, assessing whether a crime was committed, and determining jurisdiction; just determining jurisdiction might necessitate multiple subpoenas. Some reports are sent out to affiliates and some are fully investigated by detectives at the Task Force.

An officer at a Task Force with a relatively high CyberTipline report arrest rate said “we are stretched incredibly thin like everyone.” An officer in a local police department said they were personally responsible for 240 reports a year, and that all of them were actionable. When asked if they felt overwhelmed by this volume, they said yes. While some tips involve self-generated content requiring only outreach to the child, many necessitate numerous search warrants. Another officer, operating in a city with a population of 100,000, reported receiving 18–50 CyberTipline reports annually, actively investigating around 12 at any given time. “You have to manage that between other egregious crimes like homicides,” they said. This report will not extensively cover the issue of volume and law enforcement capacity, as this challenge is already well-documented and detailed in the 2021 U.S. Department of Homeland Security commissioned report, in Cullen et al., and in a 2020 Government Accountability Office report. “People think this is a one-in-a-million thing,” a Task Force officer said. “What they don’t know is that this is a crime of secrecy, and could be happening at four of your neighbors’ houses.”

And of course, making social media platforms more liable doesn’t help to fix much here. At best, it makes it worse because it encourages even more reporting by the platforms, which only further overloads law enforcement.

Given all those reports the cops are receiving, you’d hope they had a good system for managing them. But your hope would not be fulfilled:

Law enforcement pick a certain percentage of reports to investigate. The selection is not done in a very scientific way—one respondent described it as “They hold their finger up in the air to feel the wind.” An ICAC Task Force officer said triage is more of an art than a science. They said that with experience you get a feel for whether a case will have legs, but that you can never be certain, and yet you still have to prioritize something.

That seems less than ideal.

Another problem, though, is that a lot of the reports are not prosecutable at all. Because of the incentives discussed in the first post, apparently certain known memes get reported to the CyberTipline quite frequently, and police feel they just clog up the system. But because the platforms fear significant liability if they don’t report those memes, they keep reporting them.

U.S. law requires that platforms report this content if they find it, and that NCMEC send every report to law enforcement. When NCMEC knows a report contains viral content or memes they will label it “informational,” a category that U.S. law enforcement typically interpret as meaning the report can be ignored, but not all such reports get labeled “informational.” Additionally there are an abundance of “age difficult” reports that are unlikely to lead to prosecution. Law enforcement may have policies requiring some level of investigation or at least processing into all noninformational reports. Consequently, officers often feel inundated with reports unlikely to result in prosecution. In this scenario, neither the platforms, NCMEC, nor law enforcement agencies feel comfortable explicitly ignoring certain types of reports. An employee from a platform that is relatively new to NCMEC reporting expressed the belief that “It’s best to over-report, that’s what we think.”

At best, this seems to annoy law enforcement, but it’s a function of how the system works:

An officer expressed frustration over platforms submitting CyberTipline reports that, in their view, obviously involve adults: “Tech companies have the ability to […] determine with a high level of certainty if it’s an adult, and they need to stop sending [tips of adults].” This respondent also expressed a desire that NCMEC do more filtering in this regard. While NCMEC could probably do this to some extent, they are again limited by the fact that they cannot view an image if the platform did not check the “reviewed” box (Figure 5.3 on page 26). NCMEC’s inability to use cloud services also makes it difficult for them to use machine learning age classifiers. When we asked NCMEC about the hurdles they face, they raised the “firehose of I’ll just report everything” problem.

Again, this all seems pretty messy. Of course you want companies to report anything they find that might be CSAM. And, of course, you want NCMEC to pass them on to law enforcement. But the end result is overwhelmed law enforcement with no clear process for triage and dealing with a lot of reports that were sent in an abundance of caution but which are not at all useful to law enforcement.

And, of course, there are other challenges that policymakers probably don’t think about. For example: how do you deal with hacked accounts? How much information is it right for the company to share with law enforcement?

One law enforcement officer provided an interesting example of a type of report he found frustrating: he said he frequently gets reports from one platform where an account was hacked and then used to share CSAM. This platform provided the dates of multiple password changes in the report, which the officer interpreted as indicating the account had been hacked. Despite this, they felt obligated to investigate the original account holder. In a recent incident they described, they were correct that the account had been hacked. They expressed that if the platform explicitly stated their suspicion in the narrative section of the report, such as by saying something like “we think this account may have been hacked,” they would then feel comfortable de-prioritizing these tips. We subsequently learned from another respondent that this platform provides time stamps for password changes for all of their reports, putting the burden on law enforcement to assess whether the password changes were of normal frequency, or whether they reflected suspicious activity.

With that said, the officer raised a valid issue: whether platforms should include their interpretation of the information they are reporting. One platform employee we interviewed who had previously worked in law enforcement acknowledged that they would have found the platform’s unwillingness to explicitly state their hunch frustrating as well. However, in their current role they also would not have been comfortable sharing a hunch in a tip: “I have preached to the team that anything they report to NCMEC, including contextual information, needs to be 100% accurate and devoid of personal interpretation as much as possible, in part because it may be quoted in legal process and case reports down the line.” They said if a platform states one thing in a tip, but law enforcement discovers that is not the case, that could make it more difficult for law enforcement to prosecute, and could even ruin their case. Relatedly, a former platform employee said some platforms believe if they provide detailed information in their reports courts may find the reports inadmissible. Another platform employee said they avoid sharing such hunches for fear of it creating “some degree of liability [even if ] not legal liability” if they get it wrong

The report details how prosecutors are also loath to bring cases, because it’s tricky to find a jury that can handle a CSAM case:

It is not just police chiefs who may shy away from CSAM cases. An assistant U.S. attorney said that potential jurors will disqualify themselves from jury duty to avoid having to think about and potentially view CSAM. As a result, it can take longer than normal to find a sufficient number of jurors, deterring prosecutors from taking such cases to trial. There is a tricky balance to strike in how much content to show jurors, but viewing content may be necessary. While there are many tools to mitigate the effect of viewing CSAM for law enforcement and platform moderators, in this case the goal is to ensure that those viewing the content understand the horror. The assistant U.S. attorney said that they receive victim consent before showing the content in the context of a trial. Judges may also not want to view content, and may not need to if the content is not contested, but seeing it can be important as it may shape sentencing decisions.

There are also issues outside the US with law enforcement. As noted in the first article, NCMEC has become the de facto global reporting center, because so many companies are based in the US and report there. And the CyberTipline tries to share out to foreign law enforcement too, but that’s difficult:

For example, in the European Union, companies’ legal ability to voluntarily scan for CSAM required the passage of a special exception to the EU’s so-called “ePrivacy Directive”. Plus, against a background where companies are supposed to retain personal data no longer than reasonably necessary, EU member states’ data retention laws have repeatedly been struck down on privacy grounds by the courts for retention periods as short as four or ten weeks (as in Germany) and as long as a year (as in France). As a result, even if a CyberTipline report had an IP address that was linked to a specific individual and their physical address at the time of the report, it may not be possible to retrieve that information after some amount of time.

Law enforcement agencies abroad have varying approaches to CyberTipline reports and triage. Some law enforcement agencies will say if they get 500 CyberTipline reports a year, that will be 500 cases. Another country might receive 40,000 CyberTipline reports that led to just 150 search warrants. In some countries the rate of tips leading to arrests is lower than in the U.S. Some countries may find that many of their CyberTipline reports are not violations of domestic law. The age of consent may be lower than in the U.S., for example. In 2021 Belgium received about 15,000 CyberTipline reports, but only 40% contained content that violated Belgium law

And in lower income countries, the problems can be even worse, including confusion about how the entire CyberTipline process works.

We interviewed two individuals in Mexico who outlined a litany of obstacles to investigating CyberTipline reports even where a child is known to be in imminent danger. Mexican federal law enforcement have a small team of people who work to process the reports (in 2023 Mexico received 717,468 tips), and there is little rotation. There are people on this team who have been viewing CyberTipline reports day in and day out for a decade. One respondent suggested that recent laws in Mexico have resulted in most CyberTipline reports needing to be investigated at the state level, but many states lack the know-how to investigate these tips. Mexico also has rules that require only specific professionals to assess the age of individuals in media, and it can take months to receive assessments from these individuals, which is required even if the image is of a toddler

The investigator also noted that judges often will not admit CyberTipline reports as evidence because they were provided proactively and not via a court order as part of an investigation. They may not understand that legally U.S. platforms must report content to NCMEC and that the tips are not an extrajudicial invasion of privacy. As a result, officers may need a court order to obtain information that they already have in the CyberTipline report, confusing platforms who receive requests for data they put in a report a year ago. This issue is not unique to Mexico; NCMEC staff told us that they see “jaws drop” in other countries during trainings when they inform participants about U.S. federal law that requires platforms to report CSAM.

NCMEC Itself

The report also details some of the limitations of NCMEC and the CyberTipline itself, some of which are legally required (and where it seems like the law should be updated).

There appears to be a big issue with repeat reports, where NCMEC needs to “deconflict” them, but has limited technology to do so:

Improvements to the entity matching process would improve CyberTipline report prioritization processes and detection, but implementation is not always as straightforward as it might appear. The current automated entity matching process is based solely on exact matches. Introducing fuzzy matching, which would catch similarity between, for example, bobsmithlovescats1 and bobsmithlovescats2, could be useful in identifying situations where a user, after suspension, creates a new account with an only slightly altered username. With a more expansive entity matching system, a law enforcement officer proposed that tips could gain higher priority if certain identifiers are found across multiple tips. This process, however, may also require an analyst in the loop to assess whether a fuzzy match is meaningful.

It is common to hear of instances where detectives received dozens of separate tips for the same offender. For instance, the Belgium Federal Police noted receiving over 500 distinct CyberTipline reports about a single offender within a span of five months. This situation can arise when a platform automatically submits a tip each time a user attempts to upload CSAM; if the same individual tries to upload the same CSAM 60 times, it could result in 60 separate tips. Complications also arise if the offender uses a Virtual Private Network (VPN); the tips may be distributed across different law enforcement agencies. One respondent told us that a major challenge is ensuring that all tips concerning the same offender are directed to the same agency and that the detective handling them is aware that these numerous tips pertain to a single individual.
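To make the “fuzzy matching” idea from the excerpt above concrete, here is a minimal sketch of how near-duplicate identifiers across tips might be clustered, using only Python’s standard library. The usernames and the similarity threshold are hypothetical illustrations, not anything NCMEC actually runs:

```python
# Minimal sketch of fuzzy identifier matching for grouping related tips.
# The usernames and the 0.9 threshold are hypothetical examples.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a 0.0-1.0 similarity ratio between two identifiers."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def group_identifiers(identifiers: list[str], threshold: float = 0.9) -> list[list[str]]:
    """Greedily cluster identifiers so near-duplicates land in the same bucket."""
    groups: list[list[str]] = []
    for ident in identifiers:
        for group in groups:
            if any(similarity(ident, member) >= threshold for member in group):
                group.append(ident)
                break
        else:
            groups.append([ident])
    return groups

if __name__ == "__main__":
    tips = ["bobsmithlovescats1", "bobsmithlovescats2", "unrelated_user"]
    print(group_identifiers(tips))
    # [['bobsmithlovescats1', 'bobsmithlovescats2'], ['unrelated_user']]
```

As the report cautions, though, an analyst would likely still need to confirm whether any such fuzzy match is meaningful before it drives prioritization.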

As the report notes, there are a variety of challenges, both economic and legal, in enabling NCMEC to upgrade its technology:

First, NCMEC operates with a limited budget and as a nonprofit they may not be able to compete with industry salaries for qualified technical staff. The status quo may be “understandable given resource constraints, but the pace at which industry moves is a mismatch with NCMEC’s pace.” Additionally, NCMEC must also balance prioritizing improving the CyberTipline’s technical infrastructure with the need to maintain the existing infrastructure, review tips, or execute other non-Tipline projects at the organization. Finally, NCMEC is feeding information to law enforcement, which work within bureaucracies that are also slow to update their technology. A change in how NCMEC reports CyberTipline information may also require law enforcement agencies to change or adjust their systems for receiving that information.

NCMEC also faces another technical constraint not shared with most technology companies: because the CyberTipline processes harmful and illegal content, it cannot be housed on commercially available cloud services. While NCMEC has limited legal liability for hosting CSAM, other entities currently do not, which constrains NCMEC’s ability to work with outside vendors. Inability to transfer data to cloud services makes some of NCMEC’s work more resource intensive and therefore stymies some technical developments. Cloud services provide access to proprietary machine learning models, hardware-accelerated machine learning training and inference, on-demand resource availability and easier to use services. For example, with CyberTipline files in the cloud, NCMEC could more easily conduct facial recognition at scale and match photos from the missing children side of their work with CyberTipline files. Access to cloud services could potentially allow for scaled detection of AI-generated images and more generally make it easier for NCMEC to take advantage of existing machine learning classifiers. Moving millions of CSAM files to cloud services is not without risks, and reasonable people disagree about whether the benefits outweigh the risks. For example, using a cloud facial recognition service would mean that a third party service likely has access to the image. There are a number of pending bills in Congress that, if passed, would enable NCMEC to use cloud services for the CyberTipline while providing the necessary legal protections to the cloud hosting providers.

Platforms

And, yes, there are some concerns about the platforms. But while public discussion seems to focus almost exclusively on where people think that platforms have failed to take this issue seriously, the report suggests the failures of platforms are much more limited.

The report notes that it’s a bit tricky to get platforms up and running with CyberTipline reporting, and that even though NCMEC will do some onboarding, that help is kept very limited to avoid some of the 4th Amendment concerns discussed above.

And, again, some of the problem with onboarding is due to outdated tech on NCMEC’s side. I mean… XML? Really?

Once NCMEC provides a platform with an API key and the corresponding manual, integrating their workflow with the reporting API can still present challenges. The API is XML-based, which requires considerably more code to integrate with than simpler JSON-based APIs and may be unfamiliar to younger developers. NCMEC is aware that this is an issue. “Surprisingly large companies are using the manual form,” one respondent said. One respondent at a small platform had a more moderate view; he thought the API was fine and the documentation “good.” But another respondent called the API “crap.”
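For a sense of why an XML-based API tends to mean more integration code than a JSON-based one, here is a hypothetical, stripped-down comparison in Python. The field names are invented for illustration and are not NCMEC’s actual CyberTipline schema:

```python
# Hypothetical comparison of serializing the same report as JSON vs. XML.
# Field names are invented for illustration, not NCMEC's actual schema.
import json
import xml.etree.ElementTree as ET

report = {
    "reporter": "example-platform",
    "incident_type": "suspected_csam",
    "file_viewed_by_company": True,
    "potential_meme": False,
}

# JSON: one call serializes the whole structure.
json_payload = json.dumps(report)

# XML: each field becomes an element that has to be built and populated by hand,
# and real schemas typically add namespaces, nesting, and validation on top.
root = ET.Element("report")
for key, value in report.items():
    child = ET.SubElement(root, key)
    child.text = str(value).lower() if isinstance(value, bool) else str(value)
xml_payload = ET.tostring(root, encoding="unicode")

print(json_payload)
print(xml_payload)
```

Multiply that hand-built element construction across namespaces, required attributes, and schema validation, and it becomes easier to see why even large companies reportedly fall back to the manual form.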

There are also challenges under the law about what needs to be reported. As noted above and in the first article, that can often lead to over-reporting. But it can also make things difficult for companies trying to make determinations.

Platforms will additionally face policy decisions. While prohibiting illegal content is a standard approach, platforms often lack specific guidelines for moderators on how to interpret nuanced legal terms such as “lascivious exhibition.” This term is crucial for differentiating between, for example, an innocent photo of a baby in a bathtub, and a similar photo that appears designed to show the baby in a way that would be sexually arousing to a certain type of viewer. Trust and safety employees will need to develop these policies and train moderators.

And, of course, as has been widely discussed elsewhere, it’s not great that platforms have to hire human beings and expose them to this kind of content.

However, the biggest issue on reporting seems to not be a company’s unwillingness to do so, but how much information they pass along. And again, here, the issue is not so much unwillingness of the companies to be cooperative, but the incentives.

Memes and viral content pose a huge challenge for CyberTipline stakeholders. In the best case scenario, a platform checks the “Potential Meme” box and NCMEC automatically sends the report to an ICAC Task Force as “informational,” which appears to mean that no one at the Task Force needs to look at the report.

In practice, a platform may not check the “Potential Meme” box (possibly due to fixable process issues or minor changes in the image that change the hash value) and also not check the “File Viewed by Company” box. In this case NCMEC is unable to view the file, due to the Ackerman and Wilson decisions as discussed in Chapter 3. A Task Force could view the file without a search warrant and realize it is a meme, but even in that scenario it takes several minutes to close out the report. At many Task Forces there are multiple fields that have to be entered to close the report, and if Task Forces are receiving hundreds of reports of memes this becomes hugely time consuming. Sometimes, however, law enforcement may not realize the report is a meme until they have invested valuable time into getting a search warrant to view the report.

NCMEC recently introduced the ability for platforms to “batch report” memes after receiving confirmation from NCMEC that that meme is not actionable. This lets NCMEC label the whole batch as informational, which reduces the burden on law enforcement

We heard about an example where a platform classified a meme as CSAM, but NCMEC (and at least one law enforcement officer we spoke to about this meme) did not classify it as CSAM. NCMEC told the platform they did not classify the meme as CSAM, but according to NCMEC the platform said because they do consider it CSAM they were going to continue to report it. Because the platform is not consistently checking the “Potential Meme” box, law enforcement are still receiving it at scale and spending substantial time closing out these reports.

There is a related challenge when a platform neglects to mark content as “viral”. Most viral images are shared in outrage, not with an intent to harm. However, these viral images can be very graphic. The omission of the “viral” label can lead law enforcement to mistakenly prioritize these cases, unaware that the surge in reports stems from multiple individuals sharing the same image in dismay.

We spoke to one platform employee about the general challenge of a platform deeming a meme CSAM while NCMEC or law enforcement agencies disagree. They noted that everyone is doing their best to apply the Dost test. Additionally, there is no mechanism to get an assurance that a file is not CSAM: “No one blesses you and says you’ve done what you need to do. It’s a very unsettling place to be.” They added that different juries might come to different conclusions about what counts as CSAM, and if a platform fails to report a file that is later deemed CSAM the platform could be fined $300,000 and face significant public backlash: “The incentive is to make smart, conservative decisions.”
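One technical detail in the excerpts above is worth unpacking: the note that “minor changes in the image” can “change the hash value.” An exact cryptographic hash changes completely with any alteration, which is why detection systems rely on perceptual hashes (PhotoDNA being the best-known) that are compared by distance rather than strict equality. A conceptual sketch, with made-up bit strings standing in for real perceptual hashes:

```python
# Conceptual sketch: exact hashes miss near-duplicates, while distance-based
# comparisons of perceptual hashes can still catch them. The "perceptual
# hashes" below are made-up bit strings, not output of any real tool.
import hashlib

original = b"...image bytes..."
recompressed = b"...image bytes, slightly altered..."

# Exact (cryptographic) hashing: any change yields a completely different digest.
print(hashlib.sha256(original).hexdigest() == hashlib.sha256(recompressed).hexdigest())  # False

def hamming(a: str, b: str) -> int:
    """Count differing bits between two equal-length bit strings."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

# Hypothetical perceptual hashes of the two images: nearly identical.
phash_original = "1011001110001111"
phash_recompressed = "1011001110001101"

# A small Hamming distance (below some chosen threshold) is treated as a match.
print(hamming(phash_original, phash_recompressed) <= 3)  # True
```

That gap helps explain why a previously reviewed meme can keep generating fresh, un-flagged reports: a re-encoded copy simply doesn’t hit the exact hash a platform’s flagging process keys on.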

This is all pretty fascinating, and it suggests that while there may be ways to improve things, it’s difficult to structure the system right and make the incentives align properly.

And, again, the same incentives pressure the platforms to just overreport, no matter what:

Once a platform integrates with NCMEC’s CyberTipline reporting API, they are incentivized to overreport. Consider an explicit image of a 22-year-old who looks like they could be 17: if a platform identified the content internally but did not file a report and it turned out to be a 17-year-old, they may have broken the law. In such cases, they will err on the side of caution and report the image. Platform incentives are to report any content that they think is violative of the law, even if it has a low probability of prosecution. This conservative approach will also lead to reports from what Meta describes as “non-malicious users”—for instance, individuals sharing CSAM in outrage. Although such reports could theoretically yield new findings, such as uncovering previously unknown content, it is more likely that they overload the system with extraneous reports.

All in all, the real lesson to be taken from this report is that this shit is super complicated, like all of trust & safety, and tradeoffs abound. But here it’s way more fraught than in most cases, in terms of the seriousness of the issue, the potential for real harm, and the potentially destructive criminal penalties involved.

The report has some recommendations, though they mostly seem to deal with things at the margins: increase funding for NCMEC, allow it to update its technology (and hire the staff to do so), and have some more information to help platforms get set up.

Of course, what’s notable is that this does not include things like “make platforms liable for any mistake they make.” This is because, as the report shows, most platforms already take this stuff pretty seriously, and the liability is already very clear, to the point that they often over-report to avoid it, which actually makes the results worse, because it overwhelms both NCMEC and law enforcement.

All in all, this report is a hugely important contribution to this discussion, and provides a ton of real-world information about the CyberTipline that was previously known basically only to people working on it, leaving many observers, media, and policymakers in the dark.

It would be nice if Congress read this report and understood the issues. However, when it comes to things like CSAM, expecting anyone to bother with reading a big report and understanding the tradeoffs and nuances is probably asking too much.

Filed Under: csam, cybertipline, incentives, overreporting
Companies: ncmec

Our Online Child Abuse Reporting System Is Overwhelmed, Because The Incentives Are Screwed Up & No One Seems To Be Able To Fix Them

from the mismatched-incentives-are-the-root-of-all-problems dept

The system meant to stop online child exploitation is failing — and misaligned incentives are to blame. Unfortunately, today’s political solutions, like KOSA and STOP CSAM, don’t even begin to grapple with any of this. Instead, they prefer to put in place solutions that could make the incentives even worse.

The Stanford Internet Observatory has spent the last few months doing a very deep dive on how the CyberTipline works (and where it struggles). It has released a big and important report detailing its findings. In writing up this post about it, I kept adding more and more, to the point that I finally decided it made sense to split it up into two separate posts to keep things manageable.

This first post covers the higher-level issues: what the system is, why it works the way it does, how the incentive structure of the system is completely messed up (even if it was set up with good intentions), and how that’s contributed to the problem. A follow-up post will cover the more specific challenges facing NCMEC itself, law enforcement, and the internet platforms themselves (who often take the blame for CSAM, when that seems extremely misguided).

There is a lot of misinformation out there about the best way to fight and stop the creation and spread of child sexual abuse material (CSAM). It’s unfortunate because it’s a very real and very serious problem. Yet the discussion about it is often so disconnected from reality as to be not just unhelpful, but potentially harmful.

In the US, the system that was set up is the CyberTipline, which is run by NCMEC, the National Center for Missing & Exploited Children. It’s a private non-profit, but it has a close connection with the US government, which helped create it. At times, there has been some confusion about whether or not NCMEC is a government agent. The entire setup was designed to keep it non-governmental, to avoid any 4th Amendment issues with the information it collects, but courts haven’t always seen it that way, which makes things tricky (even as the 4th Amendment protections involved are important).

And while the system was designed for the US, it has become a de facto global system, since so many of the companies are US-based, and NCMEC will, when it can, send relevant details to foreign law enforcement as well (though, as the report details, that doesn’t always work well).

The main role CyberTipline has taken on is coordination. It takes in reports of CSAM (mostly, but not entirely, from internet platforms) and then, when relevant, hands off the necessary details to the (hopefully) correct law enforcement agency to handle things.

Companies that host user-generated content have certain legal requirements to report CSAM to the CyberTipline. As we discussed in a recent podcast, this role as a “mandatory reporter” is important in providing useful information to allow law enforcement to step in and actually stop abusive behavior. Because of the “government agent” issue, it would be unconstitutional to require social media platforms to proactively search for and identify CSAM (though many do use tools to do this). However, if they do find some, they must report it.

Unfortunately, the mandatory reporting has also allowed the media and politicians to use the number of reports sent in by social media companies in a misleading manner, suggesting that the mere fact that these companies find and report to NCMEC means that they’re not doing enough to stop CSAM on their platforms.

This is problematic because it creates a dangerous incentive, suggesting that internet services should actually not report CSAM they found, as politicians and the media will falsely portray a lot of reports as being a sign of a failure by the platforms to take this seriously. The reality is that the failure to take things seriously comes from the small number of platforms (Hi Telegram!) who don’t report CSAM at all.

Some of us on the outside have thought that the real issue was that NCMEC and law enforcement, on the receiving end, had been unable to do enough productive work with those reports. It seemed convenient for the media and politicians to just blame social media companies for doing what they’re supposed to do (reporting CSAM), ignoring that what happened on the back end of the system might be the real problem. That’s why things like Senator Ron Wyden’s Invest in Child Safety Act seemed like a better approach than things like KOSA or the STOP CSAM Act.

That’s because the approach of KOSA/STOP CSAM and some other bills is basically to add liability to social media companies. (These companies already do a ton to prevent CSAM from appearing on the platform and alert law enforcement via the CyberTipline when they do find stuff.) But that’s useless if those receiving the reports aren’t able to do much with them.

What becomes clear from this report is that while there are absolutely failures on the law enforcement side, some of that is effectively baked into the incentive structure of the system.

In short, the report shows that the CyberTipline is very helpful in engaging law enforcement to stop some child sexual abuse, but it’s not as helpful as it might otherwise be:

Estimates of how many CyberTipline reports lead to arrests in the U.S. range from 5% to 7.6%

This number may sound low, but I’ve been told it’s not as bad as it sounds. First of all, when a large number of the reports are for content that is overseas and not in the US, it’s more difficult for law enforcement here to do much about it (though, again, the report details some suggestions on how to improve this). Second, some of the content may be very old, where the victim was identified years (or even decades) ago, and where there’s less that law enforcement can do today. Third, there is a question of prioritization, with it being a higher priority to target those directly abusing children. But, still, as the report notes, almost everyone thinks that the arrest number could go higher if there were more resources in place:

Empirically, it is unknown what percent of reports, if fully investigated, would lead to the discovery of a person conducting hands-on abuse of a child. On the one hand, as an employee of a U.S. federal department said, “Not all tips need to lead to prosecution […] it’s like a 911 system.”10 On the other hand, there is a sense from our respondents—who hold a wide array of beliefs about law enforcement—that this number should be higher. There is a perception that more than 5% of reports, if fully investigated, would lead to the discovery of hands-on abuse.

The report definitely suggests that if NCMEC had more resources dedicated to the CyberTipline, it could be more effective:

NCMEC has faced challenges in rapidly implementing technological improvements that would aid law enforcement in triage. NCMEC faces resource constraints that impact salaries, leading to difficulties in retaining personnel who are often poached by industry trust and safety teams.

There appear to be opportunities to enrich CyberTipline reports with external data that could help law enforcement more accurately triage tips, but NCMEC lacks sufficient technical staff to implement these infrastructure improvements in a timely manner. Data privacy concerns also affect the speed of this work.

But, before we get into the specific areas where things can be improved in the follow-up post, I thought it was important to highlight how the incentives of this system contribute to the problem, where there isn’t necessarily an easy solution.

While companies (Meta, mainly, since it accounts for, by a very wide margin, the largest number of reports to the CyberTipline) keep getting blamed for failing to stop CSAM because of their large number of reports, most companies have very strong incentives to report anything they find. This is because the cost of not reporting something they should have reported is massive (criminal penalties), whereas the cost of over-reporting is nothing to the companies. That means there’s an issue with overreporting.
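A crude back-of-the-envelope calculation shows why the math always points toward filing a report. Using the $300,000 figure quoted earlier as the cost of a missed report, and assuming (purely for illustration) that filing one more report costs the platform essentially nothing, even a small chance that a borderline file is really CSAM makes skipping the report the irrational choice:

```python
# Toy expected-cost comparison from a platform's point of view.
# The $300,000 fine comes from the report quoted earlier; the probabilities and the
# assumption that filing a report costs the platform ~nothing are illustrative only.
FINE_FOR_MISSED_REPORT = 300_000  # dollars, if an unreported file is later deemed CSAM
COST_TO_FILE = 0                  # assumed marginal cost to the platform of one more report

def expected_cost_of_skipping(p_actually_csam: float) -> float:
    """Expected penalty if the platform decides not to report a borderline file."""
    return p_actually_csam * FINE_FOR_MISSED_REPORT

for p in (0.01, 0.05, 0.25):
    print(f"p={p:.2f}: skipping costs ~${expected_cost_of_skipping(p):,.0f} in expectation vs ${COST_TO_FILE} to file")
# Even at a 1% chance, skipping "costs" $3,000 in expectation versus ~$0 to file,
# so the rational platform reports everything borderline -- hence the overreporting.
```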

Of course, there is a real cost here. CyberTipline employees get overwhelmed, and that can mean that reports that should get prioritized and passed on to law enforcement don’t. So you can argue that while the cost of over-reporting is “nothing” to the companies, the cost to victims and society at large can be quite large.

That’s an important mismatch.

But the broken incentives go further. When NCMEC hands off reports to law enforcement, they often go through a local ICAC (Internet Crimes Against Children) Task Force, which helps triage them and route them to the right state or local law enforcement agency. Law enforcement agencies that are “affiliated” with ICACs receive special training on how to handle reports from the CyberTipline. But, apparently, at least some of them feel that the reports are just too much work or too burdensome to investigate. That means some law enforcement agencies are choosing not to affiliate with their local ICACs to avoid the added work. Even worse, some agencies have “unaffiliated” themselves from the local ICAC because they just don’t want to deal with it.

In some cases, there are even reports of law enforcement unaffiliating with an ICAC out of a fear of facing liability for not investigating an abused child quickly enough.

A former Task Force officer described the barriers to training more local Task Force affiliates. In some cases local law enforcement perceive that becoming a Task Force affiliate is expensive, but in fact the training is free. In other cases local law enforcement are hesitant to become a Task Force affiliate because they will be sent CyberTipline reports to investigate, and they may already feel like they have enough on their plate. Still other Task Force affiliates may choose to unaffiliate, perceiving that the CyberTipline reports they were previously investigating will still get investigated at the Task Force, which further burdens the Task Force. Unaffiliating may also reduce fear of liability for failing to promptly investigate a report that would have led to the discovery of a child actively being abused, but the alternative is that the report may never be investigated at all.

[….]

This liability fear stems from a case where six months lapsed between the regional Task Force receiving NCMEC’s report and the city’s police department arresting a suspect (the abused children’s foster parent). In the interim, neither of the law enforcement agencies notified child protective services about the abuse as required by state law. The resulting lawsuit against the two police departments and the state was settled for $10.5 million. Rather than face expensive liability for failing to prioritize CyberTipline reports ahead of all other open cases, even homicide or missing children, the agency might instead opt to unaffiliate from the ICAC Task Force.

This is… infuriating. Cops choosing not to affiliate (i.e., not to get the training that would help) or removing themselves from an ICAC Task Force because they’re afraid that, if they don’t save kids from abuse quickly enough, they might get sued is ridiculous. It’s yet another example of cops running away, rather than doing the job they’re supposed to be doing, but which they claim they have no obligation to do.

That’s just one problem of many in the report, which we’ll get into in the second post. But, on the whole, it seems pretty clear that with the incentives this far out of whack, something like KOSA or STOP CSAM isn’t going to be of much help. Actually tackling the underlying issues (the funding, the technology, and, most of all, the incentive structures) is necessary.

Filed Under: csam, cybertipline, icac, incentives, kosa, law enforcement, liability, stop csam
Companies: ncmec

Head Of Federal Prosecutors’ Association Latest To Ask For Broken Encryption

from the short-term-gains,-long-term-pain dept

After hearing consecutive FBI directors (James Comey, Chris Wray) drone on and on about how device and communication encryption are nudging us ever closer to the criminal apocalypse, it’s kind of refreshing to hear from someone else who is equally misguided.

An op-ed written by Steven Wasserman recently appeared at The Hill. Wasserman opens his piece by suggesting a recent Senate Committee hearing about the prevalence of CSAM (child sexual abuse material) contained a disturbing lack of anti-encryption agitprop.

That’s why Wasserman is here, apparently. The head of the National Association of Assistant US Attorneys starts with a slight hat tip towards encryption’s security advantages before hinting that no one should be allowed to implement it.

Social media companies are increasingly adopting end-to-end encryption to provide better privacy protection to users. Indeed, law enforcement has praised end-to-end encryption to prevent identity fraud and theft. But too often the technology translates to a complete lock-out of law enforcement.

A company can effectively block law enforcement from obtaining evidence, even by court-ordered wiretap or search warrant. This obscures law enforcement’s ability to prevent and prosecute crimes — particularly those crimes that are frequently committed on social media and messaging apps, such as child sexual exploitation, terrorism-related offenses and drug trafficking.

Wasserman then quotes the one tech company exec who spoke out against encryption during the Senate CSAM hearing: Discord CEO Jason Citron, who stated his platform would not be deploying end-to-end encryption because so many of its users are minors.

“We don’t believe we can fulfill our safety obligations if the text messages of teens are fully encrypted because encryption would block our ability to investigate a serious situation, and when appropriate report to law enforcement.”

Like lots of things, encryption is a personal choice. Some messaging services offer it. Some don’t. Users are free to utilize the services they want to. Plenty of other information is collected and retained by tech companies, even those deploying end-to-end encryption, so law enforcement still has plenty of options when it comes to hunting down criminals.

Of course, the head of the prosecutors’ association claims there’s really only one option: approaching tech companies directly to force them to hand over communications and data. All other options are ignored.

Previously, Meta had been one of the largest reporters of CSAM material online. But like Google and Apple, who previously routinely provided law enforcement access to data on mobile phones when under a court-ordered search warrant, the move toward encryption has locked out law enforcement.

Meta’s response? According to its safety strategy, “[L]aw enforcement may still be able to obtain this content directly from users or their devices.” Essentially, Meta expects law enforcement to request data on criminal activity from the criminals themselves.

Um. So? I mean, that’s how law enforcement has traditionally worked. While it may be able to obtain some records from third parties, the most incriminating evidence is often found in suspects’ homes, cars, or other personal effects. Just because it’s easier to gather communications in bulk from a service provider doesn’t mean all other options are too burdensome to consider.

This argument makes about as much sense as claiming county offices should allow cops to search private residences because the office holding the property records was served with a warrant. That cops are forced to seek warrants targeting properties directly is just more evidence the county office would rather protect criminals than ask cops to do their jobs.

Device encryption can be broken. End-to-end encrypted communications can be shared with law enforcement by anyone included in the conversation. Images and other messaging detritus are often backed up to cloud servers whose contents aren’t encrypted. Informants and undercover investigators are often able to infiltrate conversations otherwise invisible to the outside world.

Wasserman’s summary paragraph is every bit as disconnected from this reality as the rest of his op-ed.

Congress must stand up to big tech’s efforts to lock out law enforcement. Companies cannot take profits without taking accountability, and these tech companies must be held responsible for the technology they create and the expanded opportunities for crime activity that has blossomed as a result.

LOL. The accountability of cops and prosecutors is often inversely proportional to the amount of power they have. The same goes for Congress. To pretend it’s only the private sector that’s capable of outpacing accountability is ridiculous.

Fortunately, Ryan Polk, the senior policy adviser at the Internet Society, was allowed to publish a response to Wasserman’s handwringing.

In an op-ed on Feb. 22, Steven Wasserman, the national president of the National Association of Assistant US Attorneys (NAAUSA), called on Congress to force tech companies to give foreign adversaries and criminals access to the encrypted data of American citizens.

While of course Wasserman did not phrase it that way, the result would be exactly that. There is no way to “provide law enforcement access to encrypted data” without creating security vulnerabilities that will leave everyone less safe.

Which is exactly the way these arguments should be phrased by those seeking to undermine encryption. Even the UK government — which has long desired broken encryption — was told by its own advisory board that creating encryption backdoors was far more likely to harm children than protect them.

But those arguing against encryption are never intellectually (or factually!) honest. Instead, they talk about CSAM and terrorism while ignoring the persistent threat weakened encryption would create — something that would affect the millions of Americans who have never once broken a law (at least not deliberately…).

And then there’s this bit of irony: the only people likely to be affected by encryption bans or encryption weakening are the people law enforcement isn’t actively trying to track down.

Undermining end-to-end encryption in the United States will not make it suddenly go away. Criminals will simply move to options from jurisdictions outside the United States, or even develop their own end-to-end encrypted systems. Non-criminals are unlikely to do so, creating a situation where criminals continue to enjoy the benefits of end-to-end encryption while their victims cannot.

This is actually how the terrorists will win. Not because Meta is slapping E2EE on its messaging platform, but because everyone in areas where encryption is undermined won’t have the same communication protections the terrorists do.

Polk closes out his response by calling on Congress to ignore alarmists like Wasserman and continue to protect encryption. Fortunately, despite there being plenty of law-and-order types roaming Capitol Hill at the moment, very few are actually willing to seriously consider legislation that would create backdoors or otherwise undermine encryption. But all it takes is one tragedy worth exploiting to pave over this impasse. And when that happens, hopefully we’ll still have enough people in office capable of seeing through the weak arguments made by opponents of measures that protect everyone, rather than just those in power.

Filed Under: csam, encryption, privacy, prosecutors, security, steven wasserman

Once Again, Ron Wyden Had To Stop Bad “Protect The Children” Internet Bills From Moving Forward

from the saving-the-internet dept

Senator Ron Wyden is a one-man defense against horrible bills moving forward in the Senate. Last month, he stopped Josh Hawley from moving a very problematic STOP CSAM bill forward, and now he’s had to do it again.

A (bipartisan) group of senators traipsed to the Senate floor Wednesday evening. They tried to skip the line and quickly move some bad bills forward by asking for unanimous consent. Unless someone’s there to object, it effectively moves the bill forward, ending committee debate about it. Traditionally, this process is used for moving non-controversial bills, but lately it’s been used to grandstand about stupid bills.

Senator Lindsey Graham announced his intention to pull this kind of stunt on bills that he pretends are about “protecting the children” but which do no such thing in reality. Instead of doing it alone, he rounded up a bunch of senators and they all pulled out the usual moral panic lines about two terrible bills: EARN IT and STOP CSAM. Both bills are designed to sound like good ideas about protecting children, but the devil is very much in the details, as both undermine end-to-end encryption while assuming that if you just put liability on websites, they’ll magically make child predators disappear.

And while both bills pretend not to attack encryption — and include some language about how they’re not intended to do so — both of them leave open the possibility that the use of end-to-end encryption will be used as evidence against websites for bad things done on those websites.

But, of course, as is the standard for the group of grandstanding senators, they present these bills as (1) perfect and (2) necessary to “protect the children.” The problem is that the bills are actually (1) ridiculously problematic and (2) will actually help bad people online in making end-to-end encryption a liability.

The bit of political theater kicked off with Graham having Senators Grassley, Cornyn, Durbin, Klobuchar, and Hawley talk on and on about the poor kids online. Notably, none of them really talked about how their bills worked (because that would reveal how the bills don’t really do what they pretend they do). Durbin whined about Section 230, misleadingly and mistakenly blaming it for the fact that bad people exist. Hawley did the thing he loves doing: his mock “I’m a big bad Senator taking on those evil tech companies” schtick, while flat out lying about reality.

But Graham closed it out with the most misleading bit of all:

In 2024, here’s the state of play: the largest companies in America — social media outlets that make hundreds of billions of dollars a year — you can’t sue if they do damage to your family by using their product because of Section 230

This is a lie. It’s a flat out lie and Senator Graham and his staffers know this. All Section 230 says is that if there is content on these sites that violates the law, the liability goes to whoever created the content. If the features of the site itself “do damage,” then you can absolutely sue the company. But no one is actually complaining about the features. They’re complaining about content. And the liability for the content has to go to whoever created it.

The problem here is that Graham and all the other senators want to hold companies liable for the speech of users. And that is a very, very bad idea.

Now these platforms enrich our lives, but they destroy our lives.

These platforms are being used to bully children to death.

They’re being used to take sexual images and voluntarily and voluntarily obtain and sending them to the entire world. And there’s not a damn thing you can do about it. We had a lady come before the committee, a mother saying that her daughter was on a social media site that had an anti-bullying provisions. They complained three times about what was happening to her daughter. She killed herself. They went to court. They got kicked out by section 230.

I don’t know the details of this particular case, but first off, the platforms didn’t bully anyone. Other people did. Put the blame on the people actually causing the harm. Separately, and importantly, you can’t blame someone’s suicide on someone else when no one knows the real reasons. Otherwise, you actually encourage increased suicides, as it gives people an ultimate way to “get back” at someone.

Senator Wyden got up and, as he did last month, made it quite clear that we need to stop child sexual abuse and predators. He talked about his bill, which would actually help on these issues by giving law enforcement the resources it needs to go after the criminals, rather than, as the bills being pushed would do, simply blaming social media companies for not magically making bad people disappear.

We’re talking about criminal issues, and Senator Wyden is looking to handle it by empowering law enforcement to deal with the criminals. Senators Graham, Durbin, Grassley, Cornyn, Klobuchar, and Hawley are looking to sue tech companies for not magically stopping criminals. One of those approaches makes sense for dealing with criminal activity. And yet it’s the other one that a bunch of senators have lined up behind.

And, of course, beyond the dangerous approach of EARN IT, it inherently undermines encryption, which makes kids (and everyone) less safe, as Wyden also pointed out.

Now, the specific reason I oppose EARN It is it will weaken the single strongest technology that protects children and families online. Something known as strong encryption.

It’s going to make it easier to punish sites that use encryption to secure private conversations and personal devices. This bill is designed to pressure communications and technology companies to scan users messages.

I, for one, don’t find that a particularly comforting idea.

Now, the sponsors of the bill have argued — and Senator Graham’s right, we’ve been talking about this a while — that their bills don’t harm encryption. And yet the bills allow courts to punish companies that offer strong encryption.

In fact, while it includes some language about protecting encryption, it explicitly allows encryption to be used as evidence for various forms of liability. Prosecutors are going to be quick to argue that deploying encryption was evidence of a company’s negligence in preventing the distribution of CSAM, for example.

The bill is also designed to encourage scanning of content on users phones or computers before information is sent over the Internet which has the same consequences as breaking encryption. That’s why a hundred civil society groups including the American Library Association — people then I think all of us have worked for — Human Rights Campaign, the list goes… Restore the Fourth. All of them oppose this bill because of its impact on essential security.

Weakening encryption is the single biggest gift you can give to these predators and these god-awful people who want to stalk and spy on kids. Sexual predators are gonna have a far easier time stealing photographs of kids, tracking their phones, and spying on their private messages once encryption is breached. It is very ironic that a bill that’s supposed to make kids safer would have the effect of threatening the privacy and security of all law-abiding Americans.

My alternative — and I want to be clear about this because I think Senator Graham has been sincere about saying that this is a horrible problem involving kids. We have a disagreement on the remedy. That’s what is at issue.

And what I want us to do is to focus our energy on giving law enforcement officials the tools they need to find and prosecute these monstrous criminals responsible for exploiting kids and spreading vile abuse materials online.

That can help prevent kids from becoming victims in the first place. So I have introduced to do this: the Invest in Child Safety Act to direct five billion dollars to do three specific things to deal with this very urgent problem.

Graham then gets up to respond and lies through his teeth:

There’s nothing in this bill about encryption. We say that this is not an encryption bill. The bill as written explicitly prohibits courts from treating encryption as an independent basis for liability.

We’re agnostic about that.

That’s not true. As Wyden said, the bill has some hand-wavey language about not treating encryption as an independent basis for liability, but it does explicitly allow for encryption to be one of the factors that can be used to show negligence by a platform, as long as you combine it with other factors.

Section (7)(A) is the hand-wavey bit saying you can’t use encryption as “an independent basis” to determine liability, but (7)(B) effectively wipes that out by saying nothing in that section about encryption “shall be construed to prohibit a court from considering evidence of actions or circumstances described in that subparagraph.” In other words, you just have to add a bit more, and then can say “and also, look, they use encryption!”

And another author of the bill, Senator Blumenthal, has flat out said that EARN IT is deliberately written to target encryption. He falsely claims that companies would “use encryption… as a ‘get out of jail free’ card.” So, Graham is lying when he says encryption isn’t a target of the bill. One of his co-authors on the bill admits otherwise.

Graham went on:

What we’re trying to do is hold these companies accountable by making sure they engage in best business practices. The EARN IT acts simply says for you to have liability protections, you have to prove that you’ve tried to protect children. You have to earn it. You’re just not given to you. You have to have the best business practices in place that voluntary commissions that lay out what would be the best way to harden these sites against sexually exploitation. If you do those things you get liability, it’s just not given to you forever. So this is not about encryption.

As to your idea. I’d love to talk to you about it. Let’s vote on both, but the bottom line here is there’s always a reason not to do anything that holds these people liable. That’s the bottom line. They’ll never agree to any bill that allows you to get them in court ever. If you’re waiting on these companies to give this body permission for the average person to sue you. It ain’t never going to happen.

So… all of that is wrong. First of all, the very original version of the EARN IT Act did have provisions to make companies “earn” 230 protections by following best practices, but that’s been out of the bill for ages. The current version has no such thing.

The bill does set up a commission to create best practices, but (unlike the earlier versions of the bill) those best practice recommendations have no legal force or requirements. And there’s nothing in the bill that says if you follow them you get 230 protections, and if you don’t, you don’t.

Does Senator Graham even know which version of the bill he’s talking about?

Instead, the bill outright modifies Section 230 (before the Commission even researches best practices) and says that people can sue tech companies for the distribution of CSAM. This includes using the offering of encryption as evidence to support the claims that CSAM distribution was done because of “reckless” behavior by a platform.

Either Senator Graham doesn’t know what bill he’s talking about (even though it’s his own bill) or he doesn’t remember that he changed the bill to do something different from what it used to try to do.

It’s ridiculous that Senator Wyden remains the only senator who sees this issue clearly and is willing to stand up and say so. He’s the only one who seems willing to block the bad bills while at the same time offering a bill that actually targets the criminals.

Filed Under: amy klobuchar, chuck grassley, csam, dick durbin, earn it, encryption, invest in child safety, john cornyn, josh hawley, lindsey graham, ron wyden, shield act, stop csam, unanimous consent

Content Moderation At Scale Is Impossible To Do Well, Child Safety Edition

from the child-safety-isn't-just-about-flipping-a-switch dept

Last week, as you likely heard, the Senate had a big hearing on “child safety” where senators grandstanded in front of a semi-random collection of tech CEOs, with zero interest in learning about the actual challenges of child safety online, or what the companies had done that worked, or where they might need help. The companies, of course, insisted they were working hard on the problem, while the Senators just kept shouting “not enough,” without getting into any of the details.

But, of course, the reality is that this isn’t an easy problem to solve. At all. I’ve talked about Masnick’s Impossibility Theorem over the years, that content moderation is impossible to do well at scale, and that applies to child safety material as well.

Part of the problem is that much of it is a demand-side problem, not a supply-side problem. If people are demanding certain types of content, they will go to great lengths to get it, and that means doing what they can to hide from the platforms trying to stop them. We’ve talked about this in the context of eating disorder content: multiple studies found that as sites tried to crack down on that content, it didn’t work, because users demanded it. They would keep coming up with new ways to talk about the content that the sites kept trying to block. So, there’s always the demand-side part of the equation to keep in mind.

But also, there are all sorts of false positives, where content is declared to violate child safety policies when it clearly doesn’t. Indeed, the day after the hearing I saw two examples of social media sites blocking content that they claimed was child sexual abuse material, when it was clear that neither one actually was.

The first came from Alex Macgillivray, former General Counsel at Twitter and former deputy CTO for the US government. He was using Meta’s Threads app, and wanted to see what people thought of a recent article in the NY Times raising concerns about AI-generated CSAM. But when he searched for the URL of the article, which contains the string “ai-child-sex-abuse,” Meta warned him that he was violating its policies:


In response to his search on the NY Times URL, Threads popped up a message saying:

Child sexual abuse is illegal

We think that your search might be associated with child sexual abuse. Child sexual abuse or viewing sexual imagery of children can lead to imprisonment and other severe personal consequences. This abuse causes extreme harm to children and searching and viewing such material adds to that harm. To get confidential help or learn how to report any content as inappropriate, visit our Help Center.

So, first off, this does show that Meta, obviously, is trying to prevent people from finding such material (contrary to what various Senators have claimed), but it also shows that false positives are a very real issue.
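For a sense of how easily this kind of false positive happens, imagine a naive keyword filter that scans search queries for suspicious substrings. It has no way to tell a news URL from an attempt to find abusive material. The sketch below is a deliberately simplistic, hypothetical filter (Meta’s actual systems are obviously far more sophisticated); it just shows why substring matching alone over-triggers:

```python
# Hypothetical, deliberately naive keyword filter -- not a description of Meta's actual system.
BLOCKED_TERMS = ["child sex abuse", "child-sex-abuse"]

def flags_query(query: str) -> bool:
    """Return True if any blocked term appears anywhere in the query string."""
    normalized = query.lower()
    return any(term in normalized for term in BLOCKED_TERMS)

# A search for a legitimate news URL trips the same rule as a genuinely malicious query,
# because the article slug happens to contain the substring "child-sex-abuse".
print(flags_query("nytimes.com/.../ai-child-sex-abuse.html"))  # True -> false positive
print(flags_query("threads post about content moderation"))    # False
```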

The second example comes from Bluesky, which is a much smaller platform, and which has been (misleadingly…) accused of not caring about trust and safety issues in the roughly one year since it opened up as a private beta. There, journalist Helen Kennedy said she tried to post about the ridiculous situation in which the group Moms For Liberty was apparently scandalized by the classic children’s book “In the Night Kitchen” by Maurice Sendak, which includes some drawings of a naked child in a very non-sexual manner.

Apparently, Moms For Liberty has been drawing underpants on the protagonist of that book. Kennedy tried to post side-by-side images of the kid with underpants and the original drawing… and got dinged by Bluesky’s content moderators.

[Screenshot of Bluesky’s moderation notice]

Again, there, the moderation effort falsely claims that Kennedy was trying to post “underage nudity or sexual content, which is in violation of our Community Guidelines.”

And, immediately, you might spot the issue. This is posting “underage nudity,” but it is clearly not sexual in nature, nor is it sexual abuse material. This is one of those “speed run” lessons that all trust and safety teams learn eventually. Facebook dealt with the same issue when it banned the famous Terror of War photo, sometimes known as the “Napalm Girl” photo, taken during the Vietnam War.

Obviously, it’s good that companies are taking this issue seriously and trying to stop the distribution of CSAM. But one of the reasons why this is so difficult is that there are false positives like the two above. They happen all the time. And one of the issues with getting “stricter” about blocking content that your systems flag as CSAM is that you get more such false positives, which doesn’t help anyone.

A useful and productive Senate hearing might have explored the actual challenges that the companies face in trying to stop CSAM. But we don’t have a Congress that is even remotely interested in useful and productive.

Filed Under: child safety, csam, false positives, masnick's impossibility theorem
Companies: bluesky, meta