csam – Techdirt
Musk’s ‘Priority #1’ Disaster: CSAM Problem Worsens While ExTwitter Stiffs Detection Provider
from the not-such-a-priority-apparently dept
One of Elon Musk’s first “promises” upon taking over Twitter was that fighting child exploitation was “priority #1.”
He falsely implied that the former management didn’t take the issue seriously (they did) and insisted that he would make sure it was a solved problem on the platform he now owned. Of course, while he was saying this, he was also firing most of the team that worked on preventing the sharing of child sexual abuse material (CSAM) on the site. Almost every expert in the field noted that Elon was almost certainly making the problem worse, not better. Some early research supported this, showing that the company was now leaving up a ton of known CSAM (the easiest kind to find and block through photo-matching tools).
A few months later, Elon’s supposed commitment to stomping out CSAM was proven laughable when he apparently personally stepped in to reinstate the account of a mindless conspiracy theorist who had posted a horrific CSAM image.
A new NBC News investigation now reveals just how spectacularly Musk has failed at his self-proclaimed “priority #1.” Not only has the CSAM problem on ExTwitter exploded beyond previous levels, but the company has now been cut off by Thorn—one of the most important providers of CSAM detection technology—after ExTwitter simply stopped paying its bills.
At the same time, Thorn, a California-based nonprofit organization that works with tech companies to provide technology that can detect and address child sexual abuse content, told NBC News that it had terminated its contract with X.
Thorn said that X stopped paying recent invoices for its work, though it declined to provide details about its deal with the company citing legal sensitivities. X said Wednesday that it was moving toward using its own technology to address the spread of child abuse material.
Let’s pause on this corporate-speak for a moment. ExTwitter claims it’s “moving toward using its own technology” to fight CSAM. That’s a fancy way of saying they fired the experts and plan to wing it with some other—likely Grok-powered—nonsense they can cobble together.
Now, to be fair, some platforms do develop effective in-house CSAM detection tools, and while Thorn’s tools are widely used, some platforms have complained that they are limited. But these types of systems generally work best when operated by specialized third parties who can aggregate data across multiple platforms—exactly what organizations like Thorn (and Microsoft’s PhotoDNA) provide. The idea that a company currently failing to pay its bills to anti-CSAM specialists is simultaneously building superior replacement technology is, shall we say, optimistic.
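The mechanics of this kind of known-image matching are simple to sketch. Real systems like PhotoDNA use perceptual hashes that survive resizing and re-encoding; the simplified sketch below uses an ordinary cryptographic hash (which only matches exact bytes) purely to illustrate the workflow: hash an upload, check it against a shared list of known-bad hashes maintained by a third party. All names and hash values here are hypothetical illustrations, not any vendor's actual API.

```python
import hashlib

# Hypothetical hash list of the kind a third party such as Thorn or
# NCMEC aggregates across platforms. Real deployments use perceptual
# hashes (e.g. PhotoDNA) that match near-duplicates; SHA-256, used here
# for illustration, matches only byte-identical files.
# (This entry is the SHA-256 digest of the bytes b"test".)
KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def hash_upload(data: bytes) -> str:
    """Return a hex digest identifying the uploaded file."""
    return hashlib.sha256(data).hexdigest()

def should_block(data: bytes) -> bool:
    """Flag an upload whose hash appears on the shared block list."""
    return hash_upload(data) in KNOWN_BAD_HASHES

print(should_block(b"test"))      # True: digest is on the list
print(should_block(b"harmless"))  # False: digest is not on the list
```

The point of the third-party model is the shared list: a hash identified on one platform immediately protects every other platform that subscribes, which is exactly what an in-house replacement loses.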
The reality on the ground tells a very different story than Musk’s PR spin:
The Canadian Centre for Child Protection (C3P), an independent online CSAM watchdog group, reviewed several X accounts and hashtags flagged by NBC News that were promoting the sale of CSAM, and followed links promoted by several of the accounts. The organization said that, within minutes, it was able to identify accounts that posted images of previously identified CSAM victims who were as young as 7. It also found apparent images of CSAM in thumbnail previews populated on X and in links to Telegram channels where CSAM videos were posted. One such channel showed a video of a boy estimated to be as young as 4 being sexually assaulted. NBC News did not view or have in its possession any of the abuse material.
Lloyd Richardson, director of information technology at C3P, said the behavior being exhibited by the X users was “a bit old hat” at this point, and that X’s response “has been woefully insufficient.” “It seems to be a little bit of a game of Whac-A-Mole that goes on,” he said. “There doesn’t seem to be a particular push to really get to the root cause of the issue.”
NBC’s investigation found that Musk’s “priority #1” has become a free-for-all:
A review of many hashtags with terms known to be associated with CSAM shows that the problem is, if anything, worse than when Musk initially took over. What was previously a trickle of posts of fewer than a dozen per hour is now a torrent propelled by accounts that appear to be automated — some posting several times a minute.
Despite the continued flood of posts and sporadic bans of individual accounts, the hashtags observed by NBC News over several weeks remained open and viewable as of Wednesday. And some of the hashtags that were identified in 2023 by NBC News as hosting the child exploitation advertisements are still being used for the same purpose today.
That seems bad! Read it again: hashtags that were flagged as CSAM distribution channels in 2023 are still active and being used for the same purpose today. This isn’t the kind of mistake that happens when you’re overwhelmed by scale—this is what happens when you simply don’t give a shit.
Look, I’m usually willing to defend platforms against unfair criticism about content moderation. The scale makes perfection impossible, and edge cases are genuinely hard. But this isn’t about edge cases or the occasional mistake—this is about leaving up known, previously identified CSAM distribution channels. That’s not a content moderation failure; that’s a policy failure.
As the article also notes, ExTwitter tried to get praised for all the work it was doing with Thorn, in an effort to show how strongly it was fighting CSAM. This post from just last year looks absolutely ridiculous now that they stopped paying Thorn and the org had to cut them off.
But the real kicker comes from Thorn itself, which essentially confirms that ExTwitter was more interested in the PR value of their partnership than actually using the technology:
Pailes Halai, Thorn’s senior manager of accounts and partnerships, who oversaw the X contract, said that some of Thorn’s software was designed to address issues like those posed by the hashtag CSAM posts, but that it wasn’t clear if they ever fully implemented it.
“They took part in the beta with us last year,” he said. “So they helped us test and refine, etc, and essentially be an early adopter of the product. They then subsequently did move on to being a full customer of the product, but it’s not very clear to us at this point how and if they used it.”
So there you have it: ExTwitter signed up for anti-CSAM tools, used the partnership for good PR, then perhaps never bothered to fully implement the system, and finally stopped paying the bills entirely.
This is what “priority #1” looks like in Elon Musk’s world: lots of performative tweets, followed by firing the experts, cutting off the specialized tools, and letting the problem explode while pretending you’re building something better. I’m sure that, like “full self-driving” and Starships that don’t explode, the tech will be fully deployed any day now.
Filed Under: child safety, csam, elon musk, prevention
Companies: thorn, twitter, x
MAGA’s Sickening Hypocrisy: From ‘Save The Children’ To ‘Defund The Org That Actually Saves Children’
from the this-is-horrifying dept
After years of screaming “save the children” while baselessly accusing others of exploiting kids, the Trump administration is now trying to destroy the actual infrastructure that saves children. This one crosses from standard MAGA hypocrisy into genuinely evil territory.
I’m one of those people who doesn’t think you can (or should) call most people inherently “bad,” but if you support what the Trump administration is doing here, you are a bad person.
According to multiple reports, including former NCMEC board member Don McGowan and independent journalist Marisa Kabas (who has been breaking story after story lately), the Justice Department informed the National Center for Missing and Exploited Children that it will pull its entire funding if it doesn’t remove all references to LGBTQ+ issues and start “deadnaming” trans kids. According to the Verge, NCMEC has already started complying and has removed at least three documents:
NCMEC’s website hosts numerous reports on the state of various child endangerment issues, including data about abduction, sex trafficking, and online enticement. However, comparisons with the Wayback Machine show that at least three documents on its “NCMEC Data” page — including a report on missing children with suicidal tendencies, a report on male victims of child sex trafficking, and an overall data analysis of children missing from care — have been removed since the page’s last archived date of January 24th. Archived copies of all three reports included mentions of LGBTQ+ and particularly transgender children. The 13 remaining publications on NCMEC’s data page do not appear to contain these references.
To be clear: NCMEC isn’t above criticism. Just months ago, we published a detailed interview with McGowan exposing serious problems with NCMEC’s board, including its history of kowtowing to Trump and its reluctance to protect trans kids due to its Trumpist board members.
But crucially: The frontline workers and systems at NCMEC — particularly the CyberTipline — operate as vital infrastructure for child protection, regardless of the board’s political failings.
The CyberTipline serves as the legally mandated clearinghouse for child sexual abuse material (CSAM) reports from online service providers, coordinating between platforms and law enforcement to investigate and combat child exploitation.
The CyberTipline isn’t perfect. Last year’s Stanford Internet Observatory report (published before Jim Jordan effectively killed the organization over bogus “censorship” claims) detailed significant challenges in the system. While Congress addressed some issues, like cloud storage restrictions, fundamental problems remain.
If NCMEC loses funding, we’re looking at the collapse of a legally mandated reporting system that processes millions of reports annually. Every tech platform, from the smallest startup to the largest social media giant, relies on this infrastructure to comply with federal law. Without it, we’d effectively create a massive regulatory black hole in online child protection.
And, in effect, this would create a world in which CSAM creators and sharers would have free rein, as the key bit of infrastructure in stopping them would be wiped out.
The CyberTipline’s current struggles stem largely from resource constraints. Increased funding for NCMEC and CSAM investigators would strengthen our child protection infrastructure.
So, of course, the Trump DOJ wants to kill the funding. Entirely.
And why? Because they’re so obsessed with what genitals anyone has (which is none of their fucking business) that they’re trying to wipe out the very idea of transgenderism existing.
And that’s especially damaging, because trans kids are disproportionately at risk of being exploited and abused. The statistics are stark: LGBTQ+ youth are three times more likely to experience unwanted and risky online interactions than their peers, and trans youth in particular face even higher rates of targeting by online predators. These aren’t just numbers – they’re exactly the kind of cases that NCMEC’s resources should be designed to prevent and address. Having resources for LGBTQ+ kids is just incredibly important to actually protect the children.
This is also why this story is so sickening. For a while now, the Trumpist disinformation-peddling cultists have pushed the made-up story that Democrats are a child-abusing cult. Remember the “Pizzagate” bullshit? That morphed into the QAnon conspiracy theory and its “save the children” slogan. In practice, QAnon’s “save the children” campaign was really a “let’s accuse all Democrats of being pedophiles” campaign.
Yet, here, the Trump administration, which QAnon has long supported, is literally looking to pull funding from the one organization most responsible for actually protecting children… if it won’t throw a bunch of the kids it’s supposed to be protecting under the bus.
The consequences of this directive are stark and binary: either NCMEC loses its funding, effectively crippling our national CSAM reporting infrastructure, or it complies by abandoning vulnerable LGBTQ+ youth to increased exploitation risks. Either way, children will suffer real, horrific levels of otherwise preventable harm — not the imagined threats of conspiracy theorists, but actual, documented dangers that NCMEC currently works to prevent.
This creates an impossible choice: either we lose the primary infrastructure for fighting online child exploitation, or we institutionalize discrimination that puts already-vulnerable children at even greater risk. Both outcomes achieve exactly the opposite of what any legitimate child protection effort should do.
Sickening.
This isn’t a policy dispute or a culture war skirmish. This is the deliberate dismantling of child protection infrastructure to score political points.
There are issues where reasonable people can disagree. This isn’t one of them. When you strip away the rhetoric and look at the actual consequences, this policy serves exactly one purpose: to harm children, whether through dismantling CSAM protections or forcing the abandonment of vulnerable LGBTQ+ youth.
To those Trump supporters showing up in our comments to gloat about how “I voted for this” — yes, you did. You voted to destroy the systems that actually protect children from exploitation. No amount of “save the children” hashtags can obscure that reality.
Filed Under: csam, cybertipline, donald trump, funding, lgbtq, maga, save the children
Companies: ncmec
Elon Musk’s “Sorry, Twitter No Longer Exists” Defense Falls Flat Down Under
from the you-can't-touch-me-because-my-name-is-different dept
You know how little kids sometimes play a game where they claim they’ve changed their name, and you can no longer blame them for what they did under their previous name? You know how that never actually works? Well, about that… Elon seems to be trying a corporate version of that trick in Australia, and it has been just as successful.
It’s no secret that this year, Elon’s ExTwitter has been fighting with Australia over demands to remove content, but there was a separate fight, going back over a year, in which the Australian eSafety Commissioner was disappointed with ExTwitter’s processes for handling child sexual abuse material (CSAM) on the platform.
If you don’t recall, a few weeks after taking over the site, Elon claimed that fighting CSAM was his “priority #1.”
And yet, he fired most of the trust and safety team, appeared to stop using industry-standard tools for finding/deleting known CSAM, and seemed to make the CSAM problem on ExTwitter much, much worse. That’s not even mentioning the time he reinstated an account that had shared an infamously horrid piece of CSAM because the poster was an Elon supporter.
Thus, a while back, the Australian eSafety Commissioner began an investigation into how the company was dealing with CSAM. Elon chose to not take it very seriously at all. From the eSafety Commissioner:
I assessed X Corp.’s response and identified 14 questions (many of which involved multiple sub-questions) where it failed to provide the information required by the Notice. In some instances X Corp. had failed to provide any response to the question, such as by leaving the boxes entirely blank. In other instances, X Corp. provided a response that was otherwise incomplete and/or inaccurate.
On 6 April 2023, my office sent follow-up questions to X Corp. to provide a further opportunity to provide the information required by the Notice. The correspondence stated that my office was seeking this information to assess whether X Corp. had complied with the Notice.
On 5 May, X Corp. provided information in response to the follow-up questions. It is evident from many of X Corp.’s subsequent responses that it held information required by the Notice and was capable of providing that information at first instance.
Because of this, Australia fined ExTwitter $400k almost a year ago. After the fine was assessed, Elon fought back and appealed the ruling, continuing his standard “ignore first, fight it out in court later” approach to so many things.
He did so by claiming that the fine was for actions taken by “Twitter” (under his watch) but that “Twitter” no longer existed, because there was a different company called “X” that he now ran. So any demands for “Twitter” must be null and void, as X operated under a totally different set of laws.
Because Elon thinks he’s clever and that everyone else is very, very stupid.
To be clear, it’s a little more complicated here. Part of the argument was that since Twitter was a Delaware-based company, while X is a Nevada-based company, different laws apply under each state. But, the idea that this somehow absolved the company of having to deal with legal issues that began under the previous entity still seems like one of those tricks only a bratty schoolboy would try.
Turns out this didn’t work. I’m not sure what’s Australian for “not fucking impressed,” but I’d say that it applies to this judge.
A central feature of this proceeding is that, on 15 March 2023, Twitter Inc merged into X Corp. Upon that occurring, Twitter Inc ceased to exist. These facts were not in dispute, as they were the subject of a statement of agreed facts that was received into evidence….
[….]
To adopt the language of the Nevada statute, Twitter Inc was a constituent entity that merged into X Corp. It was only upon that occurrence that Twitter Inc ceased to exist, and it was only its separate existence that ceased. X Corp’s “status” is as the surviving entity of a statutory merger, in which Twitter Inc was a constituent entity that merged into X Corp, with all of the legal consequences that ensue….
Then there was a separate argument that ExTwitter had made, basically arguing that under Australia’s Regulatory Powers Act, ExTwitter didn’t have to respond because the notice it received “did not specify the place of the contraventions” because, again, it tried to pretend that “Twitter” and “X” were different companies.
This just comes across as another bit of gamesmanship.
And again, the judge wasn’t impressed, noting that the notice identified both companies:
In the present case, X Corp did not advance any persuasive basis on which to conclude that the failure of the infringement notice to identify the place of the contraventions could have prejudiced it. I have already mentioned that X Corp submitted that the location of the alleged contravention could have indicated which was the correct legal entity to which the notice should have been directed. But on X Corp’s own case, the location of the alleged contraventions had no relevance to this question, which was said to turn on the Online Safety Act itself, or the effects of the merger between Twitter Inc and X Corp, as provided for by Nevada law. And, as the Commissioner submitted, the infringement notice was addressed to X Corp and identified both Twitter Inc and X Corp as the relevant “provider”. It was not otherwise explained how X Corp was prejudiced by the fact that the notice did not identify where the failure to comply with s 57 of the Online Safety Act occurred. No prejudice, or even potential prejudice, is apparent. To the contrary, I accept the Commissioner’s submission that X Corp had everything it needed to know in order to consider the allegations made against it in the infringement notice.
I recognize that sometimes Elon gets away with his “cute” legal arguments. But maybe just following the law is better than trying to tap dance around it with obviously stupid rationales?
Filed Under: australia, csam, elon musk, esafety commissioner, name change
Companies: twitter, x
Durov’s Arrest Details Released, Leaving More Questions Than Answers
from the still-concerning dept
Is the arrest of Pavel Durov, founder of Telegram, a justified move to combat illegal activities, or is it a case of dangerous overreach that threatens privacy and free speech online? We had hoped that when French law enforcement released the details of the charges we’d have a better picture of what happened. Instead, we’re actually just left with more questions and concerns.
Earlier today we wrote about the arrest and how it already raised a lot of questions that didn’t have easy answers. Soon after that post went out, the Tribunal Judiciaire de Paris released a press release with some more details about the investigation (in both French and English). All it does is leave most of the questions open, which might suggest they don’t have very good answers.
First, the report notes “the context of the judicial investigation,” which may be different from what he is eventually charged with, though the issues are listed as “charges.”
I would bucket the list of charges into four categories, each of which raises concerns. If I had to put these in order of greatest concern to least, it would be as follows:
- Stuff about encryption. The last three charges are all variations on “providing a cryptology service/tool” without some sort of “prior declaration” or “certified declaration.” Apparently, France (like some other countries) has certain import/export controls on encryption. It appears they’re accusing Durov of violating those by not going through the official registration process. But, here, it’s hard not to see that as totally pretextual: an excuse to arrest Durov over other stuff they don’t like him doing.
- “Complicity” around a failure to moderate illegal materials. There are a number of charges around this. Complicity to “enable illegal transactions” for “possessing” and “distributing” CSAM, for selling illegal drugs, hacking tools, and organized fraud. But what is the standard for “complicity” here? This is where it gets worrisome. If it’s just a failure to proactively moderate, that seems very problematic. If it’s ignoring direct reports of illegal behavior, then it may be understandable. If it’s more directly and knowingly assisting criminal behavior, then things get more serious. But the lack of details here makes me worry it’s one of the earlier options.
- Refusal to cooperate with law enforcement demands for info: This follows on from my final point in number two. There’s a suggestion in the charges (the second one) that Telegram potentially ignored demands from law enforcement. It says there was a “refusal to communicate, at the request of competent authorities, information or documents necessary for carrying out and operating interceptions allowed by law.” This could be about encryption, and a refusal to provide info they didn’t have, or about not putting in a backdoor. If it’s either of those, that would be very concerning. However, if it’s just “they didn’t respond to lawful subpoenas/warrants/etc.” that… could be something that’s more legitimate.
- Finally, money laundering. Again, this one is a bit unclear, but it says “laundering of the proceeds derived from organized group’s offences and crimes.” It’s difficult to know how serious any of this is, as that could represent something legitimate, or it could be French law enforcement saying “and they profited off all of this!” We’ve seen charges in other contexts where the laundering claims are kind of thrown in. Details could really matter here.
In the end, though, a lot of this does seem potentially very problematic. So far, there’s been no revelation of anything that makes me say “oh, well, that seems obviously illegal.” A lot of the things listed in the charge sheet are things that lots of websites and communications providers could be said to have done themselves, though perhaps to a different degree.
So we still don’t really have enough details to know if this is a ridiculous arrest, but it does seem to be trending towards that so far. Yes, some will argue that Durov somehow “deserves” this for hosting bad content, but it’s way more complicated than that.
I know from the report that Stanford put out earlier this year that Telegram does not report CSAM to NCMEC at all. That is very stupid. I would imagine Telegram would argue that as a non-US company it doesn’t have to abide by such laws. These charges are in France rather than the US, but it still seems bad that the company does not report any CSAM to the generally agreed-upon organization that handles such reports, and to which companies operating in the US have a legal requirement to report.
But, again, there are serious questions about where you draw these lines. CSAM is content that is outright illegal. But some other stuff may just be material that some people dislike. If the investigation is focused just on the outright illegal content that’s one thing. If it’s not, then this starts to look worse.
On top of that, as always, are the intermediary liability questions, where the question should be how much responsibility a platform has for its users’ use of the system. The list of “complicity” in various bad things worries me because every platform has some element of that kind of content going on, in part because it’s impossible to stop entirely.
And, finally, as I mentioned earlier today, it still feels like many of these issues would normally be worthy of a civil procedure, perhaps by the EU, rather than a criminal procedure by a local court in France.
So in the end, while it’s useful to see the details of this investigation, and it makes me lean ever so slightly in the direction of thinking these potential charges go too far, we’re still really missing many of the details. Nothing released today has calmed the concerns that this is overreach, but nothing has made it clear that it definitely is overreach either.
Filed Under: complicity, content moderation, csam, encryption, france, law enforcement, pavel durov
Companies: telegram
Suing Apple To Force It To Scan iCloud For CSAM Is A Catastrophically Bad Idea
from the this-would-make-it-harder-to-catch-criminals dept
There’s a new lawsuit in Northern California federal court that seeks to improve child safety online but could end up backfiring badly if it gets the remedy it seeks. While the plaintiff’s attorneys surely mean well, they don’t seem to understand that they’re playing with fire.
The complaint in the putative class action asserts that Apple has chosen not to invest in preventive measures to keep its iCloud service from being used to store child sex abuse material (CSAM) while cynically rationalizing the choice as pro-privacy. This decision allegedly harmed the Jane Doe plaintiff, a child whom two unknown users contacted on Snapchat to ask for her iCloud ID. They then sent her CSAM over iMessage and got her to create and send them back CSAM of herself. Those iMessage exchanges went undetected, the lawsuit says, because Apple elected not to employ available CSAM detection tools, thus knowingly letting iCloud become “a safe haven for CSAM offenders.” The complaint asserts claims for violations of federal sex trafficking law, two states’ consumer protection laws, and various torts including negligence and products liability.
Here are key passages from the complaint:
[Apple] opts not to adopt industry standards for CSAM detection… [T]his lawsuit … demands that Apple invest in and deploy means to comprehensively … guarantee the safety of children users. … [D]espite knowing that CSAM is proliferating on iCloud, Apple has “chosen not to know” that this is happening … [Apple] does not … scan for CSAM in iCloud. … Even when CSAM solutions … like PhotoDNA[] exist, Apple has chosen not to adopt them. … Apple does not proactively scan its products or services, including storages [sic] or communications, to assist law enforcement to stop child exploitation. …
According to [its] privacy policy, Apple had stated to users that it would screen and scan content to root out child sexual exploitation material. … Apple announced a CSAM scanning tool, dubbed NeuralHash, that would scan images stored on users’ iCloud accounts for CSAM … [but soon] Apple abandoned its CSAM scanning project … it chose to abandon the development of the iCloud CSAM scanning feature … Apple’s Choice Not to Employ CSAM Detection … Is a Business Choice that Apple Made. … Apple … can easily scan for illegal content like CSAM, but Apple chooses not to do so. … Upon information and belief, Apple … allows itself permission to screen or scan content for CSAM content, but has failed to take action to detect and report CSAM on iCloud. …
[Questions presented by this case] include: … whether Defendant has performed its duty to detect and report CSAM to NCMEC [the National Center for Missing and Exploited Children]. … Apple … knew or should have known that it did not have safeguards in place to protect children and minors from CSAM. … Due to Apple’s business and design choices with respect to iCloud, the service has become a go-to destination for … CSAM, resulting in harm for many minors and children [for which Apple should be held strictly liable] … Apple is also liable … for selling defectively designed services. … Apple owed a duty of care … to not violate laws prohibiting the distribution of CSAM and to exercise reasonable care to prevent foreseeable and known harms from CSAM distribution. Apple breached this duty by providing defective[ly] designed services … that render minimal protection from the known harms of CSAM distribution. …
Plaintiff [and the putative class] … pray for judgment against the Defendant as follows: … For [an order] granting declaratory and injunctive relief to Plaintiff as permitted by law or equity, including: Enjoining Defendant from continuing the unlawful practices as set forth herein, until Apple consents under this court’s order to … [a]dopt measures to protect children against the storage and distribution of CSAM on the iCloud … [and] [c]omply with quarterly third-party monitoring to ensure that the iCloud product has reasonably safe and easily accessible mechanisms to combat CSAM….”
What this boils down to: Apple could scan iCloud for CSAM, and has said in the past that it would and that it does, but in reality it chooses not to. The failure to scan is a wrongful act for which Apple should be held liable. Apple has a legal duty to scan iCloud for CSAM, and the court should make Apple start doing so.
This theory is perilously wrong.
The Doe plaintiff’s story is heartbreaking, and it’s true that Apple has long drawn criticism for its approach to balancing multiple values such as privacy, security, child safety, and usability. It is understandable to assume that the answer is for the government, in the form of a court order, to force Apple to strike that balance differently. After all, that is how American society frequently remedies alleged shortcomings in corporate practices.
But this isn’t a case about antitrust, or faulty smartphone audio, or virtual casino apps (as in other recent Apple class actions). Demanding that a court force Apple to change its practices is uniquely infeasible, indeed dangerous, when it comes to detecting illegal material its users store on its services. That’s because this demand presents constitutional issues that other consumer protection matters don’t. Thanks to the Fourth Amendment, the courts cannot force Apple to start scanning iCloud for CSAM; even pressuring it to do so is risky. Compelling the scans would, perversely, make it way harder to convict whoever the scans caught. That’s what makes this lawsuit a catastrophically bad idea.
(The unconstitutional remedy it requests isn’t all that’s wrong with this complaint, mind. Let’s not get into the Section 230 issues it waves away in two conclusory sentences. Or how it mistakes language in Apple’s privacy policy that it “may” use users’ personal information for purposes including CSAM scanning, for an enforceable promise that Apple would do that. Or its disingenuous claim that this isn’t an attack on end-to-end encryption. Or the factually incorrect allegation that “Apple does not proactively scan its products or services” for CSAM at all, when in fact it does for some products. Let’s set all of that aside. For now.)
The Fourth Amendment to the U.S. Constitution protects Americans from unreasonable searches and seizures of our stuff, including our digital devices and files. “Reasonable” generally means there’s a warrant for the search. If a search is unreasonable, the usual remedy is what’s called the exclusionary rule: any evidence turned up through the unconstitutional search can’t be used in court against the person whose rights were violated.
While the Fourth Amendment applies only to the government and not to private actors, the government can't use a private actor to carry out a search it couldn't constitutionally do itself. If the government compels or pressures a private actor to search, or if the private actor searches primarily to serve the government's interests rather than its own, then the private actor counts as a government agent for purposes of the search. The search must then comply with the Fourth Amendment; otherwise, the remedy is exclusion.
If the government – legislative, executive, or judiciary – forces a cloud storage provider to scan users’ files for CSAM, that makes the provider a government agent, meaning the scans require a warrant, which a cloud services company has no power to get, making those scans unconstitutional searches. Any CSAM they find (plus any other downstream evidence stemming from the initial unlawful scan) will probably get excluded, but it’s hard to convict people for CSAM without using the CSAM as evidence, making acquittals likelier. Which defeats the purpose of compelling the scans in the first place.
Congress knows this. That’s why, in the federal statute requiring providers to report CSAM to NCMEC when they find it on their services, there’s an express disclaimer that the law does not mean they must affirmatively search for CSAM. Providers of online services may choose to look for CSAM, and if they find it, they have to report it – but they cannot be forced to look.
Now do you see the problem with the Jane Doe lawsuit against Apple?
This isn’t a novel issue. Techdirt has covered it before. It’s all laid out in a terrific 2021 paper by Jeff Kosseff. I have also discussed this exact topic over and over and over and over and over and over again. As my latest publication (based on interviews with dozens of people) describes, all the stakeholders involved in combating online CSAM – tech companies, law enforcement, prosecutors, NCMEC, etc. – are excruciatingly aware of the “government agent” dilemma, and they all take great care to stay very far away from potentially crossing that constitutional line. Everyone scrupulously preserves the voluntary, independent nature of online platforms’ decisions about whether and how to search for CSAM.
And now here comes this lawsuit like the proverbial bull in a china shop, inviting a federal court to destroy that carefully maintained and exceedingly fragile dynamic. The complaint sneers at Apple’s “business choice” as a wrongful act to be judicially reversed rather than something absolutely crucial to respect.
Fourth Amendment government agency doctrine is well-established, and there are numerous cases applying it in the context of platforms’ CSAM detection practices. Yet Jane Doe’s counsel don’t appear to know the law. For one, their complaint claims that “Apple does not proactively scan its products or services … to assist law enforcement to stop child exploitation.” Scanning to serve law enforcement’s interests would make Apple a government agent. Similarly, the complaint claims Apple “has failed to take action to detect and report CSAM on iCloud,” and asks “whether Defendant has performed its duty to detect and report CSAM to NCMEC.” This conflates two critically distinct actions. Apple does not and cannot have any duty to detect CSAM, as expressly stated in the statute imposing a duty to report CSAM. It’s like these lawyers didn’t even read the entire statute, much less any of the Fourth Amendment jurisprudence that squarely applies to their case.
Any competent plaintiff’s counsel should have figured this out before filing a lawsuit asking a federal court to make Apple start scanning iCloud for CSAM, thereby making Apple a government agent, thereby turning the compelled iCloud scans into unconstitutional searches, thereby making it likelier for any iCloud user who gets caught to walk free, thereby shooting themselves in the foot, doing a disservice to their client, making the situation worse than the status quo, and causing a major setback in the fight for child safety online.
The reason nobody’s filed a lawsuit like this against Apple to date, despite years of complaints from left, right, and center about Apple’s ostensibly lackadaisical approach to CSAM detection in iCloud, isn’t because nobody’s thought of it before. It’s because they thought of it and they did their fucking legal research first. And then they backed away slowly from the computer, grateful to have narrowly avoided turning themselves into useful idiots for pedophiles. But now these lawyers have apparently decided to volunteer as tribute. If their gambit backfires, they’ll be the ones responsible for the consequences.
Riana Pfefferkorn is a policy fellow at Stanford HAI who has written extensively about the Fourth Amendment’s application to online child safety efforts.
Filed Under: 4th amendment, class action, csam, evidence, proactive scanning, scanning
Companies: apple
Techdirt Podcast Episode 390: The Challenges Facing NCMEC’s CyberTipline
from the looking-closer dept
The National Center for Missing & Exploited Children‘s CyberTipline is a central component of the fight against child sexual abuse material (CSAM) online, but there have been a lot of questions about how well it truly works. A recent report from the Stanford Internet Observatory, which we’ve published two recent posts about, provides an extremely useful window into the system. This week, we’re joined by two of the report’s authors, Shelby Grossman and Riana Pfefferkorn, to dig into the content of the report and the light it sheds on the challenges faced by the CyberTipline.
Follow the Techdirt Podcast on Soundcloud, subscribe via Apple Podcasts or Spotify, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
Filed Under: csam, cybertipline, riana pfefferkorn, shelby grossman
Companies: ncmec
The REPORT Act: Enhancing Online Child Safety Without the Usual Congressional Nonsense
from the a-good-bill?-for-the-children? dept
For years and years, Congress has been pushing a parade of horrible “protect the children online” bills that seem to somehow get progressively worse each time. I’m not going through the entire list of them, because it’s virtually endless.
One of the most frustrating things about those bills, and the pomp and circumstance around them, is that they ignore the simpler, more direct things that Congress could do that would actually help.
Just last week, we wrote about the Stanford Internet Observatory’s big report on the challenges facing the CyberTipline, run by the National Center for Missing & Exploited Children (NCMEC). We wrote two separate posts about the report (and also discussed it on the latest episode of our new podcast, Ctrl-Alt-Speech) because there was so much useful information in there. As we noted, there are real challenges in making the reporting of child sexual abuse material (CSAM) work better, and it’s not because people don’t want to help. It’s actually because of a set of complex issues that are not easily solvable (read the report or my articles for more details).
But there were still a few clear steps that could be taken by Congress to help.
This week, the REPORT Act passed Congress, and it includes… a bunch of those straightforward, common sense things that should help improve the CyberTipline process. The key provision allows the CyberTipline to modernize a bit, including by using cloud storage. Until now, cloud storage vendors couldn't work with NCMEC, out of fear that they'd face criminal liability for “hosting CSAM.”
This bill fixes that, and should enable NCMEC to make use of some better tools and systems, including better classifiers, which are becoming increasingly important.
There are also provisions letting victims, and parents of victims, report CSAM involving the child directly to NCMEC, which can be immensely helpful in trying to stop the spread of some content (and in focusing some law enforcement responses).
There are also some technical fixes that require platforms to retain certain records for a longer period of time. This was another important point that was highlighted in the Stanford report. Given the flow of information and prioritization, sometimes by the time law enforcement realized it should get a warrant to get more info from a platform, the platform would have already deleted it as required under existing law. Now that time period is extended to give law enforcement a bit more time.
The one bit we'll have to watch is that it extends the reporting requirements for social media to include violations of 18 USC 1591, which is the law against sex trafficking. Senator Marsha Blackburn, a co-author of the bill, is claiming that this means that “big tech companies will now be required to report when children are being trafficked, groomed or enticed by predators.”
So, it’s possible I’m misreading the law (and how it works with existing laws…) but I see nothing limiting this to “big tech.” It appears to apply to any “electronic communication service provider or remote computing service.”
Also, given that Marsha Blackburn appears to consider “grooming” to include things like LGBTQ content in schools, I worried that this would be a backdoor to making all internet websites “report” such content to NCMEC, which would flood their systems with utter nonsense. Thankfully, 1591 includes some pretty specific definitions of sex trafficking that do not match up with Blackburn's definition. So she'll get the PR victory among nonsense peddlers for pretending it will lead to the reporting of the non-grooming she insists is grooming.
And, of course, while this bill was actually good (and it’s surprising to see Blackburn on a good internet bill!) it’s not going to stop her from continuing to push KOSA and other nonsense moral panic “protect the children” bills that will actually do real harm.
Filed Under: csam, cybertipline, jon ossoff, marsha blackburn, modernization, report act, sex trafficking
Companies: ncmec
The Problems Of The NCMEC CyberTipline Apply To All Stakeholders
from the no-easy-answers dept
The NCMEC CyberTipline's failure to combat child sexual abuse material (CSAM) as well as it could is extremely frustrating. But as you look at the details, you realize there just aren't any particularly easy fixes. While there are a few changes that could improve things at the margin, the deeper you look, the more challenging the whole setup becomes. There aren't any easy answers.
And that sucks, because Congress and the media often expect easy answers to complex problems. And that might not be possible.
This is the second post about the Stanford Internet Observatory's report on the NCMEC CyberTipline, which is the somewhat useful, but tragically limited, main way that investigations of CSAM online are conducted. In the first post, we discussed the structure of the system, and how the incentive structure regarding law enforcement is a big part of what's making the system less impactful than it otherwise might be.
In this post, I want to dig in a little more about the specific challenges in making the CyberTipline work better.
The Constitution
I’m not saying that the Constitution is a problem, but it represents a challenge here. In the first post, I briefly mentioned Jeff Kosseff’s important article about how the Fourth Amendment and the structure of NCMEC makes things tricky, but it’s worth digging in a bit here to understand the details.
The US government set up NCMEC as a private non-profit in part because, if a government agency were doing this work, there would be significant Fourth Amendment concerns about whether the evidence it receives was collected without a warrant. If it were a government agency, the law could not require companies to hand over the info without a warrant.
So, Congress did a kind of two-step dance here: they set up this “private” non-profit, and then created a law that requires companies that come across CSAM online to report it to the organization. And all of this seems to rely on a kind of fiction that if we pretend NCMEC isn’t a government agent, then there’s no 4th Amendment issue.
From the Stanford report:
The government agent doctrine explains why Section 2258A allows, but does not require, online platforms to search for CSAM. Indeed, the statute includes an express disclaimer that it does not require any affirmative searching or monitoring. Many U.S. platforms nevertheless proactively monitor their services for CSAM, yielding millions of CyberTipline reports per year. Those searches’ legality hinges on their voluntariness. The Fourth Amendment prohibits unreasonable searches and seizures by the government; warrantless searches are typically considered unreasonable. The Fourth Amendment doesn’t generally bind private parties, however the government may not sidestep the Fourth Amendment by making a private entity conduct a search that it could not constitutionally do itself. If a private party acts as the government’s “instrument or agent” rather than “on his own initiative” in conducting a search, then the Fourth Amendment does apply to the search. That’s the case where a statute either mandates a private party to search or “so strongly encourages a private party to conduct a search that the search is not primarily the result of private initiative.” And it’s also true in situations where, with the government’s knowledge or acquiescence, a private actor carries out a search primarily to assist the government rather than to further its own purposes, though this is a case-by-case analysis for which the factors evaluated vary by court.
Without a warrant, searches by government agents are generally unconstitutional. The usual remedy for an unconstitutional search is for a court to throw out all evidence obtained as a result of it (the so-called “exclusionary rule”). If a platform acts as a government agent when searching a user’s account for CSAM, there is a risk that the resulting evidence could not be introduced against the user in court, making a conviction (or plea bargain) harder for the prosecution to obtain. This is why Section 2258A does not and could not require online platforms to search for CSAM: it would be unconstitutional and self-defeating.
In CSAM cases involving CyberTipline reports, defendants have tried unsuccessfully to characterize platforms as government agents whose searches were compelled by Section 2258A and/or by particular government agencies or investigators. But courts, pointing to the statute’s express disclaimer language (and, often, the testimony of investigators and platform employees), have repeatedly held that platforms are not government agents and their CSAM searches were voluntary choices motivated mainly by their own business interests in keeping such repellent material off their services.
So, it’s quite important that the service providers that are finding and reporting CSAM are not seen as agents of the government. It would destroy the ability to use that evidence in prosecuting cases. That’s important. And, as the report notes, it’s also why it would be a terrible idea to require social media to proactively try to hunt down CSAM. If the government required it, it would effectively light all that evidence on fire and prevent using it for prosecution.
That said, the courts (including in a ruling by Neil Gorsuch while he was on the appeals court) have made it clear that, while platforms may not be government agents, it’s pretty damn clear that NCMEC and the CyberTipline are. And that creates some difficulties.
In a landmark case called Ackerman, one federal appeals court held that NCMEC is a “governmental entity or agent.” Writing for the Tenth Circuit panel, then-judge Neil Gorsuch concluded that NCMEC counts as a government entity in light of NCMEC’s authorizing statutes and the functions Congress gave it to perform, particularly its CyberTipline functions. Even if NCMEC isn’t itself a governmental entity, the court continued, it acted as an agent of the government in opening and viewing the defendant’s email and four attached images that the online platform had (as required) reported to NCMEC. The court ruled that those actions by NCMEC were a warrantless search that rendered the images inadmissible as evidence. Ackerman followed a trial court-level decision, Keith, which had also deemed NCMEC a government agent: its review of reported images served law enforcement interests, it operated the CyberTipline for public not private interests, and the government exerts control over NCMEC including its funding and legal obligations. As an appellate-level decision, Ackerman carries more weight than Keith, but both have proved influential.
The private search doctrine is the other Fourth Amendment doctrine commonly raised in CSAM cases. It determines what the government or its agents may view without a warrant upon receiving a CyberTipline report from a platform. As said, the Fourth Amendment generally does not apply to searches by private parties. “If a private party conducted an initial search independent of any agency relationship with the government,” the private search doctrine allows law enforcement (or NCMEC) to repeat the same search so long as they do not exceed the original private search’s scope. Thus, if a platform reports CSAM that its searches had flagged, NCMEC and law enforcement may open and view the files without a warrant so long as someone at the platform had done so already. The CyberTipline form lets the reporting platform indicate which attached files it has reviewed, if any, and which files were publicly available.
For files that were not opened by the platform (such as where a CyberTipline submission is automated without any human review), Ackerman and a 2021 Ninth Circuit case called Wilson hold that the private search exception does not apply, meaning the government or its agents (i.e., NCMEC) may not open the unopened files without a warrant. Wilson disagreed with the position, adopted by two other appeals-court decisions, that investigators’ warrantless opening of unopened files is permissible if the files are hash matches for files that had previously been viewed and confirmed as CSAM by platform personnel. Ackerman concluded by predicting that law enforcement “will struggle not at all to obtain warrants to open emails when the facts in hand suggest, as they surely did here, that a crime against a child has taken place.”
To sum up: Online platforms’ compliance with their CyberTipline reporting obligations does not convert them into government agents so long as they act voluntarily in searching their platforms for CSAM. That voluntariness is crucial to maintaining the legal viability of the millions of reports platforms make to the CyberTipline each year. This imperative shapes the interactions between platforms and U.S.-based legislatures, law enforcement, and NCMEC. Government authorities must avoid crossing the line into telling or impermissibly pressuring platforms to search for CSAM or what to search for and report. Similarly, platforms have an incentive to maintain their CSAM searches’ independence from government influence and to justify those searches on rationales “separate from assisting law enforcement.” When platforms (voluntarily) report suspected CSAM to the CyberTipline, Ackerman and Wilson interpret the private search doctrine to let law enforcement and NCMEC warrantlessly open and view only user files that had first been opened by platform personnel before submitting the tip or were publicly available.
This is all pretty important in making sure that the whole system stays on the right side of the 4th Amendment. As much as some people really want to force social media companies to proactively search for and report CSAM, mandating that creates real problems under the 4th Amendment.
As for the NCMEC and law enforcement side of things, the requirement to get a warrant for unopened communications remains important. But, as noted below, sometimes law enforcement doesn’t want to get a warrant. If you’ve been reading Techdirt for any length of time, this shouldn’t surprise you. We see all sorts of areas where law enforcement refuses to take that basic step of getting a warrant.
Understanding that framing is important to understanding the rest of this, including where each of the stakeholders falls down. Let's start with the biggest problem of all: where law enforcement fails.
Law Enforcement
In the first article on this report, we noted that the incentive structure has made it such that law enforcement often tries to evade this entire process. It doesn’t want to go through the process of getting warrants some of the time. It doesn’t want to associate with the ICAC task forces because they feel like it puts too much of a burden on them, and if they don’t take care of it, someone else on the task force will. And sometimes they don’t want to deal with CyberTipline reports because they’re afraid that if they’re too slow after getting a report, they might face liability.
Most of these issues seem to boil down to law enforcement not wanting to do its job.
But the report details some of the other challenges for law enforcement. And it starts with just how many reports are coming in:
Almost across the board law enforcement expressed stress over their inability to fully investigate all CyberTipline reports due to constraints in time and resources. An ICAC Task Force officer said “You have a stack [of CyberTipline reports] on your desk and you have to be ok with not getting to it all today. There is a kid in there, it’s really quite horrible.” A single Task Force detective focused on internet crimes against children may be personally responsible for 2,000 CyberTipline reports each year. That detective is responsible for working through all of their tips and either sending them out to affiliates or investigating them personally. This process involves reading the tip, assessing whether a crime was committed, and determining jurisdiction; just determining jurisdiction might necessitate multiple subpoenas. Some reports are sent out to affiliates and some are fully investigated by detectives at the Task Force.
An officer at a Task Force with a relatively high CyberTipline report arrest rate said “we are stretched incredibly thin like everyone.” An officer in a local police department said they were personally responsible for 240 reports a year, and that all of them were actionable. When asked if they felt overwhelmed by this volume, they said yes. While some tips involve self-generated content requiring only outreach to the child, many necessitate numerous search warrants. Another officer, operating in a city with a population of 100,000, reported receiving 18–50 CyberTipline reports annually, actively investigating around 12 at any given time. “You have to manage that between other egregious crimes like homicides,” they said. This report will not extensively cover the issue of volume and law enforcement capacity, as this challenge is already well-documented and detailed in the 2021 U.S. Department of Homeland Security commissioned report, in Cullen et al., and in a 2020 Government Accountability Office report. “People think this is a one-in-a-million thing,” a Task Force officer said. “What they don’t know is that this is a crime of secrecy, and could be happening at four of your neighbors’ houses.”
And of course, making social media platforms more liable doesn’t help to fix much here. At best, it makes it worse because it encourages even more reporting by the platforms, which only further overloads law enforcement.
Given all those reports the cops are receiving, you’d hope they had a good system for managing them. But your hope would not be fulfilled:
Law enforcement pick a certain percentage of reports to investigate. The selection is not done in a very scientific way—one respondent described it as “They hold their finger up in the air to feel the wind.” An ICAC Task Force officer said triage is more of an art than a science. They said that with experience you get a feel for whether a case will have legs, but that you can never be certain, and yet you still have to prioritize something.
That seems less than ideal.
Another problem, though, is that a lot of the reports are not prosecutable at all. Because of the incentives discussed in the first post, apparently certain known memes get reported to the CyberTipline quite frequently, and police feel they just clog up the system. But because the platforms fear significant liability if they don’t report those memes, they keep reporting them.
U.S. law requires that platforms report this content if they find it, and that NCMEC send every report to law enforcement. When NCMEC knows a report contains viral content or memes they will label it “informational,” a category that U.S. law enforcement typically interpret as meaning the report can be ignored, but not all such reports get labeled “informational.” Additionally there are an abundance of “age difficult” reports that are unlikely to lead to prosecution. Law enforcement may have policies requiring some level of investigation or at least processing into all noninformational reports. Consequently, officers often feel inundated with reports unlikely to result in prosecution. In this scenario, neither the platforms, NCMEC, nor law enforcement agencies feel comfortable explicitly ignoring certain types of reports. An employee from a platform that is relatively new to NCMEC reporting expressed the belief that “It’s best to over-report, that’s what we think.”
At best, this seems to annoy law enforcement, but it’s a function of how the system works:
An officer expressed frustration over platforms submitting CyberTipline reports that, in their view, obviously involve adults: “Tech companies have the ability to […] determine with a high level of certainty if it’s an adult, and they need to stop sending [tips of adults].” This respondent also expressed a desire that NCMEC do more filtering in this regard. While NCMEC could probably do this to some extent, they are again limited by the fact that they cannot view an image if the platform did not check the “reviewed” box (Figure 5.3 on page 26). NCMEC’s inability to use cloud services also makes it difficult for them to use machine learning age classifiers. When we asked NCMEC about the hurdles they face, they raised the “firehose of I’ll just report everything” problem.
Again, this all seems pretty messy. Of course you want companies to report anything they find that might be CSAM. And, of course, you want NCMEC to pass them on to law enforcement. But the end result is overwhelmed law enforcement with no clear process for triage and dealing with a lot of reports that were sent in an abundance of caution but which are not at all useful to law enforcement.
And, of course, there are other challenges that policymakers probably don’t think about. For example: how do you deal with hacked accounts? How much information is it right for the company to share with law enforcement?
One law enforcement officer provided an interesting example of a type of report he found frustrating: he said he frequently gets reports from one platform where an account was hacked and then used to share CSAM. This platform provided the dates of multiple password changes in the report, which the officer interpreted as indicating the account had been hacked. Despite this, they felt obligated to investigate the original account holder. In a recent incident they described, they were correct that the account had been hacked. They expressed that if the platform explicitly stated their suspicion in the narrative section of the report, such as by saying something like “we think this account may have been hacked,” they would then feel comfortable de-prioritizing these tips. We subsequently learned from another respondent that this platform provides time stamps for password changes for all of their reports, putting the burden on law enforcement to assess whether the password changes were of normal frequency, or whether they reflected suspicious activity.
With that said, the officer raised a valid issue: whether platforms should include their interpretation of the information they are reporting. One platform employee we interviewed who had previously worked in law enforcement acknowledged that they would have found the platform’s unwillingness to explicitly state their hunch frustrating as well. However, in their current role they also would not have been comfortable sharing a hunch in a tip: “I have preached to the team that anything they report to NCMEC, including contextual information, needs to be 100% accurate and devoid of personal interpretation as much as possible, in part because it may be quoted in legal process and case reports down the line.” They said if a platform states one thing in a tip, but law enforcement discovers that is not the case, that could make it more difficult for law enforcement to prosecute, and could even ruin their case. Relatedly, a former platform employee said some platforms believe if they provide detailed information in their reports courts may find the reports inadmissible. Another platform employee said they avoid sharing such hunches for fear of it creating “some degree of liability [even if] not legal liability” if they get it wrong.
The report details how local prosecutors are also loath to bring cases, because it’s tricky to find a jury who can handle a CSAM case:
It is not just police chiefs who may shy away from CSAM cases. An assistant U.S. attorney said that potential jurors will disqualify themselves from jury duty to avoid having to think about and potentially view CSAM. As a result, it can take longer than normal to find a sufficient number of jurors, deterring prosecutors from taking such cases to trial. There is a tricky balance to strike in how much content to show jurors, but viewing content may be necessary. While there are many tools to mitigate the effect of viewing CSAM for law enforcement and platform moderators, in this case the goal is to ensure that those viewing the content understand the horror. The assistant U.S. attorney said that they receive victim consent before showing the content in the context of a trial. Judges may also not want to view content, and may not need to if the content is not contested, but seeing it can be important as it may shape sentencing decisions.
There are also issues outside the US with law enforcement. As noted in the first article, NCMEC has become the de facto global reporting center, because so many companies are based in the US and report there. And the CyberTipline tries to share out to foreign law enforcement too, but that’s difficult:
For example, in the European Union, companies’ legal ability to voluntarily scan for CSAM required the passage of a special exception to the EU’s so-called “ePrivacy Directive”. Plus, against a background where companies are supposed to retain personal data no longer than reasonably necessary, EU member states’ data retention laws have repeatedly been struck down on privacy grounds by the courts for retention periods as short as four or ten weeks (as in Germany) and as long as a year (as in France). As a result, even if a CyberTipline report had an IP address that was linked to a specific individual and their physical address at the time of the report, it may not be possible to retrieve that information after some amount of time.
Law enforcement agencies abroad have varying approaches to CyberTipline reports and triage. Some law enforcement agencies will say if they get 500 CyberTipline reports a year, that will be 500 cases. Another country might receive 40,000 CyberTipline reports that led to just 150 search warrants. In some countries the rate of tips leading to arrests is lower than in the U.S. Some countries may find that many of their CyberTipline reports are not violations of domestic law. The age of consent may be lower than in the U.S., for example. In 2021 Belgium received about 15,000 CyberTipline reports, but only 40% contained content that violated Belgium law
And in lower income countries, the problems can be even worse, including confusion about how the entire CyberTipline process works.
We interviewed two individuals in Mexico who outlined a litany of obstacles to investigating CyberTipline reports even where a child is known to be in imminent danger. Mexican federal law enforcement have a small team of people who work to process the reports (in 2023 Mexico received 717,468 tips), and there is little rotation. There are people on this team who have been viewing CyberTipline reports day in and day out for a decade. One respondent suggested that recent laws in Mexico have resulted in most CyberTipline reports needing to be investigated at the state level, but many states lack the know-how to investigate these tips. Mexico also has rules that require only specific professionals to assess the age of individuals in media, and it can take months to receive assessments from these individuals, which is required even if the image is of a toddler
The investigator also noted that judges often will not admit CyberTipline reports as evidence because they were provided proactively and not via a court order as part of an investigation. They may not understand that legally U.S. platforms must report content to NCMEC and that the tips are not an extrajudicial invasion of privacy. As a result, officers may need a court order to obtain information that they already have in the CyberTipline report, confusing platforms who receive requests for data they put in a report a year ago. This issue is not unique to Mexico; NCMEC staff told us that they see “jaws drop” in other countries during trainings when they inform participants about U.S. federal law that requires platforms to report CSAM.
NCMEC Itself
The report also details some of the limitations of NCMEC and the CyberTipline itself, some of which are legally required (and where it seems like the law should be updated).
There appears to be a big issue with repeat reports, where NCMEC needs to “deconflict” them, but has limited technology to do so:
Improvements to the entity matching process would improve CyberTipline report prioritization processes and detection, but implementation is not always as straightforward as it might appear. The current automated entity matching process is based solely on exact matches. Introducing fuzzy matching, which would catch similarity between, for example, bobsmithlovescats1 and bobsmithlovescats2, could be useful in identifying situations where a user, after suspension, creates a new account with an only slightly altered username. With a more expansive entity matching system, a law enforcement officer proposed that tips could gain higher priority if certain identifiers are found across multiple tips. This process, however, may also require an analyst in the loop to assess whether a fuzzy match is meaningful.
It is common to hear of instances where detectives received dozens of separate tips for the same offender. For instance, the Belgium Federal Police noted receiving over 500 distinct CyberTipline reports about a single offender within a span of five months. This situation can arise when a platform automatically submits a tip each time a user attempts to upload CSAM; if the same individual tries to upload the same CSAM 60 times, it could result in 60 separate tips. Complications also arise if the offender uses a Virtual Private Network (VPN); the tips may be distributed across different law enforcement agencies. One respondent told us that a major challenge is ensuring that all tips concerning the same offender are directed to the same agency and that the detective handling them is aware that these numerous tips pertain to a single individual.
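To make the fuzzy-matching idea concrete, here's a minimal sketch using Python's standard-library `difflib`. The usernames come from the report's own example; the similarity threshold is my assumption (a real system would tune it, and, as the report notes, would likely keep an analyst in the loop to judge whether a match is meaningful):

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a similarity ratio in [0, 1]; 1.0 means identical."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Exact matching treats these as two unrelated identities...
a, b = "bobsmithlovescats1", "bobsmithlovescats2"
assert a != b

# ...while fuzzy matching flags them as a probable match.
FUZZY_THRESHOLD = 0.9  # assumption: a production system would tune this
print(similarity(a, b))                    # ~0.94
print(similarity(a, b) > FUZZY_THRESHOLD)  # True
```

The tradeoff the report hints at is visible even here: lowering the threshold catches more evasive re-registrations but produces more false matches for a human to review.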
As the report notes, there are a variety of challenges, both economic and legal, in enabling NCMEC to upgrade its technology:
First, NCMEC operates with a limited budget and as a nonprofit they may not be able to compete with industry salaries for qualified technical staff. The status quo may be “understandable given resource constraints, but the pace at which industry moves is a mismatch with NCMEC’s pace.” Additionally, NCMEC must also balance prioritizing improving the CyberTipline’s technical infrastructure with the need to maintain the existing infrastructure, review tips, or execute other non-Tipline projects at the organization. Finally, NCMEC is feeding information to law enforcement, which work within bureaucracies that are also slow to update their technology. A change in how NCMEC reports CyberTipline information may also require law enforcement agencies to change or adjust their systems for receiving that information.
NCMEC also faces another technical constraint not shared with most technology companies: because the CyberTipline processes harmful and illegal content, it cannot be housed on commercially available cloud services. While NCMEC has limited legal liability for hosting CSAM, other entities currently do not, which constrains NCMEC’s ability to work with outside vendors. Inability to transfer data to cloud services makes some of NCMEC’s work more resource intensive and therefore stymies some technical developments. Cloud services provide access to proprietary machine learning models, hardware-accelerated machine learning training and inference, on-demand resource availability and easier to use services. For example, with CyberTipline files in the cloud, NCMEC could more easily conduct facial recognition at scale and match photos from the missing children side of their work with CyberTipline files. Access to cloud services could potentially allow for scaled detection of AI-generated images and more generally make it easier for NCMEC to take advantage of existing machine learning classifiers. Moving millions of CSAM files to cloud services is not without risks, and reasonable people disagree about whether the benefits outweigh the risks. For example, using a cloud facial recognition service would mean that a third party service likely has access to the image. There are a number of pending bills in Congress that, if passed, would enable NCMEC to use cloud services for the CyberTipline while providing the necessary legal protections to the cloud hosting providers.
Platforms
And, yes, there are some concerns about the platforms. But while public discussion focuses almost exclusively on the ways platforms have supposedly failed to take this issue seriously, the report suggests the platforms' failures are much more limited.
The report notes that it’s a bit tricky to get platforms up and running with CyberTipline reporting, and that while NCMEC does some onboarding, it keeps that help very limited to avoid the 4th Amendment concerns discussed above.
And, again, some of the problem with onboarding is due to outdated tech on NCMEC’s side. I mean… XML? Really?
Once NCMEC provides a platform with an API key and the corresponding manual, integrating their workflow with the reporting API can still present challenges. The API is XML-based, which requires considerably more code to integrate with than simpler JSON-based APIs and may be unfamiliar to younger developers. NCMEC is aware that this is an issue. “Surprisingly large companies are using the manual form,” one respondent said. One respondent at a small platform had a more moderate view; he thought the API was fine and the documentation “good.” But another respondent called the API “crap.”
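For a sense of the gap the respondents are describing, here's a sketch serializing the same hypothetical report both ways. The field names are invented for illustration; they are not the real CyberTipline schema:

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical fields, invented for illustration -- not the real schema.
report = {
    "incidentType": "child-exploitation",
    "reporter": "ExamplePlatform",
    "fileCount": "1",
}

# JSON: a single call serializes the whole structure.
json_body = json.dumps(report)

# XML: build a tree node by node, then serialize it.
root = ET.Element("report")
for key, value in report.items():
    ET.SubElement(root, key).text = value
xml_body = ET.tostring(root, encoding="unicode")

print(json_body)
print(xml_body)
```

Multiply that node-by-node construction across dozens of nested fields, attachments, and namespaces, and it's easy to see why smaller platforms fall back to the manual form.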
There are also challenges under the law about what needs to be reported. As noted above and in the first article, that can often lead to over-reporting. But it can also make things difficult for companies trying to make determinations.
Platforms will additionally face policy decisions. While prohibiting illegal content is a standard approach, platforms often lack specific guidelines for moderators on how to interpret nuanced legal terms such as “lascivious exhibition.” This term is crucial for differentiating between, for example, an innocent photo of a baby in a bathtub, and a similar photo that appears designed to show the baby in a way that would be sexually arousing to a certain type of viewer. Trust and safety employees will need to develop these policies and train moderators.
And, of course, as has been widely discussed elsewhere, it’s not great that platforms have to hire human beings and expose them to this kind of content.
However, the biggest issue with reporting seems not to be companies’ unwillingness to report, but how much information they pass along. And here again, the problem is less a lack of cooperation than the incentives.
Memes and viral content pose a huge challenge for CyberTipline stakeholders. In the best case scenario, a platform checks the “Potential Meme” box and NCMEC automatically sends the report to an ICAC Task Force as “informational,” which appears to mean that no one at the Task Force needs to look at the report.
In practice, a platform may not check the “Potential Meme” box (possibly due to fixable process issues or minor changes in the image that change the hash value) and also not check the “File Viewed by Company” box. In this case NCMEC is unable to view the file, due to the Ackerman and Wilson decisions as discussed in Chapter 3. A Task Force could view the file without a search warrant and realize it is a meme, but even in that scenario it takes several minutes to close out the report. At many Task Forces there are multiple fields that have to be entered to close the report, and if Task Forces are receiving hundreds of reports of memes this becomes hugely time consuming. Sometimes, however, law enforcement may not realize the report is a meme until they have invested valuable time into getting a search warrant to view the report.
NCMEC recently introduced the ability for platforms to “batch report” memes after receiving confirmation from NCMEC that that meme is not actionable. This lets NCMEC label the whole batch as informational, which reduces the burden on law enforcement
We heard about an example where a platform classified a meme as CSAM, but NCMEC (and at least one law enforcement officer we spoke to about this meme) did not classify it as CSAM. NCMEC told the platform they did not classify the meme as CSAM, but according to NCMEC the platform said because they do consider it CSAM they were going to continue to report it. Because the platform is not consistently checking the “Potential Meme” box, law enforcement are still receiving it at scale and spending substantial time closing out these reports.
There is a related challenge when a platform neglects to mark content as “viral”. Most viral images are shared in outrage, not with an intent to harm. However, these viral images can be very graphic. The omission of the “viral” label can lead law enforcement to mistakenly prioritize these cases, unaware that the surge in reports stems from multiple individuals sharing the same image in dismay.
We spoke to one platform employee about the general challenge of a platform deeming a meme CSAM while NCMEC or law enforcement agencies disagree. They noted that everyone is doing their best to apply the Dost test. Additionally, there is no mechanism to get an assurance that a file is not CSAM: “No one blesses you and says you’ve done what you need to do. It’s a very unsettling place to be.” They added that different juries might come to different conclusions about what counts as CSAM, and if a platform fails to report a file that is later deemed CSAM the platform could be fined $300,000 and face significant public backlash: “The incentive is to make smart, conservative decisions.”
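One technical aside on the “minor changes in the image that change the hash value” problem mentioned above: systems built on exact cryptographic hashes treat any byte-level change as a brand-new file, which is exactly why perceptual-hashing approaches (PhotoDNA and the like) exist for this work. A minimal illustration, using SHA-256 purely as an example:

```python
import hashlib

original = b"...pretend image bytes..."
recompressed = b"...pretend image bytes,.."  # a trivial byte-level change

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(recompressed).hexdigest()

# To an exact-match system, these are two completely unrelated files.
print(h1 == h2)  # False
```

So a meme that gets recompressed, cropped, or captioned as it spreads can generate a stream of "new" files that never match the hash a platform already flagged, and the “Potential Meme” box never gets checked automatically.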
This is all pretty fascinating, and suggests that while there may be ways to improve the system, it’s difficult to structure it so that the incentives align properly.
And, again, the same incentives pressure the platforms to just overreport, no matter what:
Once a platform integrates with NCMEC’s CyberTipline reporting API, they are incentivized to overreport. Consider an explicit image of a 22-year-old who looks like they could be 17: if a platform identified the content internally but did not file a report and it turned out to be a 17-year-old, they may have broken the law. In such cases, they will err on the side of caution and report the image. Platform incentives are to report any content that they think is violative of the law, even if it has a low probability of prosecution. This conservative approach will also lead to reports from what Meta describes as “non-malicious users”—for instance, individuals sharing CSAM in outrage. Although such reports could theoretically yield new findings, such as uncovering previously unknown content, it is more likely that they overload the system with extraneous reports
All in all, the real lesson to be taken from this report is that this shit is super complicated, like all of trust & safety, and tradeoffs abound. But here it’s way more fraught than in most cases, in terms of the seriousness of the issue, the potential for real harm, and the potentially destructive criminal penalties involved.
The report has some recommendations, though they mostly seem to deal with things at the margins: increase funding for NCMEC, allow it to update its technology (and hire the staff to do so), and have some more information to help platforms get set up.
Of course, what’s notable is that this does not include things like “make platforms liable for any mistake they make.” That’s because, as the report shows, most platforms already take this stuff pretty seriously, and the liability is already very clear. That clarity is why they so often over-report to avoid it, which actually makes the results worse by overwhelming both NCMEC and law enforcement.
All in all, this report is a hugely important contribution to this discussion, and provides a ton of real-world information about the CyberTipline that was previously known basically only to people working on it, leaving many observers, media, and policymakers in the dark.
It would be nice if Congress read this report and understood the issues. However, when it comes to things like CSAM, expecting anyone to bother reading a big report and grappling with the tradeoffs and nuances is probably asking too much.
Filed Under: csam, cybertipline, incentives, overreporting
Companies: ncmec
Our Online Child Abuse Reporting System Is Overwhelmed, Because The Incentives Are Screwed Up & No One Seems To Be Able To Fix Them
from the mismatched-incentives-are-the-root-of-all-problems dept
The system meant to stop online child exploitation is failing — and misaligned incentives are to blame. Unfortunately, today’s political solutions, like KOSA and STOP CSAM, don’t even begin to grapple with any of this. Instead, they prefer to put in place solutions that could make the incentives even worse.
The Stanford Internet Observatory has spent the last few months doing a very deep dive on how the CyberTipline works (and where it struggles). It has released a big and important report detailing its findings. In writing up this post about it, I kept adding more and more, to the point that I finally decided it made sense to split it up into two separate posts to keep things manageable.
This first post covers the higher level issue: what the system is, why it works the way it does, and how the incentive structure of the system is completely messed up (even if it was done with good intentions), and how that’s contributed to the problem. A follow-up post will cover the more specific challenges facing NCMEC itself, law enforcement, and the internet platforms themselves (who often take the blame for CSAM, when that seems extremely misguided).
There is a lot of misinformation out there about the best way to fight and stop the creation and spread of child sexual abuse material (CSAM). It’s unfortunate because it’s a very real and very serious problem. Yet the discussion about it is often so disconnected from reality as to be not just unhelpful, but potentially harmful.
In the US, the system that was set up is the CyberTipline, which is run by NCMEC, the National Center for Missing & Exploited Children. It’s a private nonprofit, but it has a close connection with the US government, which helped create it. At times, there has been some confusion about whether or not NCMEC is a government agent. The entire setup was designed to keep it non-governmental, to avoid 4th Amendment issues with the information it collects, but courts haven’t always seen it that way, which makes things tricky (even though the 4th Amendment concerns are important).
And while the system was designed for the US, it has become a de facto global system, since so many of the companies are US-based, and NCMEC will, when it can, send relevant details to foreign law enforcement as well (though, as the report details, that doesn’t always work well).
The main role CyberTipline has taken on is coordination. It takes in reports of CSAM (mostly, but not entirely, from internet platforms) and then, when relevant, hands off the necessary details to the (hopefully) correct law enforcement agency to handle things.
Companies that host user-generated content have certain legal requirements to report CSAM to the CyberTipline. As we discussed in a recent podcast, this role as a “mandatory reporter” is important in providing useful information to allow law enforcement to step in and actually stop abusive behavior. Because of the “government agent” issue, it would be unconstitutional to require social media platforms to proactively search for and identify CSAM (though many do use tools to do this). However, if they do find some, they must report it.
Unfortunately, the mandatory reporting has also allowed the media and politicians to use the number of reports sent in by social media companies in a misleading manner, suggesting that the mere fact that these companies find and report to NCMEC means that they’re not doing enough to stop CSAM on their platforms.
This is problematic because it creates a dangerous incentive, suggesting that internet services should actually not report CSAM they found, as politicians and the media will falsely portray a lot of reports as being a sign of a failure by the platforms to take this seriously. The reality is that the failure to take things seriously comes from the small number of platforms (Hi Telegram!) who don’t report CSAM at all.
Some of us on the outside have thought that the real issue was on the receiving end: NCMEC and law enforcement haven’t been able to do enough that’s productive with the reports they get. It seemed convenient for the media and politicians to just blame social media companies for doing what they’re supposed to do (reporting CSAM), ignoring that what happens on the back end of the system might be the real problem. That’s why things like Senator Ron Wyden’s Invest in Child Safety Act seemed like a better approach than things like KOSA or the STOP CSAM Act.
That’s because the approach of KOSA/STOP CSAM and some other bills is basically to add liability to social media companies. (These companies already do a ton to prevent CSAM from appearing on the platform and alert law enforcement via the CyberTipline when they do find stuff.) But that’s useless if those receiving the reports aren’t able to do much with them.
What becomes clear from this report is that while there are absolutely failures on the law enforcement side, some of that is effectively baked into the incentive structure of the system.
In short, the report shows that the CyberTipline is very helpful in engaging law enforcement to stop some child sexual abuse, but it’s not as helpful as it might otherwise be:
Estimates of how many CyberTipline reports lead to arrests in the U.S. range from 5% to 7.6%.
This number may sound low, but I’ve been told it’s not as bad as it sounds. First of all, when a large number of the reports are for content that is overseas and not in the US, it’s more difficult for law enforcement here to do much about it (though, again, the report details some suggestions on how to improve this). Second, some of the content may be very old, where the victim was identified years (or even decades) ago, and where there’s less that law enforcement can do today. Third, there is a question of prioritization, with it being a higher priority to target those directly abusing children. But, still, as the report notes, almost everyone thinks that the arrest number could go higher if there were more resources in place:
Empirically, it is unknown what percent of reports, if fully investigated, would lead to the discovery of a person conducting hands-on abuse of a child. On the one hand, as an employee of a U.S. federal department said, “Not all tips need to lead to prosecution […] it’s like a 911 system.” On the other hand, there is a sense from our respondents—who hold a wide array of beliefs about law enforcement—that this number should be higher. There is a perception that more than 5% of reports, if fully investigated, would lead to the discovery of hands-on abuse.
The report definitely suggests that if NCMEC had more resources dedicated to the CyberTipline, it could be more effective:
NCMEC has faced challenges in rapidly implementing technological improvements that would aid law enforcement in triage. NCMEC faces resource constraints that impact salaries, leading to difficulties in retaining personnel who are often poached by industry trust and safety teams.
There appear to be opportunities to enrich CyberTipline reports with external data that could help law enforcement more accurately triage tips, but NCMEC lacks sufficient technical staff to implement these infrastructure improvements in a timely manner. Data privacy concerns also affect the speed of this work.
But, before we get into the specific areas where things can be improved in the follow-up post, I thought it was important to highlight how the incentives of this system contribute to the problem, where there isn’t necessarily an easy solution.
While companies (Meta, mainly, since it represents, by a very wide margin, the largest number of reports to the CyberTipline) keep getting blamed for failing to stop CSAM because of their large number of reports, most companies have very strong incentives to report anything they find. This is because the cost of not reporting something they should have reported is massive (criminal penalties), whereas the cost of over-reporting is nothing to the companies. That means there’s a real problem with overreporting.
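The asymmetry is easy to see in a back-of-the-envelope model. The $300,000 figure is the fine mentioned in the report; the probability is a made-up assumption for illustration:

```python
# Illustrative expected-cost model of a platform's reporting decision.
# The fine is the figure cited in the report; the probability below
# is an assumption, chosen only to show the asymmetry.
FINE_FOR_MISSED_REPORT = 300_000  # penalty if unreported content was CSAM
COST_TO_FILE_REPORT = 0           # platforms bear essentially no direct cost

def expected_cost_of_not_reporting(p_csam: float) -> float:
    """Expected penalty if the platform skips the report."""
    return p_csam * FINE_FOR_MISSED_REPORT

# Even at a 1% chance the content is actually CSAM, skipping the report
# costs $3,000 in expectation vs. ~$0 to file it -- so platforms file.
print(expected_cost_of_not_reporting(0.01))  # 3000.0
```

Under this toy model, the rational platform reports at any non-trivial probability, which is exactly the flood of low-confidence reports the rest of the system then has to absorb.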
Of course, there is a real cost here. CyberTipline employees get overwhelmed, and that can mean that reports that should get prioritized and passed on to law enforcement don’t. So you can argue that while the cost of over-reporting is “nothing” to the companies, the cost to victims and society at large can be quite large.
That’s an important mismatch.
But the broken incentives go further. When NCMEC hands off reports to law enforcement, they often go through a local ICAC (Internet Crimes Against Children) Task Force, which helps triage them and find the right state or local law enforcement agency to handle each report. Law enforcement agencies that are “affiliated” with ICACs receive special training on how to handle reports from the CyberTipline. But, apparently, at least some of them feel that it’s just too much work, or (in some cases) too burdensome to investigate. As a result, some agencies are choosing not to affiliate with their local ICACs, and some have even “unaffiliated” themselves, just to avoid the added work.
In some cases, there are even reports of law enforcement unaffiliating with an ICAC out of a fear of facing liability for not investigating an abused child quickly enough.
A former Task Force officer described the barriers to training more local Task Force affiliates. In some cases local law enforcement perceive that becoming a Task Force affiliate is expensive, but in fact the training is free. In other cases local law enforcement are hesitant to become a Task Force affiliate because they will be sent CyberTipline reports to investigate, and they may already feel like they have enough on their plate. Still other Task Force affiliates may choose to unaffiliate, perceiving that the CyberTipline reports they were previously investigating will still get investigated at the Task Force, which further burdens the Task Force. Unaffiliating may also reduce fear of liability for failing to promptly investigate a report that would have led to the discovery of a child actively being abused, but the alternative is that the report may never be investigated at all.
[….]
This liability fear stems from a case where six months lapsed between the regional Task Force receiving NCMEC’s report and the city’s police department arresting a suspect (the abused children’s foster parent). In the interim, neither of the law enforcement agencies notified child protective services about the abuse as required by state law. The resulting lawsuit against the two police departments and the state was settled for $10.5 million. Rather than face expensive liability for failing to prioritize CyberTipline reports ahead of all other open cases, even homicide or missing children, the agency might instead opt to unaffiliate from the ICAC Task Force.
This is… infuriating. Cops declining to affiliate (i.e., to get the training that would let them help) or removing themselves from an ICAC Task Force because they’re afraid of being sued if they don’t save abused kids quickly enough is ridiculous. It’s yet another example of cops running away rather than doing the job they’re supposed to be doing, but which they claim they have no obligation to do.
That’s just one problem of many in the report, which we’ll get into in the second post. But, on the whole, it seems pretty clear that with the incentives this far out of whack, bills like KOSA or STOP CSAM aren’t going to be of much help. What’s necessary is actually tackling the underlying issues: the funding, the technology, and (most of all) the incentive structures.
Filed Under: csam, cybertipline, icac, incentives, kosa, law enforcement, liability, stop csam
Companies: ncmec