Twitter suspended 400K for child abuse content but only reported 8K to police

Cutting corners on child safety

Twitter’s internal detection of child sexual abuse materials may be failing.

Credit: NurPhoto / Contributor | NurPhoto

Last week, Twitter Safety tweeted that the platform is now “moving faster than ever” to remove child sexual abuse materials (CSAM). It seems, however, that’s not entirely accurate. Child safety advocates told The New York Times that after Elon Musk took over, Twitter started taking twice as long to remove CSAM flagged by various organizations.

The platform has since improved and is now removing CSAM almost as fast as it was before Musk’s takeover, responding to reports in less than two days, The Times reported. But there still seem to be issues with its CSAM reporting system that continue to delay response times. In one concerning case, a Canadian organization spent a week notifying Twitter daily—as the illegal imagery of a victim younger than 10 spread unchecked—before Twitter finally removed the content.

"From our standpoint, every minute that that content's up, it's re-victimizing that child," Gavin Portnoy, vice president of communications for the National Center for Missing and Exploited Children (NCMEC), told Ars. "That's concerning to us."

Twitter trust and safety chief Ella Irwin tweeted last week that combating CSAM is “incredibly hard,” but remains Twitter Safety’s “No. 1 priority.” Irwin told The Times that despite challenges, Twitter agrees with experts and is aware that much more can be done to proactively block exploitative materials. Experts told the Times that Twitter’s understaffing of its trust and safety team is a top concern, and sources confirmed that Twitter has stopped investing in partnerships and technology that were previously working to improve the platform’s effectiveness at rapidly removing CSAM.

“In no way are we patting ourselves on the back and saying, ‘Man, we’ve got this nailed,’” Irwin told The Times.

Twitter did not respond to Ars’ request for comment.

Red flags raised by Twitter’s low-budget CSAM strategy

Twitter Safety tweeted that in January, Twitter suspended approximately 404,000 accounts that created, distributed, or engaged with CSAM. This was 112 percent more account suspensions than the platform reported in November, Twitter said, backing up its claim that it has been moving “faster than ever.”

In the same tweet thread, Twitter promised that the company has been “building new defenses that proactively reduce the discoverability” of tweets spreading CSAM. The company did not provide much clarity on what these new defense measures included, only reporting a vague claim that one such new defense against child sexual exploitation (CSE) “reduced the number of successful searches for known CSE patterns by over 99% since December.”

Portnoy told Ars that NCMEC is concerned that what Twitter is publicly reporting doesn't match what NCMEC sees in its own Twitter data from its cyber tipline.

"You've got Ella Irwin out there saying that they're taking down more than ever, it's priority number one, and what we're seeing on our end, our data isn't showing that," Portnoy told Ars.

Other child safety organizations have raised some red flags over how Twitter has been handling CSAM in this same time period. Sources told the Times that Twitter has stopped paying for CSAM-detection software built by an anti-trafficking organization, Thorn, while also cutting off any continued collaboration on improving that software. Portnoy confirmed to Ars that NCMEC and Twitter remain seemingly divided by a disagreement over Twitter’s policy not to report to authorities all suspended accounts spreading CSAM.

Of the roughly 404,000 accounts suspended in January, Twitter reported only about 8,000 to authorities. Irwin told the Times that Twitter is only obligated to report suspended accounts to authorities when the company has “high confidence that the person is knowingly transmitting” CSAM. Any accounts claiming to be selling or distributing CSAM off of Twitter—but not directly posting CSAM on Twitter—seemingly don’t meet Twitter’s threshold for reporting to authorities. Irwin confirmed that most Twitter account suspensions “involved accounts that engaged with the material or were claiming to sell or distribute it, rather than those that posted it,” the Times reported.

Portnoy said that the reality is that these account suspensions "very much do warrant cyber tips."

"If we can get that information, we might be able to get the child out of harm's way or give something actionable to law enforcement, and the fact that we're not seeing that stuff is concerning," Portnoy told Ars.

The Times wanted to test how well Twitter was combating CSAM, so it built an automated computer program to detect CSAM without displaying any illegal imagery, partnering with the Canadian Center for Child Protection to cross-reference the material it found against illegal content previously identified in the center’s database. The Canadian center’s executive director, Lianna McDonald, tweeted this morning to encourage more child safety groups to speak out, as Twitter seemingly becomes a platform of choice for dark web users who openly discuss strategies for finding CSAM on the site.

“This reporting begs the question: Why is it that verified CSAM (i.e., images known to industry and police) can be uploaded and hosted on Twitter without being immediately detected by image or video blocking technology?” McDonald tweeted. “In addition to the issue of image and video detection, Twitter also has a problem with the way it is used by offenders to promote, in plain sight, links to CSAM on other websites.”

While Irwin seems confident that Twitter is “getting a lot better” at moderating CSAM, some experts told the Times that Twitter wasn’t even taking basic steps to prioritize child safety as the company claims it has been. Lloyd Richardson, the technology director at the Canadian center, which ran its own scan for CSAM on Twitter to complement the Times' analysis, told the Times that “the volume we’re able to find with a minimal amount of effort is quite significant.”

What steps should Twitter be taking?

McDonald recommended that Twitter combat CSAM more proactively by implementing tools like age verification systems to prevent anonymous uploads and investigating whether its proactive detection systems are “making full and complete use of available hash databases” to stop previously identified CSAM from spreading.

Alex Stamos, the director of the Stanford Internet Observatory, told Ars that because Twitter has access to the same Microsoft PhotoDNA API that the Times used to readily detect CSAM on the platform, Twitter should investigate whether its internal scanning tools are failing. To get Twitter back on track—as word spreads on the dark web that it’s easy to find CSAM on Twitter—Stamos also recommended rebuilding “the child safety team that was decimated by Twitter’s staff cuts.” But Stamos said that won’t be easy.

“These are very hard people to hire, as the number of folks with the technical or investigatory capabilities you need who are willing to do this work is very small, and Musk’s public behavior has probably driven several qualified people away from considering the roles,” Stamos told Ars.

Irwin did not comment on tensions rising from Twitter’s wavering partnerships with child safety organizations but told the Times that Twitter has been expanding its child safety team.

Whether Twitter users can trust Irwin’s word on safety issues was a question raised in December, when Musk contradicted her statements about another Twitter safety feature that had provided resources for users who may be considering self-harm or suicide. Musk dismissed as “fake news” Irwin’s statement that the resource had been temporarily removed, and the matter was seemingly dropped quickly, with Irwin continuing on as trust and safety chief. Twitter never reconciled the contradicting accounts, though Twitter’s Community Notes feature backed Musk’s version as the truth, not Irwin’s.

Irwin staying on despite having her comments disavowed by Musk is likely a comfort to some child safety organizations worried about Twitter’s thinned child safety staffing. NCMEC executive John Shehan told the Times that he was concerned about Twitter’s “high-level turnover,” partly because that turnover makes it harder to track the company’s stance on “trust and safety and their commitment to identifying and removing child sexual abuse material from their platform.”

Portnoy told Ars that despite the uptick in moderation in January, NCMEC is still seeing degraded response times from Twitter on removing CSAM. NCMEC would prefer to see illegal content removed within minutes or hours, not days, and Portnoy said Twitter had been moving in that direction until NCMEC observed response times drop just after Musk took over.

"What the heck happened?" Portnoy said. "They were doing really good. Then all this change happened, and now they're not so good."

Listing image: NurPhoto / Contributor | NurPhoto

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.
