Pretty Much Every Expert Agrees That Elon Has Made Twitter’s Child Sexual Abuse Problem Worse

from the not-great,-bob dept

About a month ago, we wrote an article pulling together a variety of sources, including an NBC News investigation, suggesting that Elon Musk’s Twitter was doing a terrible job dealing with child sexual abuse material (CSAM) on the platform. This was contrary to the claims of a few very vocal Elon supporters, including one who somehow got an evidence-free article published in a major news publication insisting that Musk had magically “solved” the CSAM issue, despite firing most of the people who worked on it. As we noted last month, it actually appeared that Elon’s Twitter was not just failing to deal with CSAM (which is a massive challenge on any platform), but was actively going backwards and making the problem much, much worse.

Last week, Senator Dick Durbin released a letter he sent to the Attorney General, asking the DOJ to investigate Twitter’s failures at stopping CSAM.

I write to express my grave concern that Twitter is failing to prevent the selling and trading of child sexual abuse material (CSAM) on its platform and to urge the Department of Justice (DOJ) to take all appropriate actions to investigate, deter, and stop this activity, which has no protections under the First Amendment, and violates federal criminal law.

The last two points are important: CSAM is (obviously) not protected speech, and because it violates federal criminal law, Section 230 is not relevant (the law explicitly exempts federal criminal law, a point lots of 230 haters seem to forget). Of course, there is still the issue of knowledge: you can’t hold a platform liable for material it didn’t know about. But deliberately turning a blind eye to CSAM, while stating publicly that fighting it is the number one priority, is still really bad.

Now, a NY Times investigation has gone much, much further into this issue and found, as NBC News did, that Twitter isn’t just failing to deal with CSAM; it has made a ton of really, really questionable decisions about how it handles the problem. The NY Times report notes that it used tools to investigate CSAM on Twitter without viewing the material itself. While it doesn’t go into detail, from what’s stated, it sounds like the reporters wrote software to identify potential CSAM, without looking at it, and then forwarded the accounts to the Canadian Center for Child Protection and to Microsoft, which created and runs PhotoDNA, the hash-matching tool that many large companies use to identify CSAM on their platforms and report it to NCMEC (the National Center for Missing and Exploited Children) in the US, to the Canadian Center in Canada, and to similar organizations elsewhere. And what they found is not great:

To assess the company’s claims of progress, The Times created an individual Twitter account and wrote an automated computer program that could scour the platform for the content without displaying the actual images, which are illegal to view. The material wasn’t difficult to find. In fact, Twitter helped promote it through its recommendation algorithm — a feature that suggests accounts to follow based on user activity.

Among the recommendations was an account that featured a profile picture of a shirtless boy. The child in the photo is a known victim of sexual abuse, according to the Canadian Center for Child Protection, which helped identify exploitative material on the platform for The Times by matching it against a database of previously identified imagery.

That same user followed other suspicious accounts, including one that had “liked” a video of boys sexually assaulting another boy. By Jan. 19, the video, which had been on Twitter for more than a month, had gotten more than 122,000 views, nearly 300 retweets and more than 2,600 likes. Twitter later removed the video after the Canadian center flagged it for the company.

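For those curious how this kind of scanning can work without anyone ever viewing the material: the general pattern, which tools like PhotoDNA and the Canadian center’s database enable, is to compute a fingerprint (a hash) of each file and compare it against a database of fingerprints of previously identified imagery. Below is a minimal, hypothetical sketch of that pattern in Python, using a plain cryptographic hash and made-up placeholder values; real systems such as PhotoDNA use proprietary perceptual hashes that survive resizing and re-encoding, which this sketch does not. The point is simply that matching happens entirely on fingerprints, so known material can be flagged and reported without anyone having to look at it.

```python
# Minimal sketch of hash-based matching against a database of known imagery.
# NOT PhotoDNA: a plain SHA-256 only matches exact byte-for-byte copies,
# while real perceptual hashes are robust to resizing and re-encoding.

import hashlib

# Hypothetical hash list supplied by a clearinghouse (placeholder value).
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}


def hash_image(data: bytes) -> str:
    """Fingerprint raw image bytes without ever rendering the image."""
    return hashlib.sha256(data).hexdigest()


def is_known_match(data: bytes) -> bool:
    """True if the file's fingerprint appears in the known-content database."""
    return hash_image(data) in KNOWN_HASHES


if __name__ == "__main__":
    sample = b"placeholder image bytes"  # stand-in for downloaded media
    print(is_known_match(sample))        # False for this placeholder
```
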
Even Twitter’s responses to notices from the organizations dealing with this stuff did not go well:

One account in late December offered a discounted “Christmas pack” of photos and videos. That user tweeted a partly obscured image of a child who had been abused from about age 8 through adolescence. Twitter took down the post five days later, but only after the Canadian center sent the company repeated notices.

As an aside, I’m curious how all the people insisting that no government agency should ever alert Twitter to content that might be illegal or violate its policies feel about the Canadian Center alerting Twitter to CSAM on its platform.

And the article notes that Twitter seems to be missing a lot of material that is easily findable by organizations with access to these kinds of tools:

The center also did a broader scan against the most explicit videos in their database. There were more than 260 hits, with more than 174,000 likes and 63,000 retweets.

“The volume we’re able to find with a minimal amount of effort is quite significant,” said Lloyd Richardson, the technology director at the Canadian center. “It shouldn’t be the job of external people to find this sort of content sitting on their system.”

Even more worrisome: the NY Times report notes that Twitter relies on a detection tool from Thorn, the well-known organization that builds technology to fight child trafficking. Yet for all of Musk’s claims about how fighting this stuff is job number one… Twitter stopped paying Thorn. And, even more damning, Twitter has stopped sharing information back with Thorn that would improve its tool and help it find and stop more CSAM:

To find the material, Twitter relies on software created by an anti-trafficking organization called Thorn. Twitter has not paid the organization since Mr. Musk took over, according to people familiar with the relationship, presumably part of his larger effort to cut costs. Twitter has also stopped working with Thorn to improve the technology. The collaboration had industrywide benefits because other companies use the software.

Also eye-opening: while Twitter claims it is removing more of this content than ever, its reports to NCMEC don’t match that claim and have dropped massively, raising serious concerns at the organization:

The company has not reported to the national center the hundreds of thousands of accounts it has suspended because the rules require that they “have high confidence that the person is knowingly transmitting” the illegal imagery and those accounts did not meet that threshold, Ms. Irwin said.

Mr. Shehan of the national center disputed that interpretation of the rules, noting that tech companies are also legally required to report users even if they only claim to sell or solicit the material. So far, the national center’s data show, Twitter has made about 8,000 reports monthly, a small fraction of the accounts it has suspended.

NCMEC also saw Twitter’s responsiveness dwindle (though in January it seemed to pick back up a bit):

After the transition to Mr. Musk’s ownership, Twitter initially reacted more slowly to the center’s notifications of sexual abuse content, according to data from the center, a delay of great importance to abuse survivors, who are revictimized with every new post. Twitter, like other social media sites, has a two-way relationship with the center. The site notifies the center (which can then notify law enforcement) when it is made aware of illegal content. And when the center learns of illegal content on Twitter, it alerts the site so the images and accounts can be removed.

Late last year, the company’s response time was more than double what it had been during the same period a year earlier under the prior ownership, even though the center sent it fewer alerts. In December 2021, Twitter took an average of 1.6 days to respond to 98 notices; last December, after Mr. Musk took over the company, it took 3.5 days to respond to 55. By January, it had greatly improved, taking 1.3 days to respond to 82.

The Canadian center, which serves the same function in that country, said it had seen delays as long as a week. In one instance, the Canadian center detected a video on Jan. 6 depicting the abuse of a naked girl, age 8 to 10. The organization said it sent out daily notices for about a week before Twitter removed the video.

None of this is particularly encouraging, especially on a topic so important.

It appears that foreign regulators are taking notice as well:

Ms. Inman Grant, the Australian regulator, said she had been unable to communicate with local representatives of the company because her agency’s contacts in Australia had quit or been fired since Mr. Musk took over. She feared that the staff reductions could lead to more trafficking in exploitative imagery.

“These local contacts play a vital role in addressing time-sensitive matters,” said Ms. Inman Grant, who was previously a safety executive at both Twitter and Microsoft.

Again, dealing with CSAM is one of the most critical, and challenging, parts of any trust & safety team’s job at any website that allows user content. There is no “perfect” solution, and there will always be scenarios where some content is missed. So, in general, I’ve been hesitant to highlight articles (which come along with some frequency) insisting that because reporters or researchers were able to find some CSAM, the site “isn’t doing enough,” because that’s rarely an accurate portrayal.

However, this NY Times piece goes way beyond that. It didn’t just find content; it found empirical evidence of Twitter reacting more slowly than in the past, failing to report material it should be reporting to the organizations set up for that purpose, cutting Thorn off from both payment and collaboration, and more.

All of which adds up to pretty compelling evidence that for all of Musk’s lofty talk of fighting CSAM being job number one, the company has actually gone not just a little backwards on this issue, but dangerously so.

Filed Under: csam, dick durbin, ella irwin, elon musk
Companies: thorn, twitter