As Elon Fires More Trust & Safety Staff, Twitter’s Moderation Efforts Fall Apart
from the this-is-infuriating dept
Despite having already fired a huge percentage of Twitter’s trust & safety team handling content moderation, including the teams dealing with child sexual abuse material and election denialism, last week Elon apparently fired another chunk of the team. Just in time for organizers of the insurrection in Brazil to use social media to coordinate.
Researchers in Brazil said Twitter in particular was a place to watch because it is heavily used by a circle of right-wing influencers — Bolsonaro allies who continue to promote election fraud narratives. Several influencers have had their accounts banned in Brazil and now reside in the United States. Bolsonaro himself was on vacation in Florida on Sunday.
Still, as the article notes, the planning seemed to happen on many platforms, so it’s not as if Twitter was the only one. But perhaps more serious is the issue of child sexual abuse material. There has been this weird narrative making the rounds that Twitter, under the previous regime, did not take the issue seriously. And that since Elon took over, it has done much more to stop CSAM. Both parts of this narrative appear to be false.
Experts who used to work with Twitter on this specific issue say that the teams handling it have been mostly fired, as Elon insists that automation will somehow work in their place (note: automation is good at finding repeat content that has already been added to shared hash databases, but… not good at all at catching new content). It does not sound like things are going well.
The ex-employee outlined to CNA how automated machine-learning models often struggle to catch up with the evolving modus operandi of perpetrators of child sexual abuse material.
Trading such content on Twitter involves treading a fine line between being obvious enough to prospective “buyers” yet subtle enough to avoid detection.
In practice, this means speaking in codewords that are ever-changing, to try and evade enforcement.
And so abusers are able to stay ahead of Twitter’s efforts:
With fewer content moderators and domain specialists in Twitter to keep track of such changes, there’s a danger that abusers will take the opportunity to coordinate yet another new set of codewords that automated systems and a smaller team cannot quickly pick up, said the ex-employee.
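To make that gap concrete, here is a minimal, purely illustrative sketch in Python. It is not anything Twitter actually runs (real systems use perceptual hashes like PhotoDNA rather than SHA-256, and far more sophisticated classifiers), but it shows why hash databases and static codeword lists only catch what humans have already catalogued:

```python
import hashlib

# Hypothetical stand-in for a database of hashes of previously identified material.
# Real systems use perceptual hashes (PhotoDNA-style), not SHA-256, but the failure
# mode is the same: only content already in the database can match.
KNOWN_HASHES = {
    hashlib.sha256(b"previously-reported-file-bytes").hexdigest(),
}

# Hypothetical static blocklist of codewords moderators knew about at some point in time.
KNOWN_CODEWORDS = {"oldterm1", "oldterm2"}

def matches_known_content(file_bytes: bytes) -> bool:
    """True only if this exact content was already catalogued."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_HASHES

def contains_known_codeword(text: str) -> bool:
    """True only if the post uses a term the blocklist already covers."""
    return any(word in KNOWN_CODEWORDS for word in text.lower().split())

# A re-upload of catalogued content is caught...
print(matches_known_content(b"previously-reported-file-bytes"))  # True
# ...but brand-new content sails through, because its hash isn't in the database.
print(matches_known_content(b"never-seen-before-file-bytes"))    # False

# Likewise, a post using a codeword humans already flagged is caught...
print(contains_known_codeword("selling oldterm1 dm me"))         # True
# ...but a newly coined codeword is invisible until someone updates the list.
print(contains_known_codeword("selling freshterm dm me"))        # False
```

Keeping those databases and lists current is exactly the human work that domain specialists and content moderators do, which is why cutting those teams leaves the automation chasing yesterday’s codewords.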
We’ve also heard some other disturbing claims from inside Twitter, including that it has significantly cut back the support its trust & safety staff receive, such as the counseling that frontline workers dealing with this material need. This is, of course, always the awful tradeoff with these kinds of roles: you need people in the process, but it’s a terrible job that can cause real post-traumatic stress for those employees.
The same article notes that the automated takedowns are actually causing other problems, like suppressing victims speaking out about what happened to them:
“A victim drawing attention to their plight, having no easy way to do so and in a compromised situation or state of mind, might easily use problematic hashtags and keywords,” they said.
Failure to distinguish such uses of language could conversely end up silencing and re-victimising those suffering from child sexual abuse, they said.
Indeed, after that article came out, an NBC investigation showed that (again, contrary to the narrative) new Twitter does not appear to be particularly effective at dealing with CSAM.
The accounts seen by NBC News promoting the sale of CSAM follow a known pattern. NBC News found tweets posted as far back as October promoting the trade of CSAM that are still live — seemingly not detected by Twitter — and hashtags that have become rallying points for users to provide information on how to connect on other internet platforms to trade, buy and sell the exploitative material.
In the tweets seen by NBC News, users claiming to sell CSAM were able to avoid moderation with thinly veiled terms, hashtags and codes that can easily be deciphered.
Some of the tweets are brazen and their intention was clearly identifiable (NBC News is not publishing details about those tweets and hashtags so as not to further amplify their reach). While the common abbreviation “CP,” a ubiquitous shortening of “child porn” used widely online, is unsearchable on Twitter, one user who had posted 20 tweets promoting their materials used another searchable hashtag and wrote “Selling all CP collection,” in a tweet published on Dec. 28. The tweet remained up for a week until the account appeared to be suspended following NBC News’ outreach to Twitter. A search Friday found similar tweets still remaining on the platform. Others used keywords associated with children, replacing certain letters with punctuation marks like asterisks, instructing users to direct message their accounts. Some accounts even included prices in the account bios and tweets.
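The asterisk trick NBC describes works because a literal keyword match never sees the blocked string. A toy illustration (hypothetical terms and logic, not Twitter’s actual filters) of why naive matching fails and what a simple normalization step buys:

```python
import re

# Hypothetical example: a naive filter that only blocks the exact standalone term "cp"
# can be evaded by punctuation substitutions like "c*p".
BLOCKED_TERMS = {"cp"}

def naive_filter(text: str) -> bool:
    # Matches only the literal term as a whole word.
    return any(term in text.lower().split() for term in BLOCKED_TERMS)

def normalized_filter(text: str) -> bool:
    # Strip punctuation/asterisk substitutions before matching -- one of many
    # normalization tricks a moderation pipeline might apply.
    cleaned = re.sub(r"[^a-z0-9\s]", "", text.lower())
    return any(term in cleaned.split() for term in BLOCKED_TERMS)

print(naive_filter("selling c*p collection"))       # False -- evades the naive check
print(normalized_filter("selling c*p collection"))  # True  -- caught after normalization
```

Even normalization only goes so far, of course: once a term is reliably caught, sellers move to a new codeword, and the cycle starts over.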
CSAM is a massive issue on any social media platform. There is no “solution” that will stop it entirely; it’s an ever-evolving challenge that many companies work on, continually adjusting their approaches because the perpetrators are constantly adapting as well. Twitter used to be one of the leading companies in responding to that challenge, but now the opposite appears to be true.
Filed Under: content moderation, csam, elon musk, trust & safety
Companies: twitter