
Technology platforms, digital rights nonprofits, law enforcement agencies, and policymakers all agree on very few things. But on that short list is the moral imperative to fight child pornography.

In 2023, the National Center for Missing and Exploited Children (NCMEC), a US nonprofit that manages the national clearinghouse for suspected cases of child victimization, received over 36.2 million reports containing over 105 million files. While some tools exist to help workers sift through them, much of that review is still done manually, a task that is slow, tedious, and emotionally challenging.

Rebecca Portnoff, 34, has spent the past decade trying to solve this problem. Portnoff is the vice president of data science at Thorn, a technology nonprofit focused on fighting child exploitation. She began working on this issue while completing her doctorate in computer science at the University of California, Berkeley. Today, Portnoff leads a team of seven data scientists who apply machine learning algorithms to identify child pornography, spot potential victims, and flag grooming behaviors.

Portnoff developed Safer, a tool that lets tech platforms and frontline organizations scan images for known examples of child pornography using cryptographic and perceptual hashing, then report them to NCMEC. A new version of the tool also incorporates natural language processing to identify text conversations aimed at grooming new victims. Building that classifier required Portnoff's team to develop new training data sets, and it will launch this year.
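
The hash-matching step can be illustrated with a short sketch. This is not Thorn's implementation; it is a minimal example of the general technique, assuming the Python Pillow and ImageHash libraries, with hypothetical file paths and a hypothetical match threshold. A cryptographic hash (SHA-256 here) catches exact copies of a known file, while a perceptual hash still matches when an image has been resized, re-encoded, or lightly edited.

```python
import hashlib
from pathlib import Path

import imagehash           # pip install ImageHash
from PIL import Image      # pip install Pillow

# Hypothetical threshold: maximum Hamming distance (in bits) at which two
# 64-bit perceptual hashes are treated as the same underlying image.
MATCH_THRESHOLD = 8


def sha256_of(path: Path) -> str:
    """Cryptographic hash: flags byte-for-byte copies of a known file."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def phash_of(path: Path) -> imagehash.ImageHash:
    """Perceptual hash: stays close under resizing, re-encoding, or light edits."""
    return imagehash.phash(Image.open(path))


def matches_known(candidate: Path,
                  known_sha256: set[str],
                  known_phashes: list[imagehash.ImageHash]) -> bool:
    """Return True if the file matches any known item, exactly or approximately."""
    if sha256_of(candidate) in known_sha256:
        return True  # exact duplicate of a verified file
    candidate_hash = phash_of(candidate)
    # ImageHash overloads subtraction to return the Hamming distance.
    return any(candidate_hash - known <= MATCH_THRESHOLD for known in known_phashes)


if __name__ == "__main__":
    # Hypothetical paths: a folder of previously verified files and a new upload.
    verified = [Path(p) for p in ("verified/a.jpg", "verified/b.jpg")]
    known_sha256 = {sha256_of(p) for p in verified}
    known_phashes = [phash_of(p) for p in verified]
    print(matches_known(Path("upload.jpg"), known_sha256, known_phashes))
```

The grooming-detection side can be sketched in the same spirit. The article does not say what tooling Thorn uses, so the pipeline below is an assumption built on scikit-learn, with placeholder training strings standing in for the purpose-built data sets the team developed:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder data: a real system would train on large, carefully vetted corpora.
texts = ["example of an ordinary chat message", "example of a risky chat message"]
labels = [0, 1]  # 0 = benign, 1 = potential grooming behavior

# TF-IDF word and bigram features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score a new message: probability that it belongs to the risky class.
print(model.predict_proba(["a new message to screen"])[0][1])
```

In a deployment of this kind, a score above some threshold would typically route the conversation to a human reviewer rather than trigger automatic action.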

Portnoff also keeps an eye on emerging threats, such as generative AI. Current tools typically combat child pornography by comparing files against images and videos that have already been identified. They are not designed to detect newly created material, nor can they distinguish real images of victims from AI-generated ones, which may require law enforcement officials to spend extra time determining authenticity.

Portnoff put together a working group to study the impact of generative AI, and last year she co-published a paper with the Stanford Internet Observatory that documented a small but significant uptick in AI-generated child pornography. As a result, she drafted guidelines for preventing AI tools from generating and spreading child pornography and persuaded 10 big tech and AI companies, including OpenAI, Anthropic, Amazon, and Meta, to commit to those principles.

It’s this systems approach to tackling the problem at all stages—from identifying harmful content that’s already out there, to making it harder to create more—that will ultimately make the difference, Portnoff says, so that “we’re not always playing Whac-A-Mole.”