
How Thorn’s classifiers use artificial intelligence to build a safer internet

July 11, 2023

5 Minute Read

Child sexual abuse material (CSAM) is the documentation of the horrific sexual abuse of children. In 2023 alone, the CyberTipline run by the National Center for Missing and Exploited Children (NCMEC) received more than 36.2 million reports of suspected child sexual exploitation. The viral spread of CSAM is a problem that grows exponentially, one that we at Thorn and others in the child safety ecosystem are working tirelessly to solve.

New CSAM is produced and uploaded to online platforms every day, and it often depicts a child who is actively being abused. CSAM that goes undetected risks being shared widely across the web, contributing to the revictimization of the child in the material.

Unfortunately, some online platforms don’t proactively detect CSAM and rely only on user reporting. Other platforms that do detect CSAM can only find material that is already known. Thorn’s classifiers are unique in that they detect unknown CSAM, meaning material that exists but has not yet been identified as CSAM, as well as text-based child sexual exploitation: conversations related to CSAM, grooming, sextortion, and other sexual harms against children.

As you can imagine, the sheer volume of CSAM and messages to be reviewed and assessed far outweighs the number of human moderators and hours in the day. So how do we solve this problem?

In order to help find and rescue the children who are being sexually abused in this material, we need to use a robust set of tools including classifiers.

What is a classifier exactly?

Classifiers are algorithms that use machine learning to sort data into categories automatically. 

For example, when an email goes to your spam folder, there’s a classifier at work. 

It has been trained on data to determine which emails are most likely to be spam and which are not. As it is fed more emails, and users continue to tell it whether it was right or wrong, it gets better and better at sorting them.

The power classifiers unlock is the ability to label new data using what they have learned from historical data: in this case, predicting whether new emails are likely to be spam.
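To make the idea concrete, here is a minimal, hypothetical sketch of a spam classifier in Python using scikit-learn. This is illustrative only and is not Thorn’s implementation; the toy emails and labels are made up for the example.

```python
# Minimal sketch of a text classifier (illustrative only, not Thorn's system).
# Requires scikit-learn: pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny, made-up training set: historical emails with human-provided labels.
emails = [
    "Congratulations, you won a free prize, click here",
    "Meeting moved to 3pm, see updated agenda attached",
    "Limited time offer, claim your reward now",
    "Can you review the quarterly report before Friday?",
]
labels = ["spam", "not_spam", "spam", "not_spam"]

# Learn which word patterns correlate with each label.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(emails, labels)

# Apply what was learned from historical data to label brand-new emails.
print(model.predict(["Claim your free reward today"]))       # likely "spam"
print(model.predict(["Agenda for tomorrow's team meeting"]))  # likely "not_spam"
```

Real-world classifiers work the same way in principle, just with far larger training sets and far more sophisticated models.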

How do Thorn’s classifiers work?

Thorn’s classifiers are incredible machine learning models that can find new or unknown CSAM in both images and videos, as well as text-based child sexual exploitation.

Here’s how different partners across the child protection ecosystem use this technology:

  • Law enforcement can identify victims faster as the classifier elevates unknown CSAM images and videos during investigations.
  • NGOs can help identify victims and connect them to support resources faster.
  • Online platforms can expand their detection capabilities and scale the discovery of previously unseen or unreported CSAM, as well as conversations about or leading to child sexual abuse, by deploying the classifiers through Safer Predict, part of our all-in-one CSAM detection solution, Safer.

As previously mentioned, some online platforms don’t proactively detect CSAM and rely only on user reporting. Other platforms use hashing and matching, which can only find already-known CSAM. That’s why Thorn’s technology is a game-changer: we built classifiers to detect unknown CSAM and text-based child sexual exploitation (CSE).
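For contrast, here is a hypothetical Python sketch of hash-based matching, showing why that approach alone only catches files that have already been identified. The hash list and helper names are placeholders, not any real hash list or Safer’s actual implementation; production matching systems also commonly use perceptual hashing so that slightly altered copies can still match, which the plain cryptographic hashing below cannot do.

```python
# Conceptual sketch: why hash matching only finds *known* files.
import hashlib
from pathlib import Path

# Hypothetical set of hashes of previously identified files
# (placeholder values, standing in for an industry hash list).
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def sha256_of(path: Path) -> str:
    """Cryptographic hash of a file's exact bytes."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_known(path: Path) -> bool:
    # A match requires the file to be byte-identical to one seen before;
    # brand-new material will never appear in the known-hash set.
    return sha256_of(path) in KNOWN_HASHES
```

A classifier complements this by predicting whether a never-before-seen file is likely to be CSAM, rather than checking it against a list of files already identified.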

Safer, our all-in-one solution for CSAM and CSE detection, combines advanced AI technology with a self-hosted deployment to detect, review, and report CSAM at scale. In 2023, Safer made a significant impact for our customers, with 1,546,097 files classified as potential CSAM.

In 2023, Safer detected 3,833,792 total CSAM files: 2,287,695 files were matched against known CSAM and 1,546,097 files were classified as potential CSAM.

How does this technology help real people?

Finding new and unknown CSAM and conversations related to child sexual abuse often relies on user reports or manual processes that place the burden on human reviewers. To put it in perspective, you would need a team of hundreds of people working limitless hours to achieve what a classifier can do through automation.

Because new CSAM can represent a child who is actively being abused and conversations can indicate potential exploitation, utilizing classifiers can significantly reduce the time it takes to find a victim and remove them from harm.

A Flickr Success Story

In fact, image and video hosting site Flickr uses Thorn’s CSAM Classifier to help their reviewers sort through the mountain of new content that gets uploaded to their site every day. 

As Flickr’s Trust and Safety Manager, Jace Pomales, summarized it, “We don’t have a million bodies to throw at this problem, so having the right tooling is really important to us.”

One recent classifier hit led to the discovery of 2,000 previously unknown CSAM images. After Flickr reported them to NCMEC, law enforcement conducted an investigation and a child was rescued from active abuse. That’s the power of this life-changing technology.

Technology must be part of the solution if we are to stay ahead of the threats children face in a rapidly changing world. Whether through our products or programs, we embrace the latest tools and expertise to make the world safer so that every child can simply be a kid. It’s because of our generous supporters and donors that our work is possible. Thank you for believing in this important mission.

If you work in the technology industry and are interested in utilizing Safer and the CSAM Classifier for your online platform, please contact info@safer.io. If you work in law enforcement, you can contact info@thorn.org or fill out this application.


