
How Thorn’s CSAM classifier uses artificial intelligence to build a safer internet

July 11, 2023

5 Minute Read

Child sexual abuse material (CSAM) is the documentation of the horrific sexual abuse of children. In 2022 alone, the National Center for Missing and Exploited Children (NCMEC)’s CyberTipline received more than 32 million reports of suspected child sexual exploitation. The viral spread of CSAM is an exponential problem that we at Thorn and others in the child safety ecosystem are working tirelessly to resolve. 

New CSAM is produced and uploaded to online platforms every day, and it often depicts a child who is actively being abused. CSAM that goes undetected risks spreading widely across the web, contributing to the revictimization of the child in the material.

Unfortunately, some online platforms don’t proactively detect CSAM and rely only on user reporting. Other platforms that do detect CSAM can only find material that has already been identified. Thorn’s CSAM classifier is unique in that it detects unknown CSAM, meaning material that exists online but has not yet been classified as CSAM.

As you can imagine, the sheer volume of CSAM to be reviewed and assessed far outweighs the number of human moderators and hours in the day. So how do we solve this problem?

In order to help find and rescue the children who are being sexually abused in this material, we need to use a robust set of tools including classifiers.

What is a classifier exactly?

Classifiers are algorithms that use machine learning to sort data into categories automatically. 

For example, when an email goes to your spam folder, there’s a classifier at work. 

It has been trained on data to determine which emails are most likely to be spam and which are not. As it is fed more emails, and users continue to tell it whether it was right or wrong, it gets better and better at sorting them.

The power these classifiers unlock is the ability to label new data using what they have learned from historical data, in this case predicting whether new emails are likely to be spam.
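To make the spam analogy concrete, here is a minimal, hypothetical sketch in Python. It uses scikit-learn (our choice for illustration, not anything referenced in this article), and the tiny dataset is invented; a real spam filter is trained on millions of labeled emails.

```python
# A minimal, illustrative sketch of the spam-classifier idea described above.
# The examples and labels below are made up for demonstration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Historical data: emails that humans have already labeled.
emails = [
    "Congratulations, you won a free prize, click now",
    "Meeting moved to 3pm, see agenda attached",
    "Limited time offer, claim your reward today",
    "Can you review the quarterly report before Friday?",
]
labels = ["spam", "not spam", "spam", "not spam"]

# Learn word patterns from the labeled examples.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

# Use what was learned to label new, unseen data.
print(model.predict(["Claim your free reward now"]))             # likely "spam"
print(model.predict(["Lunch tomorrow to discuss the report?"]))  # likely "not spam"
```

The same principle, training on labeled historical examples and then applying the learned patterns to new data, underlies any classifier, whatever the type of content being sorted.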

How does Thorn’s CSAM Classifier work?

Thorn’s CSAM Classifier is an incredible machine learning-based tool that can find new or unknown CSAM in both images and videos. When potential CSAM is flagged for moderator review and the moderator confirms whether or not it is CSAM, the classifier learns from that decision. It continually improves through this feedback loop, getting even smarter at detecting new material.
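To illustrate the feedback-loop concept in the abstract, here is a hypothetical Python sketch. This is not Safer’s actual code or API; it only shows the general human-in-the-loop idea: a moderator’s confirmed decision becomes a new labeled example, and the model is periodically retrained on the growing set of labels.

```python
# Hypothetical illustration of a human-in-the-loop feedback cycle.
# This does not reflect any real product's implementation; it sketches the concept:
# model flags content -> a human reviewer confirms or rejects the flag ->
# that decision becomes a new training label -> the model is retrained.
from dataclasses import dataclass, field

@dataclass
class FeedbackLoop:
    labeled_examples: list = field(default_factory=list)  # (features, label) pairs

    def record_decision(self, features, reviewer_label: bool) -> None:
        """Store a reviewer-confirmed decision as new training data."""
        self.labeled_examples.append((features, reviewer_label))

    def retrain(self, model):
        """Periodically refit the model on all confirmed decisions."""
        if not self.labeled_examples:
            return model
        X, y = zip(*self.labeled_examples)
        model.fit(X, y)  # the classifier now reflects reviewer feedback
        return model
```

The key design point is the loop itself: every human review produces a label, and those labels feed back into training, so detection keeps improving as more material is reviewed.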

Here’s how different partners across the child protection ecosystem use this technology:

  • Law enforcement can identify victims faster as the classifier elevates unknown CSAM images and videos during investigations.
  • NGOs can help identify victims and connect them to support resources faster.
  • Online platforms can expand detection capabilities and scale the discovery of previously unseen or unreported CSAM by deploying the Classifier when they utilize Safer, our all-in-one CSAM detection solution. 

As previously mentioned, some online platforms don’t proactively detect CSAM and rely only on user reporting. Other platforms use hashing and matching, which can only find known CSAM. That’s why Thorn’s technology is a game-changer: we built a CSAM classifier to detect unknown CSAM.

Safer, our all-in-one solution for CSAM detection, combines advanced AI technology with a self-hosted deployment to detect, review, and report CSAM at scale. In 2022, Safer made a significant impact for our customers, with 304,466 images classified as potential CSAM, and 15,238 videos classified as potential CSAM.

How does this technology help real people?

Finding new and unknown CSAM often relies on manual processes that place the burden on human reviewers or on user reports. To put it in perspective, you would need a team of hundreds of people with limitless hours to achieve what a CSAM Classifier can do through automation.

Because new CSAM can represent a child who is actively being abused, utilizing CSAM classifiers can significantly reduce the time it takes to find a victim and remove them from harm.

A Flickr Success Story

Image and video hosting site Flickr uses Thorn’s CSAM Classifier to help its reviewers sort through the mountain of new content uploaded to the site every day.

As Flickr’s Trust and Safety Manager, Jace Pomales, summarized it, “We don’t have a million bodies to throw at this problem, so having the right tooling is really important to us.”

One recent classifier hit led to the discovery of 2,000 previously unknown images of CSAM. Once reported to the NCMEC, law enforcement conducted an investigation, and a child was rescued from active abuse. That’s the power of this life-changing technology.

Technology must be part of the solution if we are to stay ahead of the threats children face in a rapidly changing world. Whether through our products or programs, we embrace the latest tools and expertise to make the world safer so that every child can simply be a kid. It’s because of our generous supporters and donors that our work is possible. Thank you for believing in this important mission.

If you work in the technology industry and are interested in utilizing Safer and the CSAM Classifier for your online platform, please contact info@safer.io. If you work in law enforcement, you can contact info@thorn.org or fill out this application.


