
How Safer’s detection technology stops the spread of CSAM

August 14, 2020

4 Minute Read

Did you miss the big news?

Last month we announced that Safer, Thorn’s first commercial product, is officially out of beta. Safer is the first comprehensive solution for companies hosting user-generated content to identify, remove, and report child sexual abuse material (CSAM) at scale.

Today we’re looking at how Safer’s detection service identifies CSAM files through a process known as hashing, and how the addition of a machine learning classifier supercharges this process.


What is hashing?

In its simplest form, TechTerms.com defines a hash as “a function that converts one value to another.”

Hashing is used for all kinds of things in computer science, but in the case of Safer it converts a file—such as an image—into a set of values that is unique to that file.

A hash is similar to a fingerprint — just as a fingerprint can be used to identify a person even when they aren’t physically present, a hash can be used to identify a file without having to actually look at it.
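
To make the fingerprint analogy concrete, here is a minimal sketch of cryptographic hashing using Python’s standard hashlib library. It is illustrative only, not Safer’s implementation, and the file name is a placeholder.

```python
import hashlib

def file_fingerprint(path: str) -> str:
    """Return a SHA-256 hex digest that serves as a fingerprint of the file's bytes."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large media files don't have to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Two byte-identical copies of an image produce the same fingerprint,
# so they can be matched without anyone having to view the content.
print(file_fingerprint("upload.jpg"))  # "upload.jpg" is a placeholder path
```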


How do hashes help Safer identify CSAM?

Once an image has a fingerprint, it can be compared to the fingerprints of files already known to be CSAM. Safer matches against a large and growing database of hashes from the National Center for Missing & Exploited Children (NCMEC, the clearinghouse for reports of CSAM in the U.S.) as well as SaferList, through which hashes of previously unknown CSAM reported to NCMEC by Safer customers are added back into the Safer ecosystem for other customers to match against.

That means Safer can flag whether an uploaded image might be CSAM in real time, at the point of upload.
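
As a rough illustration of that flow (not Safer’s API; the hash values below are placeholders), matching at the point of upload amounts to checking the new file’s fingerprint against a set of known hashes:

```python
import hashlib

# Hypothetical set of fingerprints for files previously verified as CSAM,
# e.g., hashes sourced from NCMEC or SaferList. These strings are placeholders.
KNOWN_CSAM_HASHES = {
    "placeholder-digest-1",
    "placeholder-digest-2",
}

def flag_upload(path: str) -> bool:
    """Return True if the uploaded file's SHA-256 digest matches a known hash."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in KNOWN_CSAM_HASHES
```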

Safer uses two types of hashing, cryptographic and perceptual, and understanding perceptual hashing goes a long way toward explaining how Safer can be so efficient. A perceptual hash captures what an image looks like, so visually similar images produce similar fingerprints and can be matched even when they are not byte-for-byte identical. This allows Safer to identify altered copies of an image (resized, cropped, or re-encoded, for example) and determine that they are in fact the same, giving platforms the ability to detect more content faster.
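
Safer’s specific perceptual hashing algorithms aren’t described here, but the idea can be sketched with a basic “average hash”: shrink the image, convert it to grayscale, and record which pixels are brighter than the average. Visually similar images then differ in only a few bits. The sketch below uses the Pillow imaging library and placeholder file names; it is an assumption-laden illustration, not Safer’s implementation.

```python
from PIL import Image  # Pillow imaging library

def average_hash(path: str, size: int = 8) -> int:
    """A simple perceptual hash: shrink, grayscale, threshold each pixel against the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for value in pixels:
        bits = (bits << 1) | (1 if value > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count how many bits differ between two perceptual hashes."""
    return bin(a ^ b).count("1")

# A resized or re-encoded copy of an image yields a nearby hash, so a small
# Hamming distance (a handful of bits out of 64) suggests the same image.
original = average_hash("original.jpg")       # placeholder file names
altered = average_hash("altered_copy.jpg")
if hamming_distance(original, altered) <= 5:
    print("Likely the same underlying image")
```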

Hashing CSAM files also ensures platforms can surface abuse content while maintaining user privacy. And because CSAM is illegal, once it is detected Safer helps platforms take appropriate action, including complying with regulatory obligations to report it.

Hashing automates and streamlines the process while protecting content moderators from viewing traumatic content unnecessarily. Human moderators must still review flagged content to validate whether it is indeed CSAM, but reducing their exposure both protects moderators from trauma and protects victims from having their content viewed any more often than necessary.

It’s also important to remember that behind every matched CSAM file is a human being—a victim of a crime. Protecting this data means protecting a child, and hashing allows for the removal and reporting of CSAM while safeguarding victims.


How does a classifier help?

Hashing is a critical tool in identifying, removing, and reporting CSAM. But until now there has been a key limitation: the technology can only match hashes of files that are already known to be CSAM.

Broadly speaking, classifiers use machine learning to sort data into categories automatically. When an email goes to your spam folder, there’s a classifier at work. It has been trained on data to determine which emails are most likely to be spam and which are not, and as it is fed more emails and users keep telling it when it is right or wrong, it gets better and better at sorting them. The power classifiers unlock is the ability to label new, never-before-seen data: using what it has learned from historical emails, the model can predict whether a new email is likely to be spam.
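
To ground the spam analogy, here is a toy text classifier built with scikit-learn on a tiny made-up dataset. Safer’s classifier works on images and is trained very differently; this sketch only shows how a model trained on labeled examples can label data it has never seen.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Made-up training data: 1 = spam, 0 = not spam.
emails = [
    "win a free prize now",
    "limited time offer, claim your reward",
    "meeting notes from yesterday",
    "can you review my pull request",
]
labels = [1, 1, 0, 0]

# Turn each email into word counts, then fit a simple Naive Bayes model.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(emails)
model = MultinomialNB()
model.fit(features, labels)

# The trained model can now label an email it has never seen before.
new_email = vectorizer.transform(["claim your free reward today"])
print(model.predict(new_email))  # expected: [1], i.e. likely spam
```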

In the same way, Safer’s classifier can help companies identify images that are potentially new or unreported CSAM: files that may not yet have been reported to or processed by NCMEC.

This is an exciting step forward. As more companies utilize Safer’s detection services, including the new classifier, we’re proud to provide a comprehensive solution for finding both known and unknown CSAM.


Let’s build the internet we deserve

That got pretty technical, but perhaps the most distinctive value Safer delivers is that it’s a technical solution with a very human impact.

It’s an interesting technological challenge to solve, but every match represents a survivor whose image is no longer in circulation, and in some cases a child removed from immediate harm.

When we combine technology with humanity, we take a step closer to building an internet where every child can be safe, curious, and happy. That’s the internet we deserve.


