
No to nudifying tools: Standing with our partners for children’s safety

February 10, 2026

3 Minute Read

Today, on Safer Internet Day, Thorn is joining child safety organizations worldwide in a unified call to prohibit nudifying tools. We’re proud to stand alongside partners including Safe Online, NCMEC, the Internet Watch Foundation, INHOPE, Child Helpline International, WeProtect Global Alliance, and many others in demanding action from governments, technology companies, and communities to address this growing threat to children.

Why we’re taking this stand

Nudifying tools use AI to generate nude images from clothed photos. Though often marketed to adults, these tools are increasingly weaponized against children, creating new child sexual abuse material (CSAM) from innocent photos.

At Thorn, we believe in working with the tech ecosystem to build safety into products from the start. But in the case of nudifying tools, we believe a different approach is called for. Technology built for the purpose of creating nonconsensual intimate imagery should not exist.

What we know about the impact on young people

Thorn’s research gives us a direct window into how this threat is affecting young people. Our recent study on deepfake nudes and young people found that this isn’t an abstract or future concern—it’s a present reality:

Deepfake nudes are now part of the teen experience. 1 in 8 teens knows someone who has been targeted by deepfake nudes, and 1 in 17 disclosed having been a direct victim of this form of abuse.

Despite recognizing harm, victims often suffer in silence. While 62% of non-victims say they would tell a parent if this happened to them, only 34% of actual victims did. This gap between intention and action reveals just how isolating this experience can be for young people.

Uncertainty about legality persists. 1 in 5 young people believes it is legal to create deepfake nudes of someone else — including of minors. This confusion underscores the urgent need for clear laws, consistent enforcement, and education.

Given this reality, we know that deepfake nude technologies, including nudifying apps, are too accessible. They’re being used against children, and the ecosystem that enables them must be disrupted.

The technology response we need

For more than two years, Thorn has worked with AI developers, platforms, and policymakers through our Safety by Design initiative to establish guardrails against the misuse of generative AI for child sexual abuse. We know what responsible development looks like, and nudifying tools fall far short of it.

The technology community has a responsibility to act immediately: implement safety by design measures that prevent the development and deployment of tools that enable this abuse; deploy robust detection systems to identify and remove nudified content; remove nudifying tools from app stores, hosting services, and search results; and cut off payment processing and monetization for these services.

These are practical steps that responsible developers should take immediately.

A call to action

The joint statement we’re signing calls on governments to enact legislation prohibiting nudifying tools within two years. That means banning their development, distribution, and commercial use; establishing criminal and civil liability for those who enable or profit from this content; and mandating accessibility blocks across platforms and services.

We recognize that legislative approaches will vary across jurisdictions and must be crafted carefully. But the core principle is clear: technology designed to create nonconsensual nude imagery of anyone, and particularly of children, should not exist.

Join the call to say no to nudifying tools

For resources on supporting young people affected by deepfake nudes, visit NCMEC’s Take It Down or explore our research on this topic.

