AI-generated child sexual abuse: The new digital threat we must confront now
August 13, 2025
6 Minute Read
Generative AI is already shaping how we create, share, and consume content online.
These tools can produce new images, videos, text, and audio in seconds, often from just a single prompt.
While this technology unlocks exciting possibilities, it’s also opening the door to urgent, unprecedented risks to children’s safety.
At Thorn, we’re already seeing the ways generative AI is being misused to exploit and abuse children. But we also know we’re in a critical window to act. If the whole ecosystem – including policymakers, platforms, child protection organizations, and others – acts now, we have the chance to shape this technology before these harms become even more widespread.
Here’s what we see happening today, and what must happen next.
Why hyper-realistic, instantly generated AI imagery is an urgent risk for children
Artificial intelligence isn’t new.
In fact, at Thorn, we’ve been leveraging artificial intelligence and machine learning to fight child sexual abuse and exploitation for over a decade. Our tools use predictive AI to detect child sexual abuse and exploitation at scale. This helps investigators identify more child victims of abuse faster, and disrupts the spread of child sexual abuse on tech platforms.
But what’s new and profoundly different is the explosion of easy-to-use generative AI tools capable of creating hyper-realistic synthetic content. Suddenly, anyone, anywhere can exploit children with just a few clicks.
These tools are now so accessible, and their output so photorealistic, that it is harder than ever to distinguish AI-generated visuals from real ones. This rapid evolution, along with the speed and scale at which harm can spread, poses significant challenges for protecting children.
How generative AI is already being misused to sexually exploit children
Deepfake nudes and AI-generated CSAM
Perpetrators are increasingly using generative AI to create sexually explicit synthetic images of real children, known as AI-generated child sexual abuse material (AIG-CSAM). This includes both fully fabricated images and “deepfake nudes”: real photos of children digitally altered to depict them in sexually explicit ways—without their knowledge or consent.
These violations are not hypothetical. They’re already happening—and in many cases, the perpetrators aren’t strangers, but peers.
Nudifiers – and why they’re a problem
AI-powered “nudify” tools and image generators are widely available online and allow users to digitally undress or sexualize real photos, often in seconds. These tools are also marketed aggressively: in 2024, ads for nudifiers appeared on mainstream platforms, which then faced public backlash over their role in spreading these tools through search results and ad placements.
Peer misuse and school-based harms
Children themselves are increasingly misusing nudify apps to target their classmates. These images often begin as innocent school portraits or social media photos, then get altered with AI tools to show kids in explicit ways. It’s not just theoretical—this is already happening in schools across the country. In one Thorn study, 1 in 10 minors said they personally know someone who has used AI tools to generate nude images of other kids.
The consequences are severe. The content may be fake, but the trauma is real. Victims experience deep emotional harm, including anxiety, social isolation, bullying, and long-term reputational damage. In some cases, schools have had to involve law enforcement or take disciplinary action, while also grappling with how to create policies and education programs that can keep up with rapidly evolving technology.
A crisis of scale and realism
Deepfake nudes are especially dangerous because they appear disturbingly real—blurring the line between synthetic and authentic abuse. Whether the image was generated by a camera or a computer, the psychological toll on victims is often the same.
And as these tools become more realistic and more accessible, existing child protection systems risk becoming overwhelmed. Investigators already face a needle-in-a-haystack problem when trying to identify children in active harm. The influx of AI-generated abuse content only increases that haystack—clogging forensic workflows and making it harder for law enforcement to triage cases, prioritize real victims, and remove them from harm as quickly as possible. AIG-CSAM doesn’t just create new harm; it makes it harder to detect and respond to existing harm.
AI-enabled sextortion
We’ve also seen AI-generated nudes used in sextortion scams. Offenders may create a fake nude of a child, then use it to threaten or extort them for more explicit content or money. Even if the image was synthetically produced, the fear, shame, and manipulation inflicted on the victim are very real.
Thorn’s approach to tackling the child safety risks of generative AI
Generative AI is ushering in new forms of sexual abuse – and revictimization – at an alarming pace.
This is a threat that’s happening right now. And as AI capabilities advance, we risk falling further behind unless we act.
At Thorn, we believe it’s possible to build AI systems with safeguards in place from the start. That’s why we’re working directly with tech companies to embed Safety by Design principles into the development and deployment of generative AI systems. Safety should be a foundation, not an afterthought.
We’re also advocating for policy efforts to ensure that AI-generated child sexual abuse is both recognized under the law as illegal, and proactively addressed before it spreads. At the federal level, current statutes cover much of this activity, but gaps remain. At the state level, additional legislative clarity is often needed.
Creating or sharing AI-generated child sexual abuse material (AIG-CSAM) is illegal under federal U.S. law, which prohibits obscene content involving minors—even if computer-generated. While many states are still updating their laws to explicitly address AI-generated intimate images, arrests have already been made in cases involving the distribution of deepfake nudes of high school students. In most jurisdictions, sharing or generating these images—especially of minors—can lead to criminal charges, including for teens who misuse these tools against their peers.
For a real-world example, see this NYT article on AI-generated child sexual abuse and legal gaps.
Most importantly, we’re helping people understand that AI-generated CSAM is not “fake” abuse. It causes real harm to real children, and it will take collective action to keep them safe.
What you can do now
If you’re a parent or caregiver:
Start early, stay open, and keep talking. Judgment-free, ongoing conversations help kids feel safe coming to you when something doesn’t feel right—especially in a digital world that’s evolving faster than any of us can keep up with. Ask questions, listen closely, and let them know they can always turn to you, no matter what.
If you’re unsure how to begin, our free Navigating Deepfake Nudes Guide offers expert-backed scripts and practical steps to navigate these conversations with confidence.
If you work at a company building or deploying generative AI:
You have the power and the responsibility to help prevent harm before it happens. Commit to building with Safety by Design in mind. Evaluate how your tools could be misused to generate harmful or abusive content, and take action. Learn more about Thorn’s Safety by Design project here.