
Generative AI: Now is the Time for Safety By Design

May 26, 2023

5 Minute Read

These days, it seems that breaking news about new generative AI technology is everywhere. 

Generative AI represents a true paradigm shift in our ability to create content and ideas, with potentially significant implications for the ways in which we work and live. 

In the child safety ecosystem, generative AI also changes how child sexual abuse occurs online and how we can combat it. 

As tech companies and organizations race to keep up with new generative AI advancements, Thorn is working hard to defend children as this technology develops – just as we do with all other technologies as they emerge.

The rising prevalence of this technology already carries significant implications for child safety – most of which the child safety ecosystem at large is just beginning to understand and analyze. 

However, one thing we do know at this juncture is that now is the time for safety by design, and AI companies must lead the way to ensure children are protected as generative AI tech is built. 

Now is Our Chance

While everything we know indicates that this new wave of generative AI technology could pose serious threats to children, we do see a silver lining: we’re presented with a unique opportunity to act now to put child safety at the center of this technology as it emerges. 

In fact, we believe we are at an opportune intervention moment in this space – and that now is the time for safety by design. 

Safety by design encourages thoughtful development: rather than retrofitting safeguards after an issue has occurred, technology companies should be considering how to minimize threats and harms throughout the development process. For generative AI, this concept should be expanded to the entire lifecycle of machine learning (ML)/AI: develop, deploy, and maintain. Each of these parts of the process includes opportunities to prioritize child safety.

When considering development, some tactical steps that can be taken include:

  • Remove harmful content from training data, e.g., by hashing and matching the data against hash sets of known CSAM, or by using classifiers and manual review (a minimal hash-matching sketch follows this list).
  • Engage in “red teaming” sessions (the practice of stress testing systems – physical or digital – to find flaws, weaknesses, gaps, and edge cases) to pressure test particular themes and content, e.g., what prompts produce AI-generated (AIG) child sexual abuse material (CSAM).
  • Incorporate technical barriers to producing harmful content, e.g., biasing the model against outputting child nudity or sexual content involving children.
  • Be transparent about training sets (especially in the open source setting) so that collaborators can independently audit the data for harmful content.
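
To make the hash-matching step above concrete, here is a minimal sketch of screening a training corpus against a set of known-harmful hashes. It is illustrative only: the directory layout, the `known_bad_hashes` set, and the use of exact cryptographic hashing are assumptions for brevity. Production pipelines typically rely on perceptual hashing (so near-duplicates also match) and on vetted hash lists obtained through organizations such as NCMEC, and they route excluded material into review and reporting workflows rather than simply discarding it.

```python
import hashlib
from pathlib import Path


def file_hash(path: Path) -> str:
    """Hex digest of the raw file bytes (exact-match hashing, for illustration only)."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def filter_training_images(image_dir: Path, known_bad_hashes: set[str]) -> list[Path]:
    """Return only the image paths whose hashes are NOT in the known-bad set."""
    kept = []
    for path in sorted(image_dir.glob("*.jpg")):
        if file_hash(path) in known_bad_hashes:
            # In practice, a match would be routed to manual review and
            # mandatory reporting, not just silently dropped from the corpus.
            print(f"excluded from training data: {path.name}")
            continue
        kept.append(path)
    return kept
```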

When considering deployment:

  • For cloud-based systems, incorporate harmful content detection at the inputs and outputs of that system, e.g., detecting prompts intended to produce AIG-CSAM and detecting AIG-CSAM that may have been produced (see the input/output guard sketch after this list).
  • For open-source systems, evaluate which platforms you allow to share your technology, e.g. determine if those platforms knowingly host models that generate harmful content.
  • For platforms that share models developed by other organizations and persons, evaluate which models you allow to be hosted on your platform, e.g., only host models that have been developed with child safety in mind.
  • In all cases, pursue content provenance solutions that are built in during development rather than applied as an optional post-processing step, e.g., training a watermark into the decoder of the model itself or releasing ML/AI solutions that can reliably identify content as synthetic.
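
As a rough illustration of the input/output checks described above for cloud-based systems, the sketch below wraps a hypothetical image-generation call with a prompt check before generation and an image scan after it. The `generate_image`, `prompt_classifier`, and `output_classifier` callables are placeholders, not a real API; an actual deployment would use purpose-built classifiers and hash matching (such as Safer provides) and would trigger reporting and escalation workflows rather than simply returning a blocked result.

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class GenerationResult:
    blocked: bool
    reason: Optional[str] = None
    image_bytes: Optional[bytes] = None


def guarded_generate(
    prompt: str,
    generate_image: Callable[[str], bytes],
    prompt_classifier: Callable[[str], bool],    # True -> prompt appears to seek abusive content
    output_classifier: Callable[[bytes], bool],  # True -> generated image appears abusive
) -> GenerationResult:
    """Wrap an image-generation call with input and output safety checks."""
    # Input check: refuse flagged prompts before any generation happens.
    if prompt_classifier(prompt):
        return GenerationResult(blocked=True, reason="prompt flagged by input classifier")

    image_bytes = generate_image(prompt)

    # Output check: scan the generated image before it is returned to the user.
    if output_classifier(image_bytes):
        return GenerationResult(blocked=True, reason="output flagged by image classifier")

    return GenerationResult(blocked=False, image_bytes=image_bytes)
```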

When considering maintenance:

  • As newer models are developed and deployed with safety by design principles, remove access to historical models.
  • Proactively ensure synthetic content detection solutions remain performant on content generated by the newer models (a simple regression-test sketch follows this list).
  • Actively collaborate with special interest groups to understand how your models are being misused.
  • For cloud-based systems, include clear pathways to report violations to the proper governing authority.
  • Share known hashes of AIG-CSAM and known inputs that produce harmful content discovered in this process with the child safety ecosystem.
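
One way to act on the detection-performance point above is to treat it as a regression test: each time a new model is released, re-evaluate your synthetic-content detector on samples generated by that model and gate the release on the result. The sketch below is hypothetical; the `detector` callable, the sample set, and the 0.95 recall threshold are assumptions for illustration, not a prescribed standard.

```python
from typing import Callable


def detector_recall(detector: Callable[[bytes], bool], synthetic_samples: list[bytes]) -> float:
    """Fraction of known-synthetic samples that the detector correctly flags."""
    flagged = sum(1 for sample in synthetic_samples if detector(sample))
    return flagged / len(synthetic_samples)


# Hypothetical release gate: regenerate a benchmark set with each new model
# and hold the release if detection recall drops below an agreed threshold.
# recall = detector_recall(my_detector, samples_from_new_model)
# if recall < 0.95:
#     raise RuntimeError("synthetic-content detector must be updated before release")
```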

How Thorn Helps

Navigating the landscape of emerging technology isn’t new to Thorn; for more than a decade, we’ve stayed ahead of the curve, remaining vigilant to the implications new technologies may have for children and using technology solutions to solve technology problems.

We’re unique in this space, bridging knowledge and fostering collaborations across the entire child safety ecosystem. At the same time, our dedicated team of data scientists is solely focused on understanding the landscape and developing solutions to the unique safety challenges we face, including the challenges new technology like generative AI presents.

One way we facilitate this is through our Safer product, a solution designed to help companies detect, identify, and report CSAM at scale. 

With resources like Safer, our consulting expertise, our strong relationships within the child safety space, and other targeted platform safety offerings – such as red teaming sessions – Thorn is uniquely positioned to work together with the generative AI sector to put child safety at the forefront of innovation.

We are also encouraged by the fact that many key players in the generative AI space are willing to work together with Thorn – as well as others in the ecosystem, including the National Center for Missing and Exploited Children (NCMEC), the Tech Coalition, and others – to safeguard children and build responsibly. 

For instance, OpenAI integrated Safer into its DALL·E 2 generative AI web app. This collaboration exemplifies how we can proactively address potential threats, building safeguards into the technology’s foundational structure.

As we continue to monitor and adapt to the evolving AI landscape, we’re always ready to assist others in doing the same with our own solutions and AI technology. We believe in the power of collective action. By creating strong partnerships, sharing knowledge, and leveraging relationships, we can ensure a safer technological future for children. And it all starts with us, together, prioritizing safety by design.


