British Technology Companies and Child Protection Organizations to Test AI's Ability to Create Exploitation Content

Technology companies and child protection organizations will receive authority to evaluate whether AI systems can produce child exploitation material under recently introduced British laws.

Substantial Rise in AI-Generated Harmful Content

The announcement coincided with findings from a protection watchdog showing that reports of AI-generated child sexual abuse material (CSAM) have more than doubled in the past year, growing from 199 in 2024 to 426 in 2025.

New Regulatory Structure

Under the changes, the authorities will permit approved AI companies and child safety organizations to examine AI models – the underlying technology for conversational AI and image generators – and verify that adequate safeguards are in place to prevent them from producing images of child exploitation.

"Ultimately about preventing abuse before it occurs," stated the minister for AI and online safety, adding: "Experts, under rigorous conditions, can now identify the danger in AI systems early."

Addressing Regulatory Challenges

The amendments address a legal gap: because producing and possessing CSAM is against the law, AI developers and others cannot generate such content even as part of a testing regime. Until now, authorities could act only after AI-generated CSAM had been uploaded online.

This legislation aims to close that gap by allowing the creation of such material to be halted at its origin.

Legal Structure

The amendments are being introduced by the government as modifications to the Crime and Policing Bill, which also establishes a ban on owning, producing or distributing AI systems developed to create exploitative content.

Real-World Consequences

This week, the official toured the London base of Childline and heard a mock call to advisors involving a report of AI-based exploitation. The call portrayed a teenager requesting help after being blackmailed using an explicit AI-generated deepfake of himself.

"When I hear about children facing blackmail online, it is a cause of intense frustration in me and justified concern amongst parents," he said.

Concerning Statistics

A prominent internet monitoring organization reported that instances of AI-generated abuse material – each instance can refer to an online page containing numerous images – had more than doubled so far this year.

Instances of category A material – the most serious form of exploitation – increased from 2,621 visual files to 3,086.

  • Girls were predominantly victimized, accounting for 94% of prohibited AI images in 2025
  • Portrayals of newborns to two-year-olds increased from five in 2024 to 92 in 2025

Sector Response

The legislative amendment could "represent a vital step to guarantee AI products are secure before they are released," stated the head of the internet monitoring organization.

"Artificial intelligence systems have enabled so survivors can be victimised repeatedly with just a simple actions, giving criminals the ability to make potentially limitless amounts of sophisticated, photorealistic child sexual abuse material," she continued. "Content which additionally exploits victims' trauma, and makes young people, particularly female children, more vulnerable on and off line."

Support Interaction Data

The children's helpline also released details of counselling sessions in which AI was mentioned. The AI-related risks raised in these sessions include:

  • Employing AI to rate body size, physique and appearance
  • Chatbots discouraging young people from consulting trusted adults about abuse
  • Facing harassment online with AI-generated content
  • Digital extortion using AI-faked pictures

Between April and September this year, the helpline conducted 367 support sessions in which AI, conversational AI and associated terms were discussed, significantly more than in the same period last year.

Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including the use of AI assistants for support and AI therapy apps.

Kelsey Short

Cybersecurity expert with over a decade of experience in digital identity and password management, dedicated to helping users stay safe online.