UK Tech Companies and Child Safety Agencies to Test AI's Capability to Create Abuse Images

Technology companies and child protection agencies will receive permission to evaluate whether artificial intelligence systems can generate child exploitation images under recently introduced British laws.

Significant Increase in AI-Generated Illegal Material

The announcement coincided with findings from a protection watchdog showing that cases of AI-generated CSAM have more than doubled in the last twelve months, growing from 199 in 2024 to 426 in 2025.

New Regulatory Framework

Under the amendments, the government will allow approved AI developers and child safety groups to inspect AI systems – the underlying technology for conversational AI and image generators – and ensure they have sufficient safeguards to stop them from producing images of child sexual abuse.

"This is fundamentally about stopping exploitation before it occurs," said the minister for AI and online safety, adding: "Experts, under strict protocols, can now detect risks in AI systems promptly."

Addressing Legal Challenges

The amendments address a legal obstacle: because creating and possessing CSAM is illegal, AI developers and other parties could not generate such content even as part of a testing process. Until now, authorities had to wait until AI-generated CSAM was uploaded online before addressing it.

This legislation aims to prevent that problem by enabling approved testers to stop the production of such material at source.

Legislative Structure

The amendments are being introduced as revisions to criminal justice legislation, which also establishes a prohibition on owning, producing or distributing AI systems designed to generate exploitative content.

Practical Impact

This week, the minister toured the London base of a children's helpline and listened to a mock-up call to counsellors involving an account of AI-based abuse. The call portrayed a teenager seeking help after being blackmailed using a sexualised deepfake of himself, created with AI.

"When I learn about young people experiencing blackmail online, it causes intense anger in me and justified concern amongst parents," he stated.

Alarming Data

A leading internet monitoring foundation reported that cases of AI-generated abuse material – such as online pages that may include multiple images – had more than doubled so far this year.

Cases of the most severe material – the gravest form of abuse – rose from 2,621 visual files to 3,086.

  • Female children were overwhelmingly victimized, making up 94% of illegal AI images in 2025
  • Depictions of infants to toddlers rose from five in 2024 to 92 in 2025

Sector Response

The law change could "constitute a crucial step to guarantee AI products are secure before they are launched," commented the chief executive of the online safety organization.

"AI tools have made it so survivors can be victimised all over again with just a few clicks, providing offenders the capability to make possibly endless amounts of sophisticated, lifelike exploitative content," she added. "Material which further exploits survivors' trauma, and renders children, particularly female children, less safe both online and offline."

Support Interaction Information

Childline also published details of counselling sessions in which AI was mentioned. AI-related harms raised in the sessions include:

  • Using AI to evaluate weight, body and looks
  • Chatbots discouraging young people from consulting safe adults about abuse
  • Being bullied online with AI-generated content
  • Digital blackmail using AI-manipulated pictures

Between April and September this year, the helpline delivered 367 support sessions in which AI, chatbots and related topics were mentioned, significantly more than in the equivalent period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy apps.

Andre Gordon