British Technology Firms and Child Safety Officials to Test AI's Capability to Generate Exploitation Content

Tech firms and child protection organizations will be granted authority to assess whether artificial intelligence systems can produce child exploitation material under new UK legislation.

Substantial Rise in AI-Generated Illegal Content

The announcement coincided with revelations from a protection watchdog showing that reports of AI-generated CSAM have increased dramatically in the last twelve months, growing from 199 in 2024 to 426 in 2025.

Updated Regulatory Framework

Under the amendments, the government will allow designated AI developers and child protection groups to inspect AI models – the foundational technology behind conversational AI and image generators – and verify they have adequate safeguards to prevent them from creating depictions of child exploitation.

"This is ultimately about stopping abuse before it happens," declared Kanishka Narayan, adding: "Specialists, under strict conditions, can now identify the risk in AI models early."

Tackling Regulatory Obstacles

The amendments have been introduced because it is illegal to create and possess CSAM, meaning that AI developers and other parties cannot generate such content even as part of an evaluation process. Until now, authorities had to wait until AI-generated CSAM was uploaded online before dealing with it.

This legislation is aimed at averting that issue by enabling designated testers to halt the production of such material at source.

Legal Framework

The changes are being introduced by the government as modifications to the crime and policing bill, which is also establishing a prohibition on owning, creating or sharing AI models developed to generate exploitative content.

Real-World Consequences

This week, the minister visited the London base of a children's helpline and heard a mock-up call to counsellors involving a report of AI-based abuse. The interaction depicted a teenager seeking help after being blackmailed with an explicit deepfake of himself, created using AI.

"When I hear about children experiencing extortion online, it is a cause of intense frustration for me and of justified anger amongst families," he said.

Alarming Data

A prominent internet monitoring foundation stated that cases of AI-generated exploitation content – counted as webpages, each of which may contain numerous images – had significantly increased so far this year.

Instances of the most severe category of material – the most serious form of exploitation – rose from 2,621 visual files to 3,086.

  • Female children were overwhelmingly victimized, making up 94% of illegal AI depictions in 2025
  • Depictions of infants to two-year-olds increased from five in 2024 to 92 in 2025

Sector Reaction

The legislative amendment could "represent a vital step to guarantee AI products are secure before they are released," stated the head of the internet monitoring organization.

"AI tools have made it so survivors can be targeted repeatedly with just a few simple actions, giving offenders the capability to create potentially limitless quantities of sophisticated, photorealistic child sexual abuse material," she added. "Material which further exploits survivors' trauma, and makes young people, especially female children, more vulnerable on and offline."

Counseling Session Data

The children's helpline also released data from counselling sessions in which AI was referenced. AI-related risks discussed in the conversations include:

  • Employing AI to evaluate weight, physique and appearance
  • AI assistants discouraging young people from speaking to trusted adults about harm
  • Facing harassment online with AI-generated material
  • Online blackmail using AI-faked images

Between April and September this year, Childline delivered 367 counselling sessions in which AI, chatbots and associated terms were mentioned, significantly more than in the equivalent period last year.

Fifty percent of the mentions of AI in the 2025 sessions were related to mental health and wellbeing, including using AI assistants for support and AI therapy applications.

Erin Wilson

Tech enthusiast and seasoned reviewer with over a decade of experience in consumer electronics and digital trends.