British Tech Firms and Child Safety Officials to Test AI's Capability to Create Exploitation Images

Technology companies and child protection organizations will be granted authority to assess whether artificial intelligence systems can generate child exploitation material under new British laws.

Significant Rise in AI-Generated Harmful Material

The announcement coincided with findings from a safety watchdog showing that reports of AI-generated child sexual abuse material have increased dramatically in the last twelve months, growing from 199 in 2024 to 426 in 2025.

Updated Regulatory Structure

Under the amendments, the authorities will permit designated AI companies and child safety organizations to inspect AI models – the foundational technology behind conversational and visual AI tools – and verify that the models have sufficient safeguards to prevent them from creating images of child sexual abuse.

Kanishka Narayan said the measure was "ultimately about preventing abuse before it occurs", adding: "Specialists, under rigorous protocols, can now identify the risk in AI systems early."

Addressing Legal Obstacles

The changes have been implemented because it is against the law to produce or possess CSAM, meaning that AI developers and others cannot generate such images as part of an evaluation process. Until now, authorities had to wait until AI-generated CSAM was published online before dealing with it.

This legislation is designed to prevent that problem by helping to stop the production of such material at source.

Legal Framework

The amendments are being added by the authorities as revisions to the crime and policing bill, which is also implementing a prohibition on owning, producing or sharing AI systems designed to generate child sexual abuse material.

Real-World Impact

Recently, the official visited the London base of a children's helpline and listened to a mock-up call to counsellors involving an account of AI-based exploitation. The interaction portrayed an adolescent seeking help after facing extortion through an explicit deepfake of themselves, created using AI.

"When I learn about children experiencing blackmail online, it is a source of extreme anger in me and rightful concern amongst parents," he stated.

Concerning Statistics

A leading online safety foundation stated that cases of AI-generated exploitation material – such as webpages that may contain multiple images – had significantly increased so far this year.

Instances of the most severe content – the gravest category of abuse – rose from 2,621 to 3,086 images or videos.

  • Girls were predominantly targeted, making up 94% of illegal AI images in 2025
  • Depictions of infants to toddlers increased from five in 2024 to 92 in 2025

Sector Response

The law change could "represent a crucial step to guarantee AI tools are safe before they are released," stated the chief executive of the online safety organization.

"Artificial intelligence systems have made it so victims can be targeted repeatedly with just a few clicks, giving offenders the ability to create possibly limitless quantities of sophisticated, lifelike child sexual abuse material," she added. "Content which further commodifies survivors' suffering, and makes young people, particularly girls, more vulnerable both online and offline."

Counseling Session Data

The children's helpline also released details of support interactions where AI has been mentioned. AI-related harms discussed in the sessions included:

  • Employing AI to evaluate body size, physique and appearance
  • Chatbots dissuading children from talking to trusted adults about abuse
  • Being bullied online with AI-generated content
  • Online blackmail using AI-faked images

Between April and September this year, the helpline conducted 367 counselling sessions in which AI, conversational AI and related terms were discussed – significantly more than in the equivalent timeframe last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.

Chase Pierce