UK Technology Companies and Child Protection Agencies to Test AI's Capability to Create Abuse Content
Tech firms and child safety organizations will be granted authority to assess whether artificial intelligence systems can produce child abuse images under recently introduced British legislation.
Significant Rise in AI-Generated Illegal Material
The announcement coincided with revelations from a protection watchdog showing that reports of AI-generated CSAM have more than doubled in the last twelve months, rising from 199 in 2024 to 426 in 2025.
Updated Regulatory Structure
Under the changes, designated AI developers and child protection organizations will be permitted to examine AI models – the foundational systems behind conversational AI and image-generation tools – and verify that they have adequate safeguards to prevent them from creating depictions of child sexual abuse.
"Ultimately about preventing abuse before it occurs," stated the minister for AI and online safety, noting: "Specialists, under strict protocols, can now identify the risk in AI systems early."
Addressing Regulatory Obstacles
The amendments have been introduced because producing and possessing CSAM is illegal, which means AI developers and others cannot generate such images as part of a testing process. Until now, officials could act only after AI-generated CSAM had been published online.
The law is designed to avert that issue by helping to stop the production of such images at source.
Legal Structure
The changes are being introduced by the government as revisions to the crime and policing bill, which is also implementing a ban on possessing, producing or distributing AI systems developed to generate child sexual abuse material.
Practical Impact
Recently, the minister visited the London headquarters of Childline and heard a simulated call to advisors involving a report of AI-related abuse. The call depicted a teenage boy seeking help after being blackmailed with a sexualised AI-generated image of himself.
"When I hear about young people facing extortion online, it is a cause of intense frustration in me and justified concern amongst parents," he said.
Alarming Statistics
A leading internet monitoring organization reported that instances of AI-generated exploitation content – each of which can be a webpage containing multiple images – have risen significantly so far this year.
Instances of the most severe category of content increased from 2,621 image and video files to 3,086.
- Girls were overwhelmingly victimized, making up 94% of illegal AI depictions in 2025
- Depictions of children aged two and under increased from five in 2024 to 92 in 2025
Industry Reaction
The legislative amendment could "constitute a crucial step to ensure AI tools are safe before they are released," stated the chief executive of the internet monitoring organization.
"AI tools have made it so survivors can be victimised all over again with just a few clicks, providing offenders the capability to create potentially limitless amounts of advanced, lifelike child sexual abuse material," she added. "Content which additionally exploits survivors' suffering, and makes young people, particularly female children, more vulnerable on and off line."
Counseling Session Data
The children's helpline also published details of counseling sessions in which AI was mentioned. AI-related harms raised in those sessions include:
- Using AI to evaluate weight, physique and appearance
- AI assistants discouraging young people from talking to safe adults about abuse
- Facing harassment online with AI-generated material
- Online blackmail using AI-faked images
Between April and September this year, Childline delivered 367 counseling sessions in which AI, chatbots and associated topics were mentioned – four times as many as in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for emotional support and AI therapy apps.