British Technology Firms and Child Safety Officials to Examine AI's Ability to Create Abuse Content
Technology companies and child safety agencies will receive authority to evaluate whether artificial intelligence systems can produce child abuse images under recently introduced British laws.
Significant Increase in AI-Generated Illegal Material
The announcement coincided with revelations from a protection monitoring body showing that cases of AI-generated child sexual abuse material have more than doubled in the past year, growing from 199 in 2024 to 426 in 2025.
Updated Regulatory Structure
Under the changes, the authorities will allow designated AI developers and child protection organizations to inspect AI models (the foundational systems behind conversational and visual AI tools) and verify they have sufficient protective measures to stop them from creating depictions of child sexual abuse.
"This is ultimately about preventing abuse before it happens," said the minister for AI and online safety, adding: "Specialists, under strict protocols, can now identify the danger in AI systems early."
Addressing Legal Challenges
The amendments have been implemented because it is against the law to create and possess CSAM, meaning that AI developers and other parties cannot generate such content as part of a testing process. Previously, authorities had to wait until AI-generated CSAM was uploaded online before addressing it.
This law is designed to avert that issue by helping to halt the creation of such images at source.
Legal Structure
The changes are being added by the authorities as modifications to the crime and policing bill, which is also implementing a prohibition on owning, producing or sharing AI systems designed to create child sexual abuse material.
Practical Impact
Recently, the minister toured the London base of a children's helpline and listened to a simulated call to advisors involving an account of AI-based exploitation. The call portrayed an adolescent requesting help after facing extortion using an explicit deepfake of himself, created with AI.
"When I hear about young people facing extortion online, it is a cause of extreme frustration for me and justified concern amongst families," he said.
Alarming Data
A leading internet monitoring foundation reported that cases of AI-generated exploitation material (such as online pages that may include numerous files) had significantly increased so far this year.
Instances of the most severe category of material, the most serious form of abuse, increased from 2,621 visual files to 3,086.
- Girls were overwhelmingly targeted, accounting for 94% of prohibited AI images in 2025
- Portrayals of newborns to toddlers increased from five in 2024 to 92 in 2025
Industry Reaction
The law change could "represent a vital step to guarantee AI products are safe before they are released," commented the head of the internet monitoring organization.
"AI tools have made it so that victims can be targeted all over again with just a few clicks, giving criminals the ability to make potentially limitless amounts of advanced, photorealistic exploitative content," she continued. "Content which further commodifies survivors' trauma, and makes young people, especially girls, more vulnerable both online and offline."
Counseling Interaction Information
The children's helpline also released details of counselling sessions where AI has been mentioned. AI-related risks discussed in the conversations include:
- Using AI to rate weight, physique and looks
- AI assistants discouraging children from confiding in trusted adults about harm
- Facing harassment online with AI-generated content
- Online blackmail using AI-manipulated pictures
Between April and September this year, the helpline delivered 367 counselling sessions in which AI, chatbots and associated topics were mentioned, significantly more than in the equivalent timeframe last year.
Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including using AI assistants for support and AI therapy applications.