New Law to Tackle AI-Child Abuse Images at Source as Reports More Than Double
Strengthening Safeguarding in the Age of AI
On 12 November 2025, the UK Government announced new legislation marking a major step forward in protecting children from online harm. This decisive measure takes a stronger stance against the misuse of artificial intelligence (AI) to create synthetic child sexual abuse material (CSAM), a deeply concerning and fast-growing threat.
According to the Internet Watch Foundation (IWF), reports of AI-generated child abuse imagery have more than doubled in the past year, rising from 199 in January–October 2024 to 426 in the same period of 2025. Even more disturbing is the sharp increase in AI-generated depictions of infants aged 0–2, which rose from 5 to 92 over the same period.
This legislation, introduced as an amendment to the Crime and Policing Bill, ensures the government can work directly with the AI industry and child protection organisations to prevent AI models from being misused to create or spread illegal content.
A proactive, prevention-first approach
Previously, safety testing of AI models was severely limited because creating or holding synthetic abuse material, even for testing purposes, carried criminal liability. As a result, harmful content could only be removed after it appeared online. The new law changes that. It empowers designated AI developers and trusted child-safety organisations such as the IWF to act as authorised testers, able to scrutinise AI systems before release. Their role will be to identify and fix weaknesses, ensuring safeguards are in place to prevent AI from producing or amplifying CSAM, indecent images, or other illegal content. This represents one of the first measures of its kind in the world, designed to make sure AI systems are tested robustly and responsibly, reducing the risk of harmful outputs from the very start.
Building in safety, not adding it later
Under the new law:
AI misuse will be tackled at the source – by testing models before deployment, developers can prevent the generation of synthetic abuse images early on.
Authorised organisations will be able to test for vulnerabilities related to child abuse, extreme pornography, and non-consensual intimate images.
Developers will be encouraged and expected to make child protection a core design principle, not an optional safeguard.
An expert advisory group will be established to support the safe and secure implementation of testing, including measures to protect sensitive data and safeguard the wellbeing of researchers working in this emotionally demanding field.
A growing and evolving threat
The IWF’s data underscores the urgency of this step. Not only have reports of synthetic CSAM more than doubled, but the severity of the material has worsened. “Category A” content, involving penetration, bestiality, or sadism, rose from 2,621 to 3,086 items in the past year and now accounts for 56% of all illegal material (up from 41%).
Girls remain disproportionately targeted, representing 94% of illegal AI-generated content in 2025.
The government acknowledges that offenders are increasingly attempting to use AI systems to create indecent deepfakes or manipulate images of real children, both children known to them and those found online, often seeking ways to circumvent safety mechanisms. This new law aims to make such actions significantly harder by ensuring that developers can identify and close off these vulnerabilities before systems reach the public.
Collaboration for a safer digital future
This measure reinforces the UK’s commitment to working together with AI developers, technology platforms, and child protection organisations. It promotes an integrated approach to innovation and safeguarding, one where technological progress and child safety go hand in hand.
At RLB Safeguarding Ltd, we strongly welcome this collaborative, prevention-focused strategy. It reflects a shared commitment across government, industry, and the safeguarding community to ensure that AI is developed responsibly, ethically, and with children’s safety at its heart.
We are particularly encouraged by the focus on:
Proactive prevention – stopping harm before it occurs.
Transparency and accountability – empowering trusted organisations to test and verify safeguards.
Wellbeing and support – ensuring those involved in testing are properly protected.
Public trust – reinforcing confidence that AI technologies can be both safe and socially responsible.
Contact us for more information about our online safety and AI safeguarding training here.
Follow us on X for live updates on new guidance and news.