© Sophie Lewis | The Grooming Files

In the Internet Watch Foundation's January to October 2024 reporting window, there were five reported cases of AI-generated child sexual abuse material depicting infants aged 0 to 2 years old.

In the same period in 2025, there were ninety-two.

Let that number sit with you for a moment.

Not five more. Not fifteen more. Ninety-two. A 1,740 percent increase in one year.

While you were teaching your children about stranger danger, predators were downloading their photos from Facebook. While you were setting privacy settings on Instagram, AI tools were turning innocent images into abuse material. While you thought you were doing everything right, a threat you did not even know existed was targeting your child.

This is the AI-generated child sexual abuse material crisis. And if you have never heard of it, you are not alone. Most parents have not. That is exactly what makes it so dangerous.


What AI-Generated CSAM Actually Is

Let us be clear about what we are talking about. No euphemisms. No softening.

AI-generated child sexual abuse material is exactly what it sounds like. Sexually explicit images or videos of children created using artificial intelligence. These are not cartoons. They are not artistic renderings. They are hyper-realistic depictions of child abuse generated by AI tools, often using real children's faces.

Here is how it works.

A predator finds a photo of your child. Maybe it is from your public Facebook profile. Maybe it is from your child’s school website showing the football team. Maybe it is from a church group photo you did not even know was online.

They download it.

Then they use freely accessible AI tools. Apps you can find with a simple Google search. Nudify apps. Deepfake generators. Tools that can take your child’s face and place it on abuse imagery. Tools that can generate entirely new explicit content using just one innocent photo as the base.

Within thirty seconds, they have your child's image.

Within minutes, they have weaponised it.

No grooming required. No contact needed. No months-long manipulation. Just your child's photo and an AI tool.

This is what modern predators look like. And they are operating at a scale we are only beginning to understand.


The Scale No One Is Talking About

The numbers coming out of the Internet Watch Foundation’s 2025 report should terrify every parent in this country.

AI-generated CSAM reports more than doubled year on year, from 199 in 2024 to 426 in 2025 within the same January to October window.

Category A content, the most severe classification, covering penetrative sexual activity, rose from 2,621 items to 3,086. That is now 56 percent of all illegal AI-generated material, up from 41 percent the previous year.

Girls make up 94 percent of identified victims in AI-generated imagery.

And those infants aged 0 to 2 years old? That surge from 5 to 92 cases makes them the fastest-growing demographic being targeted.

But here is what keeps me up at night. These are only the cases we know about.

The Internet Watch Foundation conducted a focused investigation of one dark web forum over one month. They found 20,254 AI-generated images posted. After detailed assessment, 2,978 were confirmed as child sexual abuse material.

One forum. One month. Nearly 3,000 images.

The United States National Center for Missing and Exploited Children's CyberTipline has received thousands of reports involving generative AI and expects those numbers to continue rising.

And none of this accounts for images never reported. Images traded in private channels. Images created and kept for personal use. Images that never surface until a predator is caught for something else entirely.

We are seeing the tip of an iceberg. And the water is rising fast.


The Victimless Crime Lie

Some will try to tell you AI-generated CSAM is a victimless crime because no child was directly abused to create it.

Let me be crystal clear. That is predator logic dressed up as legal theory.

Here is why AI-generated CSAM is absolutely not victimless.

It normalises abuse. Every image generated reinforces the idea that children are sexual objects. It feeds the demand. It emboldens predators who might otherwise hesitate.

It is used for grooming. Predators show AI-generated images to real children to normalise sexual content. "Look, other kids do this. This is what I want from you." The images become tools for manipulation.

It can depict real children. Your child's face, taken from an innocent photo, placed on explicit imagery. When they discover it, and some do, the psychological harm is devastating. Their body was not touched, but their image was violated.

It perpetuates the market. As long as content is being created and shared, there is a network sustaining itself. AI-generated material does not replace traditional CSAM. It supplements it. It grows the ecosystem.

It trains predators. Offenders do not exist in isolation. They share techniques. They refine methods. AI-generated content provides practice for those who may escalate to contact offending.

And here is the part that should horrify everyone. Many AI-generated images are not entirely synthetic. They are manipulations of real children's photographs. Your child's actual face on fabricated abuse.

That is not victimless. That is your child being exploited without ever being touched.


How They Are Getting Away With It

You want to know why this is exploding? Because the infrastructure to stop it does not exist.

Training datasets contain abuse material. Researchers at the Stanford Internet Observatory identified thousands of suspected child sexual abuse images, with over a thousand confirmed, within LAION-5B, a dataset used to train Stable Diffusion and other major AI models. These systems were trained on vast scraped datasets that included illegal content. Which means they are capable of reproducing it.

The tools are accessible. Not hidden on the dark web. Not requiring technical expertise. Apps on mainstream platforms. Tutorials on YouTube and TikTok. Nudify tools marketed as harmless fun but weaponised for abuse.

International operations are nearly impossible to track. Predators in one country. Servers in another. Victims’ images from a third. Law enforcement jurisdictions do not overlap. Coordination is slow. By the time one country acts, the operation has moved.

Platforms profit from the tools. Some of the apps used to create this material are monetised. Subscriptions. Premium features. Advertising revenue. Companies are making money from tools that enable abuse and claiming they cannot control how users deploy them.

Let me give you real examples of how this plays out.

A child psychiatrist in the United States was convicted of using web-based tools to create nude images of children he knew for his own sexual gratification. He did not need to touch them. He did not need to groom them. He just needed their photos.

A former school employee was charged with using AI to create child sexual abuse material of children under his care. He used photographs he had taken himself or obtained from parents. Photos that were innocent in context until they were not.

These are not isolated incidents. These are patterns. And they are accelerating.


When Kids Become Creators

Now add this nightmare layer. Children are creating AI-generated abuse material of their peers.

A report from Thorn found that 1 in 10 minors report knowing of cases where friends or classmates created synthetic deepfake nude images of other children using generative AI tools.

Read that again. One in ten.

Children do not understand they are creating child sexual abuse material. They think it is a prank. A joke. A way to humiliate someone they do not like. They do not grasp that what they have created is a criminal offence that could follow them for life.

But here is the system failure. We are criminalising children who do not understand the technology whilst letting platforms profit from the tools.

A thirteen year old uses a nudify app to create a fake image of a classmate. That child is now technically in possession of CSAM. They could be prosecuted. Registered. Marked for life.

Meanwhile, the app that enabled it is still available for download. Still monetised. Still marketed.

Schools are completely unprepared for this. Teachers do not know how to respond when a student reports being deepfaked by a classmate. Policies do not exist. Procedures are not written. And by the time anyone figures out what to do, the image has already been shared across school networks.

This is a safeguarding crisis happening in real time. And we are flying blind.


What the UK Is Doing and Why It Is Not Enough

In February 2025, the UK became the first country in the world to announce legislation specifically targeting AI-generated child sexual abuse material.

The measures include the following.

Making it illegal to possess, create or distribute AI tools designed to generate CSAM, with sentences of up to five years in prison.

Criminalising AI-generated paedophile manuals, punishable by up to three years in prison.

Allowing trusted organisations such as the Internet Watch Foundation to test AI models for their ability to generate abuse material before public release.

Giving Border Force officers powers to compel individuals to unlock digital devices at UK entry points when CSAM offences are suspected.

This is progress. Real and tangible progress. And the UK should be commended for leading where others have hesitated.

But let us be honest about the gaps. Because they are massive.

The legislation does not address CSAM embedded in training datasets. If AI models are trained on abuse material, they will continue generating it. We are treating the symptom, not the disease.

Enforcement is unclear. How do you prove a tool was designed for abuse when the same technology has legitimate uses? How do you track international developers? How do you stop tools hosted outside UK jurisdiction?

Platform accountability is weak. Apps hosting nudify tools face minimal consequences. Social platforms where images are shared self-regulate. The companies profiting from abuse-enabling ecosystems are not being meaningfully held to account.

International coordination is lacking. AI-generated CSAM is borderless. Our response is not.

The UK has taken the first step. But this requires global action. And we are nowhere near that yet.


What Parents Need to Know Now

This is the part where I am supposed to give you a checklist and tell you everything will be fine if you follow it.

I am not going to do that.

Because the truth is uncomfortable. You cannot fully prevent this as an individual parent.

Every public photo is vulnerable. School websites. Sports team pages. Church group photos. Images shared by other parents. Your child exists in a digital ecosystem you do not fully control.

But you are not powerless.

What you can do.

Limit public photos of your children and understand that privacy settings do not prevent screenshots or redistribution.

Check where your child's image appears online, including school sites, clubs and organisations.

Push schools and groups to adopt strict photo consent policies.

Teach children that any image can be manipulated, not just sexual images.

Know how to report and remove content quickly.

If your child is affected.

Report Remove via Childline and the Internet Watch Foundation.

Take It Down, run by the National Center for Missing and Exploited Children.

Report directly to the platform involved.

This is not about parent blaming.

This is about systems that built powerful tools without safeguards and left families holding the risk.


What Must Change

We cannot rely on individual parents to solve a systemic crisis. This requires structural change. Policy change. A complete rethinking of how we regulate AI technology when it intersects with child safety.

Here is what must happen.

First. Ban tools designed for abuse, not just criminalise possession.

Criminalising users is reactive. If a tool’s primary or common use is creating abuse material, it should not exist. Developers should face consequences for creating these tools, not just users for deploying them.

Second. Clean training datasets before models are released.

Every major AI model should be required to prove its training data is free of CSAM before public release. If researchers can find illegal abuse material inside a dataset like LAION-5B, developers can too. They chose not to look. Make them look.

Third. Hold platforms accountable for hosting abuse enabling tools.

Apps that host nudify tools should face the same liability as platforms that host CSAM itself.

Fourth. Build international coordination mechanisms.

This is a borderless crime. The response must be borderless too. We need treaties, shared databases and coordinated enforcement.

Fifth. Invest in AI detection tools for law enforcement.

If predators are using AI to create abuse material, law enforcement must be equipped to detect it.

Sixth. Protect children who create this material through diversion, not criminalisation.

A thirteen year old who creates a deepfake of a classmate has done something harmful. But they are not a predator. They are a child who does not understand the technology.


The Line We Draw

We are in a crisis that most parents do not even know exists.

While we are teaching stranger danger, predators are downloading our children’s faces from Facebook and weaponising them in minutes.

While we are setting privacy settings, AI tools are turning innocent photos into abuse material faster than law enforcement can track.

While we are doing everything we have been told keeps children safe, the threat has evolved and our protections have not kept pace.

This is not a future threat. This is happening now.

To children whose parents did everything right.

To infants whose photos were posted by proud grandparents.

To teenagers whose classmates thought deepfaking them would be funny.

To children whose faces were harvested from school websites they did not even know existed.

The question is not whether we can stop all of it. We cannot. Not as individual parents.

The question is what are we willing to do to try.

Because if we are not willing to adapt, predators already have.

They are downloading your child’s photo right now.

And within minutes, they will have weaponised it.


Sophie Lewis
Survivor. Journalist. Truth teller.
The Grooming Files
www.thegroomingfiles.com


If you need support

UK NSPCC Helpline 0808 800 5000
Report Remove via Childline
Internet Watch Foundation www.iwf.org.uk
National Center for Missing and Exploited Children www.missingkids.org


Published January 2026

