AI-generated images of children being subjected to sexual abuse threaten to “overwhelm” internet
Thousands of AI-generated images depicting children, some under two years old, being subjected to the worst kinds of sexual abuse have been discovered, amid warnings that abuse of the technology now threatens to “overwhelm” the internet.
New data published today (October 25) by the Internet Watch Foundation (IWF) shows most AI child sexual abuse imagery identified by IWF analysts is now realistic enough to be treated as real imagery under UK law.
The IWF says the most convincing imagery would even be difficult for trained analysts to distinguish from actual photographs, and warns text-to-image technology will only get better and pose more obstacles for the IWF and law enforcement agencies.
The IWF, which is the UK organisation responsible for detecting and removing child sexual abuse imagery from the internet, said its “worst nightmares” have come true as criminals are using AI to generate new imagery of real victims of sexual abuse.
These are real children who have appeared in confirmed sexual abuse imagery, whose faces and bodies have been built into AI models designed to reproduce new imagery of these children.
Criminals are also using AI technology to create imagery of celebrities who have been “de-aged” and depicted as children in sexual abuse scenarios.
The IWF also warns technology is being abused to “nudify” children whose clothed images have been uploaded online for legitimate reasons. Analysts have also seen evidence this content is being commercialised.
The study focused on a single dark web forum dedicated to child sexual abuse imagery.
In a single month:
- The IWF investigated 11,108 AI images which had been shared on a dark web child abuse forum.
- Of these, 2,978 were confirmed as images which breach UK law – meaning they depicted child sexual abuse.
- Of these images, 2,562 were so realistic that the law would need to treat them the same as if they had been real abuse images.
- More than one in five of these images (564) were classified as Category A, the most serious kind of imagery which can depict rape, sexual torture, and bestiality.
- More than half (1,372) of these images depicted primary school-aged children (seven to 10 years old).
- As well as this, 143 images depicted children aged three to six, while two images depicted babies (under two years old).
In June, when the IWF first sounded the alarm on AI imagery, the Foundation confirmed it had discovered seven URLs containing AI-generated child sexual abuse imagery on the open web.
Susie Hargreaves OBE, Chief Executive of the IWF, called for international collaboration on the threats posed by AI.
She said: “Our worst nightmares have come true. Earlier this year, we warned AI imagery could soon become indistinguishable from real pictures of children suffering sexual abuse, and that we could start to see this imagery proliferating in much greater numbers. We have now passed that point.”
“Chillingly, we are seeing criminals deliberately training their AI on images of real victims who have already suffered abuse. Children who have been raped in the past are now being incorporated into new scenarios because someone, somewhere, wants to see it.”
“As if it is not enough for victims to know their abuse may be being shared in some dark corner of the internet, now they risk being confronted with new images, of themselves being abused in new and horrendous ways not previously imagined.”
“This is not a hypothetical situation. We’re seeing this happening now. We’re seeing the numbers rise, and we have seen the sophistication and realism of this imagery reach new levels.”
“International collaboration is vital. It is an urgent problem which needs action now. If we don’t get a grip on this threat, this material threatens to overwhelm the internet.”
The IWF fears a deluge of life-like AI child sexual abuse material could divert resources from detecting and removing real abuse.
In some instances, opportunities to identify and safeguard real children could be missed if analysts’ time is spent investigating thousands of images of artificial children.
Ian Critchley, National Police Chiefs’ Council Lead for Child Protection, said: “In the last five years the volume of online child sexual abuse offending has rapidly increased, with new methods and ways of offending being discovered on a regular basis.
“As police lead, I have been working with the IWF – a world leader in this area – together with partners and law enforcement colleagues, to understand the impact of what we have been calling ‘the emerging threat’ of Artificial Intelligence.
“It is clear that this is no longer an emerging threat – it is here, and now. We are seeing an impact on our dedicated victim identification officers, who seek to identify each and every real child that we find in this abhorrent material. We are seeing children groomed, we are seeing perpetrators make their own imagery to their own specifications, we are seeing the production of AI imagery for commercial gain – all of which normalises the rape and abuse of real children.
“AI has many positive attributes, and we are developing opportunities with partners like the IWF, Government and industry to turn this technology against those who would abuse it to prey on children.
“Together we continue to work at pace to ensure that industry prevents these appalling images being created, shared and distributed on their platforms and that we identify and bring to justice the abhorrent offenders who seek to abuse children. It is also why the Online Safety Act is the most important piece of legislation in many years; to ensure the safety of all children from abusive and harmful material, an increasing amount of which is AI-generated.”
The IWF has said increased availability of this imagery also poses a real risk to the public and serves to normalise sexual violence against children.
Their analysts have discovered online manuals dedicated to helping criminals fine-tune AI image generators to produce more realistic imagery.
Now, with criminals using real children as models for AI image generation, analysts say new imagery can be created at the click of a button.