Key points:
- AI-generated child sexual abuse material (CSAM) is increasing online, particularly on dark web forums.
- In a 30-day review of a dark web forum, the Internet Watch Foundation (IWF) found a 17% increase in AI-generated CSAM compared with a similar review conducted six months earlier.
- Most AI-generated CSAM involves manipulating existing abuse footage or adult pornography rather than fully synthetic videos.
- Experts warn that as AI technology improves, the volume and realism of such content could increase.
- This trend poses challenges for law enforcement and tech companies in detecting and preventing such material.
The proliferation of artificial intelligence technology has led to an alarming increase in AI-generated child sexual abuse material (CSAM) online, according to a recent report by the Internet Watch Foundation (IWF).
This disturbing trend highlights the dark side of AI advancements and poses significant challenges for law enforcement, tech companies, and child protection advocates.
In a 30-day review of a dark web forum, the IWF discovered 3,512 AI-generated CSAM images and videos, marking a 17% increase from a similar review conducted just six months earlier.
Dan Sexton, chief technology officer at the IWF, noted a concerning trend: “Realism is improving. Severity is improving. It’s a trend that we wouldn’t want to see.”
Most of the AI-generated content involves manipulating existing CSAM or adult pornography, often by superimposing a child’s face onto the footage. This practice, known as “deepfaking,” has become increasingly sophisticated and accessible due to freely available AI models and tools.
“Some of these are victims that were abused decades ago. They’re grown-up survivors now,” Sexton explained, highlighting how this technology can perpetuate harm to abuse survivors by giving new life to footage of their past trauma.
While fully synthetic AI-generated videos remain relatively uncommon and unrealistic, experts warn that this could change rapidly as the technology improves. The potential for more realistic, entirely fabricated CSAM videos raises serious concerns about the future landscape of online child exploitation.
The rise in AI-generated CSAM presents significant challenges for those working to combat online child exploitation. David Finkelhor, director of the University of New Hampshire’s Crimes Against Children Research Center, points out that deepfaked material may evade current detection methods: “Once these images have been altered, it becomes more difficult to block them.”
Legal experts are grappling with how to address this new form of CSAM. While possessing such imagery remains illegal in the US regardless of whether it is AI-generated, prosecuting its creators may become more complex.
Paul Bleakley, an assistant professor of criminal justice at the University of New Haven, notes: “It is still a very gray area whether or not the person who is inputting the prompt is actually creating the CSAM.”
In response to these challenges, child safety advocates are pushing for legislative changes. In the UK, there are efforts to criminalize both the creation of guides for generating AI-made CSAM and the development of AI models capable of producing such material.