AI-Enabled CSAM: Spot the Signs - How AI is Enhancing Grooming Methods

By Smoothwall
Published 19 December, 2024
3 minute read

Smoothwall’s parent company, Qoria, has released a first-of-its-kind report exploring the risks AI-enabled CSAM poses to students. A critical aspect of this issue is how perpetrators of child sexual abuse (CSA) are leveraging AI to customise their advances, refine deception methods and successfully manipulate their targets. 

This article sheds light on how predators are using AI to enhance grooming. The full report, which includes practical strategies schools can use to mitigate the risk of AI-enabled grooming and CSAM, can be found below.

Schools around the globe are concerned

Qoria surveyed more than 600 schools across the United States, UK, Australia and New Zealand. The vast majority expressed concern about the potential for adult perpetrators to leverage AI to groom their students:

  • US: 91.4%
  • UK: 90.5%
  • Australia: 91.7%
  • New Zealand: 82.8%

Understanding how perpetrators are using AI to target children can assist schools in recognising the early warning signs and strengthen their response and prevention efforts.

Applying AI to the first three stages of grooming

1. Targeting the victim

Perpetrators often target children who appear vulnerable or lack supervision. This can include those with indicators of low self-esteem, mental health concerns, insecurity, or a lack of attachment to their families.

End-to-end encrypted (E2E) messaging services, including Snapchat, WhatsApp, Signal and Telegram, are common platforms used by perpetrators.

How is AI an enabler?

AI algorithms support perpetrators by analysing vast amounts of data to:

  • Identify vulnerable children with greater precision.
  • Detect patterns in a child's behaviour, interests, sentiment and emotional state that can be used when approaching the child, or to discredit the child or those close to them.

2. Gaining trust

Abusers pose as friends, celebrities or other people to pique the interest of children and ask questions to understand their home life and situation. They often offer gifts or rewards to gain the child's trust.

How is AI an enabler?

  • Abusers often use generative AI to create convincing fake personas, tailored to appear realistic, relatable and trustworthy to the target child.
  • AI can also generate fake images or credentials, for example, to convince a child that someone is who they claim to be.

3. Filling a need

Perpetrators often strive to fill a need in the child's life by providing emotional support, gifts or rewards, creating a sense of dependence on and loyalty to the perpetrator. They may simultaneously discredit the child's support circle, such as parents or friends, compromising the child's other relationships to solidify a close attachment between themselves and the child.

How is AI an enabler?

  • AI tools can be used to create fictitious information that discredits people close to the child.
  • Perpetrators also use AI to generate and share explicit or pornographic content with the child, desensitising them to sexual content in general. This is an attempt to “normalise” the later stages of the grooming process.

There are several more stages to grooming, and students may exhibit noticeable behavioural changes as this gradual process unfolds. Recognising the warning signs early is essential to preventing harm from escalating.


Learn how to spot the signs of AI-enhanced grooming

Stay vigilant to the signs of grooming and gain strategies your school can implement now to mitigate the risk of AI-enabled CSAM by downloading Qoria’s must-read report: Addressing the Risks of AI-enabled CSAM and Explicit Content in Education

To discuss any of the issues raised in the report, or learn more about mitigation strategies, get in touch at enquiries@smoothwall.com. We’re ready to help.

