10 Practical Strategies Educators Can Implement to Combat AI-Enabled CSAM

By Smoothwall
Published 10 April, 2025
5 minute read

It’s no secret that schools are navigating the rapid emergence of AI in education. In recent years, concerns have largely focused on ethics, curriculum integration and AI’s impact on student behaviour, particularly its role in academic dishonesty. While these are important issues, we suspected that an even greater challenge was on the horizon, one that directly affects student safety and wellbeing: safeguarding, not just cheating, would become the defining AI issue for schools.

In this article, we explore the urgent issue of AI-generated explicit content and child sexual abuse material (CSAM) in schools. Based on insights from hundreds of schools worldwide, we uncover the key risks, highlight the biggest challenges facing educators and provide 10 actionable strategies schools can implement now to protect students.

The Reality of AI-Generated Explicit Content in Schools

To put our concerns to the test, we surveyed over 600 schools across the globe to assess their awareness of, challenges with, and responses to explicit content and CSAM in educational settings. The findings were stark: AI’s impact on explicit content and CSAM isn’t a distant threat; it’s already here, adding immense pressure to already overwhelmed educators and school leaders. Many staff members admitted to being only partially aware of the tools used by perpetrators, while others were completely in the dark.

What’s Impacting Schools the Most?

The rapid pace at which AI technology evolves, combined with stretched resources, limited access to training and the heavy demands on educators’ time, threatens to drive a further increase in the generation of CSAM across schools, colleges and multi-academy trusts (MATs). Our data revealed that 65% of schools regularly manage incidents of students possessing, sharing or requesting nude content. Alarmingly, children as young as eight (reported by 21% of schools) and 11 (58%) were also found to be sharing explicit images online.


How Schools Can Take Action

In response to these concerns, it’s important that digital safety providers, educators, policymakers and safeguarding teams work together to implement meaningful change. Here are 10 practical steps schools can take to enhance safety and mitigate the risks associated with AI-enabled CSAM:

1. Establish AI-Specific Safeguarding Teams 

Create dedicated AI working parties within schools to coordinate policies, training, and response strategies while supporting staff affected by deepfake bullying and defamation.

2. Deploy AI-Powered Monitoring and Filtering Systems

Implement comprehensive digital monitoring and advanced filtering systems that can detect AI-generated risks, coded language and explicit content in real time. Traditional ‘eyes and ears’ safeguarding is no longer sufficient to catch these risks as they emerge.

3. Provide Continuous Professional Development

Provide regular professional development for staff on current digital threats and the use of monitoring tools. This training should go beyond basic awareness, covering topics such as AI manipulation, digital grooming tactics and intervention strategies.

4. Educate Parents on AI Risks

Establish centralised online safety hubs and regular workshops to educate parents about AI risks beyond just screen time concerns, enabling them to protect children at home.

5. Introduce Student Wellbeing Check-Ins

Implement student wellbeing check-in tools that allow for frequent, real-time monitoring of students’ emotional and physical states. These tools can provide early indicators of distress, enabling staff to intervene proactively and offer support before issues escalate.

6. Update School Policies for AI-Related Incidents

Update school policies for AI-related incidents to reflect new risks. This includes clear steps and a harm minimisation approach to supporting victims of deepfake bullying and explicit content exposure, as well as guidance on how to respond to students who share or request inappropriate materials. 

7. Empower Parents with Advanced Parental Controls

Empower parents with parental controls that go beyond screen time management. Schools should offer guidance on tools that can filter content, send alerts, monitor activity and block inappropriate material.

8. Introduce Digital Literacy Programmes Early

Promote age-appropriate digital literacy programmes starting as early as age eight, covering topics such as safe online behaviour, identifying harmful content and reporting incidents.

9. Build Strong External Safeguarding Partnerships

Establish clear communication with external experts, such as child psychologists, law enforcement and cyber safety professionals, to stay ahead of evolving threats. Start by hosting guest speakers or organising expert-led workshops for both staff and parents.

10. Implement Tailored Content Filtering Solutions

Implement comprehensive content filtering solutions tailored to different student groups, year levels and learning needs, providing a safe yet rich online learning experience for students, and control and visibility for school staff.
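
For school IT teams wondering what ‘tailored’ might mean in practice, here is a minimal sketch, in Python, of how per-group filtering policies could be organised. All group names and category labels below are hypothetical and invented purely for illustration; real filtering products each define their own policy formats and categories.

```python
# Hypothetical sketch of per-group filtering policies.
# Group names and category labels are invented for illustration only;
# they do not correspond to any specific product's configuration.

FILTER_POLICIES = {
    "primary": {  # youngest students: strictest defaults
        "blocked_categories": ["adult", "violence", "anonymous_chat",
                               "generative_ai_image_tools"],
        "alert_on": ["explicit_content", "grooming_indicators"],
    },
    "secondary": {
        "blocked_categories": ["adult", "anonymous_chat"],
        "alert_on": ["explicit_content", "grooming_indicators",
                     "coded_sexual_language"],
    },
    "sixth_form": {  # broader access for older students
        "blocked_categories": ["adult"],
        "alert_on": ["explicit_content", "grooming_indicators"],
    },
}

def policy_for(group: str) -> dict:
    """Look up a group's policy, falling back to the strictest one."""
    return FILTER_POLICIES.get(group, FILTER_POLICIES["primary"])

print(policy_for("primary")["blocked_categories"])
```

The key design point in this sketch is that an unrecognised group falls back to the strictest policy, so the system fails safe rather than open.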


Schools Don’t Have to Face This Alone

The rise of AI-enabled CSAM presents a serious challenge, but schools don’t have to face it alone. By adopting a multi-faceted approach that combines technology, training and community involvement, schools can significantly reduce the risks students face online.

At Smoothwall, we’re here to help you keep children safe and thriving in their digital lives. If you’re a school that would like to learn more about this issue, or share your challenges, reach out to us via this form and we’ll be in touch. We look forward to supporting you and your school community.


Empower Your School to Tackle AI-Enabled Explicit Content

Download your free copy of Qoria’s groundbreaking report: Addressing the Risks of AI-enabled CSAM and Explicit Content in Education

Download now
