Making AI safer for students

A practical framework to help you Protect, Detect and Empower

AI is already part of students’ lives

Alongside opportunities, it brings new risks - risks that traditional safeguarding was never designed to reach. SLTs, DSLs and IT teams need a clear, practical way to respond.

Our short checklist can help.

Built around a simple Protect, Detect and Empower approach, it provides you with a tailored framework to strengthen safeguarding and make AI safer for your students.

The 3 key areas of AI safeguarding


Protect

How well are students being protected from harmful or inappropriate AI material?


Detect

How effectively can risks linked to AI tools be identified and addressed early?


Empower

How well does your community understand AI, its risks, and how to use it safely and responsibly?

Safeguarding to Combat the Harmful Effects of AI: A Best Practice Checklist

Is your safeguarding optimised to combat the risks from AI? Complete our checklist to see where you are on that journey, what you're doing right and the areas you may want to develop.

1. A School, College or MAT Approach

1. Have you read the relevant materials from the DfE?
2. Do you have a specific AI platform or platforms that are used across your setting?
For example, CENTURY, ChatGPT, Google AI Tools (Gemini, Notebook LM, AI Studio), Learnt.ai, Olex.AI.
3. Do you know whether your staff understand how to use AI?
4. Have you set up an AI working party consisting of all stakeholders to support your strategy?
Stakeholders include the SLT, DSLs, IT teams, and governors.
5. If so, do you have regular check-ins to assess progress?

2. Policies & Risk Assessments

Policies

6. Do you have an AI Policy?
7. If so, has this been cross-referenced with other policies (Acceptable Use, Code of Conduct, Safeguarding, etc.)?

Risk Assessments

8. Have you assessed the risks of approved AI tools to understand how they may be used/misused?
9. If so, are further risk assessments carried out every time a new AI tool is introduced?

3. Technical Safeguards

Filtering

10. Have you applied your AI policy to your web filtering?
11. Does your web filtering currently block harmful AI content (including image and video) the moment it appears online?
12. Can you control or restrict access to AI chatbots by age group or role?
13. Do you have clear visibility and reporting on AI-related activity to support safeguarding decisions?

Monitoring

14. Does your setting currently use digital monitoring to spot students at risk?
15. Is your cloud storage regularly checked for inappropriate/harmful AI-generated content?

4. Education & Expectations

Staff Training & Expectations

16. Have all staff received training on the code of conduct and expectations for the use of AI?
17. Have staff received training on how to use AI tools effectively?
AI tools can include adaptive learning platforms such as CENTURY, generative writing assistants like ChatGPT, automated feedback systems such as Gradescope, and AI-powered planning tools like TeacherMatic.
18. Are all staff aware of your data protection policy and how this applies to the use of technology, including AI tools?

AI Literacy for Staff and Students

19. Do you have a clear assessment and homework policy on the use of AI tools that is communicated to students and staff?
20. Have you considered how you will provide age-appropriate AI education for students?
21. Do you know which tools you will use to talk to students about this topic?
22. Do you understand how these tools will be used, and what awareness staff and students will need in order to use them?

5. Community Engagement

23. Have you delivered information, advice and guidance for parents and carers about the safe use of AI for their children?
24. Do you regularly share updates with parents on all aspects of digital safeguarding, including AI?

Overall Readiness Status

Based on your responses, you'll receive a weighted analysis showing how well you're doing across Protect, Detect and Empower in your AI safeguarding journey.

How to improve your AI safeguarding with Smoothwall

Smoothwall Filter, Monitor and Online Safety Hub are designed from the ground up to address AI risk head-on. Book an informal, no-obligation demo to compare them with your existing solutions and learn what's possible.

Book a demo today

Supporting your next steps in AI safeguarding

Thousands of schools, colleges and MATs across the UK use Smoothwall solutions to support their approach to Protect, Detect and Empower.


PROTECT

Build a safe digital foundation by underpinning your AI policy with advanced web filtering.

Smoothwall Filter provides:

  • 100% real-time protection to block harmful AI-generated content the moment it appears.

  • Contextual controls that allow you to grant access to approved AI tools for specific users.

  • Protection beyond the browser, to prevent access to harmful AI apps installed on student devices.


DETECT

Apply digital monitoring to identify students put at risk by their use of AI tools.

Smoothwall Monitor provides:

  • Enhanced visibility by analysing keystrokes to identify risk within AI chatbots and tools.
  • 24/7 support from trained human moderators who spot early warning signs and patterns others may miss.
  • Alerts delivered with context, helping safeguarding teams act early, proportionately and with confidence.

EMPOWER

Support your whole community to use AI tools safely and responsibly.

Online Safety Hub provides:

  • Up-to-date content on the key topics and trends relating to AI.
  • Dedicated resources for staff, students, parents, governors and more.
  • Activities to promote meaningful conversations and actions around digital safety.

Watch our live webinar

Keeping Students Safe in the Age of AI:
Our experts explain how you can Protect, Detect and Empower, covering:

  • What keeping students safe means in the age of AI

  • The specific risks schools, colleges and MATs need to know

  • Proactive steps to protect students from AI, detect those most at risk from it and empower staff and students with the confidence to use AI safely.

Further resources

  • Whitepaper: How to Keep Students Safe in the Age of AI (Download PDF)
  • Qoria Paper: Addressing the Risks of AI-enabled CSAM and Explicit Content in Education (Download PDF)
  • Checklist: Safeguarding to Combat the Harmful Effects of AI: A Best Practice Checklist (Download PDF)

Find out how to improve your AI safeguarding with Smoothwall

Smoothwall solutions are designed from the ground up to address AI risk head-on. Book an informal, no-obligation demo today.

  • Compare what you have at the moment with what's possible

  • Receive a live demo of your chosen solution and how it works to combat the harmful effects of AI

  • Get answers to any questions you have and receive a no-obligation quote

Let's connect

Talk to us

Talk to an expert or book a demo. Our Digital Wellbeing experts are waiting to help.

Contact us

Stay in touch

Sign up for our newsletter to get all the latest product information.

Subscribe