While web filtering helps education settings to control access to AI tools and block harmful content, it doesn’t reveal how students are interacting with AI or what those interactions might tell us about their overall mental health and wellbeing.
For example, could a student be engaging in inappropriate or potentially dangerous conversations with a chatbot?
That’s where digital monitoring plays a critical role.
Digital monitoring refers to safeguarding solutions that identify students at potential risk through what they do, say or share on school-owned digital devices. These tools run silently in the background, so they don’t disrupt teaching or learning.
Monitoring systems identify potential risks by capturing keystrokes and taking screenshots when a potential threat is detected. Alerts are then generated and sent to a designated member of staff at the school, usually the Designated Safeguarding Lead (DSL).
For example, if a student types “how to build a bomb” into an AI chatbot, the word “bomb” should trigger the monitoring system to act. How alerts are managed and communicated depends on the type of digital monitoring in place.
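To make that trigger mechanism concrete, here is a minimal, hypothetical sketch in Python of keyword-based alerting. The watchlist, the Alert fields and the scan_input function are illustrative assumptions, not Smoothwall's actual implementation - real monitoring systems use far larger, weighted term libraries alongside screenshot capture and contextual analysis.

```python
# Illustrative sketch only: captured text is scanned against a watchlist,
# and any match produces an alert record for the safeguarding lead (DSL).
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical watchlist; real systems maintain much larger, weighted libraries.
WATCHLIST = {"bomb", "suicide", "self harm"}

@dataclass
class Alert:
    device: str        # device or account the text was captured from
    matched_term: str  # watchlist entry that fired
    context: str       # surrounding text, kept so a moderator can judge intent
    timestamp: str     # when the match occurred (UTC)

def scan_input(device: str, typed_text: str) -> list[Alert]:
    """Return one alert for each watchlist term found in the captured text."""
    lowered = typed_text.lower()
    now = datetime.now(timezone.utc).isoformat()
    return [
        Alert(device, term, typed_text, now)
        for term in WATCHLIST
        if term in lowered
    ]

# Example: the prompt from the article triggers on the word "bomb".
for alert in scan_input("device-114", "how to build a bomb"):
    print(f"ALERT -> DSL: '{alert.matched_term}' detected at {alert.timestamp}")
```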
Filtering can block access to dangerous AI tools - but it can't show how students use AI, what prompts they write or input, or what those interactions reveal.

Digital monitoring fills this gap in two critical ways: it gives visibility into what students actually type into AI tools, and it surfaces what those interactions may reveal about a student's mental health and wellbeing.
AI-related risks can be subtle and context-dependent. That’s why many education settings are now turning to human-moderated digital monitoring to enhance their safeguarding response.
Unlike exclusively automated systems, human moderation brings expert insight - enabling more accurate interpretation of language, tone and intent. Moderators can assess alerts in real time, spot patterns over time, and pick up on coded or concealed language that might otherwise go unnoticed.
When applied to AI use in education, human-moderated monitoring helps shed light on the AI risks students can be exposed to - from inappropriate or potentially dangerous chatbot conversations to signs of distress in the prompts they write.
Tools that use human-moderated digital monitoring - such as Smoothwall Monitor - are helping schools, colleges and MATs respond more effectively to the risks AI introduces: not just at the point of access, but through the ongoing patterns and behaviours that follow. Learn more here.