Watching The Trail...
How Digital Monitoring Spots Students at Risk from AI


While web filtering helps education settings to control access to AI tools and block harmful content, it doesn’t reveal how students are interacting with AI or what those interactions might tell us about their overall mental health and wellbeing.

For example, could a student be engaging in inappropriate or potentially dangerous conversations with a chatbot?
That’s where digital monitoring plays a critical role.

What is digital monitoring?

 

Digital monitoring refers to safeguarding solutions that identify students at potential risk through what they do, say or share on school-owned digital devices. These tools run silently in the background, so they don’t disrupt teaching or learning.

Monitoring systems identify potential risks by capturing keystrokes and taking screenshots when threats are detected. Alerts are then generated and sent to a designated member of staff at the school - usually the designated safeguarding lead (DSL).

For example, if a student types “how to build a bomb” into an AI chatbot, the word “bomb” should trigger the monitoring system to act. How alerts are managed and communicated depends on the type of digital monitoring in place. 
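
As a simplified, hypothetical sketch (not a description of any particular product's implementation), a keyword-triggered pipeline of this kind might look like the Python below: captured text is checked against a watchlist, and a match produces a screenshot reference and an alert for the DSL. The watchlist terms, function names and file paths are illustrative assumptions.

    # Hypothetical sketch of a keyword-triggered monitoring pipeline.
    # Illustrative only - real products use far richer detection than plain keyword matching.
    from dataclasses import dataclass
    from datetime import datetime, timezone
    from typing import Optional

    # Assumed watchlist mapping terms to risk categories.
    WATCHLIST = {
        "bomb": "violence",
        "nudifier": "sexual imagery",
    }

    @dataclass
    class Alert:
        term: str
        category: str
        device_id: str
        captured_text: str
        screenshot_path: str
        timestamp: str

    def capture_screenshot(device_id: str) -> str:
        # Placeholder: a real monitoring agent would capture the screen at the moment of detection.
        return f"/screenshots/{device_id}-{datetime.now(timezone.utc).isoformat()}.png"

    def check_typed_text(device_id: str, typed_text: str) -> Optional[Alert]:
        # Compare captured keystrokes against the watchlist; return an alert on a match.
        lowered = typed_text.lower()
        for term, category in WATCHLIST.items():
            if term in lowered:
                return Alert(term, category, device_id, typed_text,
                             capture_screenshot(device_id),
                             datetime.now(timezone.utc).isoformat())
        return None

    # A phrase like the example above would produce an alert routed to the DSL.
    alert = check_typed_text("laptop-0412", "how to build a bomb")
    if alert:
        print(f"Alert for DSL: '{alert.term}' ({alert.category}) on device {alert.device_id}")

In practice, monitoring tools go well beyond exact keyword matching, layering contextual analysis and, as discussed later, human review on top of rules like these.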



The role of digital monitoring in addressing AI risks

Filtering can block access to dangerous AI tools - but it can't show how students use AI, what prompts they enter, or what those interactions reveal.

Digital monitoring fills this gap in two critical ways:

 

By spotting signs of AI misuse
Students may attempt to use generative AI to cheat, bypass filters, or engage in inappropriate conversations. Some even form unhealthy, synthetic relationships with AI tools - using them as substitutes for companionship or emotional support.

There’s also a growing concern around harmful AI tools like nudifiers, which digitally remove clothing from images. In a school context, this has serious safeguarding implications, including the potential creation of child sexual abuse material (CSAM).

Digital monitoring solutions can discourage misuse of generative AI by flagging incidents such as the use of sexual language, queries linked to safeguarding concerns like eating disorders, or searches for terms like “nudifier”. Indeed, sometimes just the knowledge that digital devices are being monitored can deter network users from acting inappropriately.
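
To make the idea concrete, the hypothetical rule set below groups example phrases under safeguarding categories, each with a severity that could be used to prioritise alerts. Every category name, phrase and severity shown is an illustrative assumption, not a real product's configuration.

    # Hypothetical category rules for flagging risky prompts and searches.
    # Categories, phrases and severities are illustrative assumptions only.
    FLAG_RULES = {
        "sexual imagery": {"severity": "high", "phrases": ["nudifier", "undress photo"]},
        "eating disorders": {"severity": "medium", "phrases": ["how to hide not eating"]},
        "violence": {"severity": "high", "phrases": ["how to build a bomb"]},
    }

    def categorise(text: str) -> list[tuple[str, str]]:
        # Return (category, severity) pairs whose phrases appear in the captured text.
        lowered = text.lower()
        return [
            (category, rule["severity"])
            for category, rule in FLAG_RULES.items()
            if any(phrase in lowered for phrase in rule["phrases"])
        ]

    print(categorise("best nudifier app"))  # -> [('sexual imagery', 'high')]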

By identifying vulnerable students using AI tools
The main role of digital monitoring is to enable DSLs to spot at-risk students early, so interventions can take place before incidents have a chance to escalate. 

Early signs of vulnerability may be revealed in what students type into AI chatbots. Some students use AI tools to open up about personal struggles they may not feel comfortable sharing with adults. For instance, there is a growing trend of young people using AI as a form of therapy. While this may feel safe to the student, it can lead to greater risk as AI chatbots lack the qualifications and expertise to provide appropriate, professional support.

Digital monitoring gives safeguarding staff visibility into these interactions - helping them understand the context and respond appropriately. This can lead to quicker, more informed interventions tailored to the student’s needs.

 


Detecting AI risks with the help of human moderation

AI-related risks can be subtle and context-dependent. That’s why many education settings are now turning to human-moderated digital monitoring to enhance their safeguarding response.

Unlike exclusively automated systems, human moderation brings expert insight - enabling more accurate interpretation of language, tone and intent. Moderators can assess alerts in real time, spot patterns over time, and pick up on coded or concealed language that might otherwise go unnoticed.


When applied to AI use in education, human-moderated monitoring can help:

  • Enforce AI policies with greater visibility
    Provide schools with a clearer picture of how AI tools are being used across the network.

  • Deter unsafe or inappropriate AI use
    Monitoring can act as a deterrent—reducing incidents of misuse by making expectations and oversight clear.

  • Spot students at risk through their digital behaviours
    Identify early signs of distress, unsafe searches or emotionally charged interactions with AI tools.

  • Enable faster, more informed safeguarding decisions
    Offer DSLs detailed, contextual information that supports timely and effective intervention.

 

Our human moderators shed light on the AI risks students can be exposed to.

Tools that use human-moderated digital monitoring - such as Smoothwall Monitor - are helping schools, colleges and multi-academy trusts (MATs) respond more effectively to the risks AI introduces: not just at the point of access, but through the ongoing patterns and behaviours that follow.

Learn more here