Safeguarding in the Age of AI: The Vital Role of Digital Monitoring

By Smoothwall
Published 27 June, 2025
5 minute read

As generative AI tools become increasingly accessible, schools, colleges and MATs are faced with a range of new safeguarding challenges. Web filters offer a solid first line of defence against these risks - but their scope only goes so far. To truly protect students from AI risks and address security and privacy issues, effective digital monitoring is required.

This article covers what generative AI is, and explores the vital role played by digital monitoring in protecting students and networks from its potential risks.  

What is generative AI?

Generative AI is the form of AI used in chatbots, which can create content in the form of text, images or video in seconds. The technology is powered by large language models (LLMs) - models trained on huge quantities of data that enable them to recognise patterns and produce human-like content. Tools such as ChatGPT, Microsoft Copilot and Google Gemini are examples of generative AI.

AI chatbots are used to perform a wide range of tasks, including writing, research, problem-solving, planning and basic admin.

While generative AI can aid productivity and contribute to educational activities, it also poses risks, especially to students. As a result, the Department for Education (DfE) instructs that students “should only be using generative AI (...) with appropriate safeguards in place”, including “filtering and monitoring features.”

Before approving the use of AI technology, education settings are urged to put clear policies in place regarding its use. These should be informed by risk assessments which consider GDPR and data protection laws. Further guidance on how to establish AI policies in schools and colleges can be found in our summary of Department for Education guidance on generative AI.


What is digital monitoring?

Digital monitoring refers to safeguarding solutions that identify students at potential risk through what they do, say or share on school-owned digital devices. Monitoring software runs in the background, so it does not interfere with teaching and learning. 

Monitoring systems identify potential risks by registering keystrokes and taking screenshots when threats are detected. Alerts are then created and sent to a designated staff member at the school (usually the designated safeguarding lead, or DSL). 

For example, if a student types “how to build a bomb” into Google, the word “bomb” should trigger the monitoring system to act. How alerts are managed and communicated depends on the type of digital monitoring in place. 
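For the technically curious, the trigger-and-alert flow described above can be sketched in a few lines of Python. This is a purely hypothetical illustration of the general technique - the names, watchlist and logic here are assumptions for demonstration, not Smoothwall's actual implementation, which uses far richer detection than simple keyword matching:

```python
# Hypothetical sketch of keyword-triggered alerting on a captured text buffer.
# Real monitoring products consider context, phrasing and coded language;
# this watchlist and alert structure are illustrative only.

from dataclasses import dataclass
from datetime import datetime, timezone

WATCHLIST = {"bomb", "nudifier"}  # illustrative risk terms only


@dataclass
class Alert:
    term: str       # the watchlist term that matched
    snippet: str    # surrounding text, for context when the DSL reviews it
    timestamp: str  # when the match occurred


def scan_keystrokes(text: str) -> list[Alert]:
    """Check a captured text buffer against the watchlist and raise alerts."""
    alerts = []
    for term in WATCHLIST:
        if term in text.lower():
            alerts.append(Alert(
                term=term,
                snippet=text[:80],
                timestamp=datetime.now(timezone.utc).isoformat(),
            ))
    return alerts


# The search from the example above would raise a single alert on "bomb".
print([a.term for a in scan_keystrokes("how to build a bomb")])
```

In a real system, each alert would typically be paired with a screenshot and routed to the DSL; in human-moderated monitoring, a trained moderator would review the context before the school is contacted.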


How does digital monitoring address AI risks?

Web filters can help schools and colleges protect students from harmful AI content and block access to AI tools not permitted by school policies. However, they cannot see what students do within AI tools, or surface the risks those behaviours reveal. 

Digital monitoring empowers education settings to address these gaps and identify students at potential risk early - often reducing the need for interventions further down the line. 

Discourage misuse of AI tools 


Web filters are not designed to see what students are inputting into generative AI tools. This creates a serious visibility gap for education settings, as students can use chatbots in inappropriate ways.

Students may craft prompts designed to help them cheat on tests, circumvent school safety measures or even have sexually explicit conversations with a chatbot. The latter is becoming increasingly common, as chatbots are seen as potential companions by some young people. This can lead to them forming synthetic relationships with AI chatbots that are inappropriate, addictive, and unhealthy. 

Students may also search for harmful generative AI tools such as nudifiers, which enable users to remove the clothing of people in images. In a school context these tools could facilitate the creation of child sexual abuse material (CSAM). 

Digital monitoring solutions can discourage misuse of generative AI by flagging incidents such as the use of sexual language or searches for terms like “nudifier”. Indeed, sometimes just the knowledge that digital devices are being monitored can deter network users from acting inappropriately.

Identify vulnerable students


The main role of digital monitoring is to enable DSLs to spot at-risk students early, so interventions can take place before incidents have a chance to escalate. 

Early signs of vulnerability may be revealed in what students type into AI chatbots. For example, there is a growing trend of young people using AI chatbots as therapists, because they feel more comfortable discussing their issues in online spaces than they do in person with adults. Of course, AI chatbots lack the skills and experience of trained therapists, so using them as a therapy replacement should not be encouraged. 

When these incidents do occur, digital monitoring can allow DSLs to gain a full contextual picture of a student’s interaction with a chatbot. This gives them an insight into the specific issues affecting a student, and enables them to take quick, informed action to meet the student’s needs. 

Detect AI risks with human-moderated digital monitoring


Smoothwall Monitor’s human-moderated digital monitoring enables schools and colleges to identify and manage risks revealed through the use of AI tools. 

As well as quickly alerting DSLs to threats to health or life, human moderators can detect concerning patterns in digital behaviours that may otherwise go unnoticed by busy staff. Moderators are trained experts, able to read between the lines and interpret even coded language that is used to hide inappropriate or harmful activities.

In terms of the specific challenges presented by AI, human-moderated digital monitoring helps education settings to:

  • Manage adherence to school policies concerning AI
  • Deter students and staff from misusing generative AI
  • Identify vulnerable students through their interactions with AI tools

If you have any questions about AI in education, or would like to book a short demo of Smoothwall Monitor for your setting, contact enquiries@smoothwall.com. We’re ready to help.

Learn more about human-moderated digital monitoring


Discover more about the most effective form of monitoring by downloading our Human-Moderated Digital Monitoring factsheet.

