Fortifying Your Campsite...
How to Filter Harmful AI Content Online


As part of enforcing AI policies and safeguarding students, schools, colleges and MATs must consider the role of web filtering in keeping both staff and students safe from harmful AI-generated content.

The role of filtering in the Age of AI

 

Web filters play an essential role in helping schools manage the online risks associated with AI-generated content, offering control, visibility and protection.

An effective filter should be able to:

  • Block access to harmful websites
    This includes sites created or manipulated by AI to spread misinformation, deepfakes or inappropriate content.

  • Prevent access to high-risk AI platforms
    Especially those that are unmoderated or deemed unsuitable for use in education settings.

  • Tailor access to AI tools
    Set rules based on year group, user role or time of day to ensure appropriate access for different users.

  • Block unsafe or inappropriate search terms
    Reducing the likelihood of students encountering harmful AI-generated material via search engines.

Filtering technologies at a glance

Not all web filters handle AI content equally. Understanding the differences is crucial to choosing the right protection:

DNS Filters

These filter based only on domain names (e.g. chat.openai.com). They offer no visibility of the content being accessed, so every decision is binary: block or allow.
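As a minimal sketch of that binary behaviour (the blocked domain is a made-up placeholder, not a real site or a recommendation):

```python
# Sketch of DNS-based filtering: the filter only ever sees the domain
# being looked up, never the page content, so each decision is a plain
# block/allow. The domain below is an illustrative placeholder.
BLOCKED_DOMAINS = {"example-unmoderated-ai.com"}

def dns_filter(domain: str) -> str:
    # Match the exact domain or any subdomain of a blocked entry.
    parts = domain.lower().split(".")
    for i in range(len(parts)):
        if ".".join(parts[i:]) in BLOCKED_DOMAINS:
            return "block"
    return "allow"
```

Because the whole domain is blocked or allowed, there is no way to permit safe pages on a site while blocking harmful ones.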

URL Filters

Allow or restrict access to web pages based on assessments of the full URL. Useful for some search engines, but ineffective with AI tools like ChatGPT, where prompts and responses don’t appear in the URL.
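The blind spot can be sketched as follows, with hypothetical URLs and a placeholder blocked term: a search query exposes its terms in the address, but an AI chat prompt travels in the request body, leaving the URL clean.

```python
# Sketch of why URL filtering misses AI chat content: a URL-based rule
# can only match on the address string. The pattern, URLs and term
# below are illustrative placeholders.
import re

URL_BLOCK_PATTERNS = [re.compile(r"harmful-term", re.IGNORECASE)]

def url_filter(url: str) -> str:
    return "block" if any(p.search(url) for p in URL_BLOCK_PATTERNS) else "allow"

# A search query carries its terms in the URL, so the rule can act...
search_url = "https://search.example.com/?q=harmful-term"
# ...but an AI chat prompt is sent in the POST body; the URL reveals nothing.
chat_url = "https://chat.example.com/api/conversation"
chat_body = '{"prompt": "harmful-term ..."}'
```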

Content-aware filters

Analyse a web page’s content to decide if it’s safe - but not at the point the page is accessed by an individual.
Instead, decisions may be based on older versions of a web page, sometimes from days or weeks ago.
With AI content changing rapidly, this delay could mean harmful material is seen before the filter catches up.

Real-time, content-aware filters

Perform all the actions of content-aware filters, but more thoroughly and in real time, at the point the page is requested, ensuring that harmful AI-generated content is seen and blocked much more quickly.
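The key difference is *when* the analysis happens. A minimal sketch, with a simple keyword check standing in for a real classification engine: the page body is judged on what it says at the moment it is requested, not on a cached verdict from days earlier.

```python
# Sketch of real-time, content-aware filtering: the text returned for
# this request is analysed before it reaches the user. The marker set
# is an illustrative placeholder for a real classification engine.
HARMFUL_MARKERS = {"harmful-term"}

def realtime_filter(page_text: str) -> str:
    words = set(page_text.lower().split())
    return "block" if words & HARMFUL_MARKERS else "allow"
```

Because AI-generated pages can change between two visits, judging the live response rather than a stored copy is what closes the delay described above.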


Education settings should select a suitable filter through a process informed by risk assessments and the unique needs and preferences of their school, college or MAT. 


 


Why 100% real-time filtering is essential for AI

 

With AI, content is unpredictable, often unmoderated and appears online at speed. Without real-time analysis, harmful content can be missed - or blocked too late.

Real-time, content-aware filtering ensures:

  • Harmful content can be blocked the moment it appears
  • Teaching and learning can continue uninterrupted
  • Education settings stay aligned with UK filtering standards and safeguarding expectations

As the UK Safer Internet Centre (UKSIC) highlights, schools, colleges and MATs must understand how their filters handle “dynamically analysed” content in real time.

The Smoothwall Digital Safety Pyramid



Aligning your AI policy with your filtering

 

Your web filter should directly support your AI policy and help you manage access to AI tools safely and appropriately.

Consider the extent to which your filter can:

  • Control which AI tools can be used
    Block or allow access to specific tools depending on your setting’s rules.

  • Block high-risk AI platforms automatically
    Use pre-set categories to prevent access to known unsafe or inappropriate tools.


  • Set access rules based on who is using it and when
    For example, allow access for staff but not students, or restrict use during certain times of day.

These features make it easier to enforce your AI policy in a fair, consistent and effective way - without disrupting learning.
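Rules like these might be sketched as follows. The roles, tool names and hours are assumptions for illustration only, not a real filter's configuration format:

```python
# Illustrative sketch of role- and time-based access rules, such as
# "allow for staff, restrict students to the school day". Every rule
# below is a made-up example.
from datetime import time

RULES = [
    # (tool, role, allowed_from, allowed_until)
    ("ChatGPT", "staff", time(0, 0), time(23, 59)),
    ("ChatGPT", "student", time(9, 0), time(15, 30)),
]

def is_allowed(tool: str, role: str, now: time) -> bool:
    for t, r, start, end in RULES:
        if t == tool and r == role and start <= now <= end:
            return True
    return False
```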

Smoothwall Filter brings together all of these capabilities - including 100% real-time, content-aware filtering - as standard. It empowers schools, colleges and MATs to enforce their AI policies with confidence, respond to emerging risks instantly, and keep their students and staff safe online.

Learn more here

 


Web filtering plays a vital role, but it’s only one part of the picture

While web filtering can go a long way in helping to protect your staff and students against potentially harmful AI content, it’s important to note: no filter - regardless of how advanced - can eliminate every risk posed by AI tools. 

Filtering must therefore be part of a broader strategy that includes:

  • Clear AI usage policies
  • Risk assessments for new technologies
  • Staff and student training
  • Digital monitoring for online behaviours (more on this in the next article)