AI technology is infiltrating every sector of our world - and education is no exception. The growing availability of generative AI tools presents schools, colleges and multi-academy trusts (MATs) with new possibilities, but also new challenges. In response, the Department for Education (DfE) guides settings to combine clear AI policies with effective digital safeguarding to manage the use of AI tools.
This article outlines key DfE guidelines on generative AI, and explores how the right web filtering is essential to support AI policies and protect students from the risks associated with AI tools.
What is generative AI?
Generative AI is a type of AI technology that can rapidly produce content in the form of text, images and video. Tools such as ChatGPT, Microsoft Copilot and Google Gemini are examples of generative AI chatbots, which create content in response to user prompts.
Chatbots are powered by large language models (LLMs) - systems trained on huge amounts of data, enabling them to interpret and generate human-like content. They can be used for tasks such as:
- Problem-solving
- Content creation
- Summarising complex material
- Coding
- Research
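Each of these tasks is driven by a natural-language prompt. For readers curious about the mechanics, here is a minimal sketch of prompting a chatbot programmatically using the OpenAI Python SDK; the model name is illustrative, and it assumes an API key has been configured.

```python
# A minimal sketch of prompting a generative AI chatbot programmatically,
# using the OpenAI Python SDK (pip install openai). The model name is
# illustrative, and an OPENAI_API_KEY environment variable is assumed.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "user", "content": "Summarise the water cycle in two sentences."}
    ],
)

# The LLM generates human-like text in response to the prompt.
print(response.choices[0].message.content)
```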
The technology behind generative AI is constantly evolving, and new AI tools are emerging all the time.
Department for Education (DfE) guidance on generative AI
The DfE policy paper on generative AI puts the decision of whether to allow AI tools in the hands of schools and colleges. Where an education setting does approve the use of certain AI tools, the paper states that they must be applied in a safe, responsible and effective way.
An example of effective application could be teachers employing AI tools to complete administrative tasks - allowing them to focus more of their time on teaching.
AI policies
Before permitting the use of AI tools, schools and colleges are urged to establish clear policies around their application and management. This involves deciding which AI tools are allowed, and whether their use will be limited to staff or opened up to students as well.
AI policies should be informed by thorough risk assessments, taking into consideration:
- Potential safeguarding risks
- GDPR and data privacy laws
- Intellectual property implications
In settings that permit students to use generative AI, there must be “appropriate safeguards in place, such as close supervision and the use of tools with safety and filtering and monitoring features.”
Filtering can be used to permit or block access to specific AI tools, while monitoring allows education settings to identify the potential risks that can manifest when using these tools.
Further guidance on forming AI policies can be found in Department for Education Guidance on Generative AI: A Summary for SLTs & DSLs.
How web filters manage generative AI content
Web filtering is a crucial component in ensuring education settings adhere to DfE guidance on generative AI - but not all filters are built for the job.
What is web filtering?
Web filters enable schools to manage access to online content, helping to protect students from exposure to harmful or inappropriate material. There are different approaches to web filtering in schools:
| Filter type | How it works |
| --- | --- |
| DNS filters | Allow or restrict access to websites based on assessments of the domain name. |
| URL filters | Allow or restrict access to web pages based on assessments of the full URL. |
| Content-aware filters | Allow or restrict access to web pages based on assessments of the content, context and construction of the page. |
| Real-time, content-aware filters | Perform all the actions of content-aware filters, but more thoroughly and in real time, at the point the page is requested. |
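To make the distinction concrete, the sketch below (simplified, and not any vendor's actual implementation) shows how much of a request each filter type can inspect before making a decision; the policy lists and the classify() stand-in are hypothetical.

```python
# A simplified sketch (not any vendor's implementation) of how much of a
# request each filter type can inspect before deciding to allow or block.
# The policy lists and the classify() stand-in are hypothetical.
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"blocked-tool.example"}   # hypothetical policy list
BLOCKED_URL_TERMS = {"proxy", "unblocker"}   # hypothetical keywords
BLOCKED_CATEGORIES = {"adult", "self-harm"}  # hypothetical categories


def dns_filter(url: str) -> bool:
    """Sees only the domain name: the whole site is blocked or allowed."""
    return urlparse(url).hostname not in BLOCKED_DOMAINS


def url_filter(url: str) -> bool:
    """Sees the full URL, including the path and any query string."""
    return not any(term in url.lower() for term in BLOCKED_URL_TERMS)


def classify(page_text: str) -> str:
    """Stand-in for a real content-classification engine."""
    return "adult" if "explicit" in page_text.lower() else "general"


def content_aware_filter(page_text: str) -> bool:
    """Sees the content of the page itself and blocks by category."""
    return classify(page_text) not in BLOCKED_CATEGORIES
```

The further down the table a filter sits, the more context it has about what the student will actually see.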
Schools should select a suitable filter through a process informed by risk assessments and the unique needs and preferences of the setting.
The challenge of filtering AI content
Generative AI produces content instantly, so web filters must work quickly and accurately to recognise its potential risks. This is a problem for DNS and URL filters. With ChatGPT, for example, a DNS filter can only see the domain name, leaving schools with a binary choice: block or allow the entire website, with nothing in between.
URL filters can see the full web address, which helps with search engines like Google, where the user's query appears in the URL and gives at least some indication of the type of results that may be returned. With ChatGPT and similar tools, however, the user's prompt does not appear in the URL, leaving the filter with no indication of the type of content being sought.
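Here is a concrete illustration of the difference, using Python's standard library and illustrative URLs: a search engine exposes the user's query in the URL, while a chatbot conversation URL is opaque.

```python
# What a URL filter actually "sees", using Python's standard library and
# illustrative URLs. A search engine exposes the user's query in the URL;
# a chatbot conversation URL reveals nothing about the prompt.
from urllib.parse import parse_qs, urlparse

search_url = "https://www.google.com/search?q=science+revision+games"
chatbot_url = "https://chatgpt.com/c/7f3a2b1c"  # opaque conversation ID

query = parse_qs(urlparse(search_url).query).get("q", [""])[0]
print(query)   # -> "science revision games": the filter can gauge intent

prompt = parse_qs(urlparse(chatbot_url).query)
print(prompt)  # -> {}: nothing in the URL hints at the content sought
```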
Content-aware filters assess the actual content of the page being requested, putting them in a stronger position to identify the category of material produced by generative AI and allow or block it accordingly. However, if the filter cannot do this in real time, there will be a delay while the details of the page are sent to the vendor's back office for assessment.
During this time (which can range from 2 to 24 hours), the web page will either be allowed, putting students at potential risk, or blocked, which could impact teaching and learning.
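In rough pseudocode terms, the trade-off looks like this; it is only a sketch, with a stubbed classifier and hypothetical policy defaults rather than any real product's logic.

```python
# A rough sketch contrasting delayed, back-office categorisation with
# real-time assessment. The classifier stub and policy defaults are
# hypothetical, not any real product's logic.
BLOCKED_CATEGORIES = {"adult", "self-harm"}  # hypothetical policy
DEFAULT_ALLOW = True                         # the risky interim default
KNOWN_CATEGORIES: dict[str, str] = {}        # vendor's category database


def classify(page_text: str) -> str:
    """Stand-in for a real content-classification engine."""
    return "adult" if "explicit" in page_text.lower() else "general"


def delayed_filter(url: str, page_text: str) -> bool:
    """Non-real-time filtering: an unknown page is queued for later
    categorisation, and a blanket default applies in the meantime."""
    category = KNOWN_CATEGORIES.get(url)
    if category is None:
        # In a real deployment the page details go to the vendor's back
        # office, and the verdict arrives hours later.
        KNOWN_CATEGORIES[url] = classify(page_text)
        return DEFAULT_ALLOW  # or a blanket block: both have costs
    return category not in BLOCKED_CATEGORIES


def real_time_filter(page_text: str) -> bool:
    """Real-time filtering: the page is analysed at the moment it is
    requested, leaving no window between publication and assessment."""
    return classify(page_text) not in BLOCKED_CATEGORIES
```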
The age of AI demands real-time filtering
Generative AI tools produce content dynamically, in real time. The filters most capable of recognising and managing this type of content work in the same way. Real-time, content-aware filters assess web content at the point of request. There is no delay period - as soon as harmful content goes live, access to it can be restricted.
Regulators acknowledge that only web filters with real-time capabilities can adequately address the risks posed by generative AI. The UK Safer Internet Centre’s (UK SIC) Appropriate Filtering guidelines explain that, to address the risks of AI-generated content, “Schools should understand the extent to which (...) content is dynamically analysed as it is streamed in real time to the user and blocked.” In its Filtering and Monitoring Standards, the DfE confirms that a web filter’s inability to filter content in real time is considered a “technical limitation.”
It should be noted that the constantly evolving nature of generative AI tools means filtering alone cannot protect students from every associated risk, including some harmful or inappropriate content produced by AI chatbots. Real-time filtering should be combined with risk assessments and robust digital monitoring to prioritise student safety.
Control access to AI tools with Smoothwall Filter
Smoothwall Filter is the only 100% real-time filter in UK education. Its dynamic, targeted approach offers advanced protection for students and facilitates effective enforcement of AI policies by managing access to generative AI tools.
- The “Unsafe AI Chat” category ensures that specific AI tools known to be dangerous are automatically blocked.
- The "AI Tools" category enables settings to quickly and easily block access to any AI tools not permitted for use by school policy.
- Content-aware filters provides the flexibility to allow or deny access to specified AI tools based on factors such as year group or time of day.
To learn more or book a short demo of Smoothwall Filter for your setting, contact enquiries@smoothwall.com. We're ready to help.

Explore the power of real-time web filtering
Learn more about the most advanced form of web filtering by downloading our free guide: A Complete Guide to Real-Time, Content-Aware Web Filtering.
Download now
Essential reads hand-picked for you...
- DSL Insights: What is DeepSeek and How Does it Pose a Risk to Students?
- Emerging Challenges in Digital Safeguarding: Machine Drift
- Web Filtering vs Digital Monitoring - What's the Difference?