The UK Safer Internet Centre (UK SIC) has released its updated definitions of appropriate filtering and monitoring for education settings. These guidelines are designed to help schools and providers understand, “in conjunction with their completed risk assessment, what should be considered as ‘appropriate’ filtering and monitoring.” This article summarises the key changes for 2025.
Background
The UK SIC has been publishing its filtering and monitoring definitions since 2016. They are a valuable resource for education settings, providing guidelines on how schools and colleges can maintain effective protection for students against online risks.
The definitions are cited in multiple Department for Education (DfE) documents, including Keeping Children Safe in Education, and often shape future DfE guidance.
Appropriate filtering - key changes
Update 1: The Illegal Online Content section has been expanded to include specific categories, in order to align with the Online Safety Act.
Detail: The expanded definition now includes content relating to intimate image abuse, controlling or coercive behaviour, extreme sexual violence and pornography, fraud, racially or religiously aggravated public order offences, inciting violence, illegal immigration and people smuggling, sexual exploitation, selling illegal drugs and promoting or facilitating suicide. This is in addition to the existing definition which covered child sexual abuse and terrorism.
Update 2: The Illegal Online Content section clarifies that schools and colleges cannot disable IWF or CTIRU blocklists.
Detail: The guidance now specifies that in addition to confirming that these blocklists are included with their filtering system, schools and colleges must ensure that nobody (including system administrators) can disable these blocklists or remove any items from them.
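As an illustration of what “cannot be disabled” means in practice, the Python sketch below shows a filtering decision in which the IWF/CTIRU checks run before, and independently of, any locally configurable rules. The blocklist contents and function names are hypothetical placeholders, not a real vendor API.

```python
# Mandatory blocklists: supplied and updated by the filtering provider,
# never editable or disableable locally.
MANDATORY_BLOCKLISTS = {
    "IWF": frozenset({"blocked-example-1.invalid"}),
    "CTIRU": frozenset({"blocked-example-2.invalid"}),
}

def is_blocked(url_host: str, admin_allow_list: set[str]) -> bool:
    """Return True if the request must be blocked."""
    # Mandatory checks come first and cannot be overridden: even a host
    # on the local allow list is still blocked if a blocklist names it.
    for hosts in MANDATORY_BLOCKLISTS.values():
        if url_host in hosts:
            return True
    # Only after the mandatory checks do admin-configurable rules apply.
    if url_host in admin_allow_list:
        return False
    return _blocked_by_category_rules(url_host)

def _blocked_by_category_rules(url_host: str) -> bool:
    # Placeholder for the school's configurable category-based policy.
    return False
```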
Update 3: The Inappropriate Online Content section now includes additional ‘Primary Priority Content’ and ‘Priority Content’ (as described by the Online Safety Act) categories.
Detail: Schools and colleges should be satisfied that their filtering system manages the following content:
- Harmful content - that which is bullying, abusive or hateful, depicts or encourages serious violence or injury, or promotes dangerous stunts and challenges, including exposure to harmful substances.
- Mis/disinformation - that which promotes or spreads false or misleading information intended to deceive, manipulate or harm. This includes content undermining trust in factual information or institutions.
- Violence Against Women and Girls (VAWG) - that which promotes or glorifies abuse, coercion, or harmful stereotypes targeting women and girls, including content that normalises gender-based violence or perpetuates misogyny.
This is designed to comply with the Online Safety Act and ensure that education settings are addressing evolving online harms.
Update 4: In the Filtering System Features section, the description of Contextual Content Filtering has been expanded.
Detail: It clarifies that filtering systems must be able to analyse AI-generated and user-generated content in real time - for example, dynamically filtering responses from ChatGPT.
This reflects the increasing use of generative AI and the challenges of filtering dynamic, evolving content.
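To make the idea of real-time contextual filtering concrete, here is a minimal sketch, assuming a hypothetical classify() helper, of how streamed AI chat output might be checked chunk by chunk before it reaches the student. Real systems use trained classifiers rather than the placeholder rule shown here.

```python
from typing import Iterable, Iterator

BLOCKED_CATEGORIES = {"self-harm", "extreme-violence"}  # invented labels

def classify(text: str) -> set[str]:
    """Stand-in for a real contextual content classifier (assumption:
    real systems use trained models, not keyword matching)."""
    return {"self-harm"} if "harmful phrase" in text.lower() else set()

def filter_stream(chunks: Iterable[str]) -> Iterator[str]:
    """Relay an AI chat response chunk by chunk, stopping the moment the
    accumulated text falls into a blocked category."""
    seen = ""
    for chunk in chunks:
        seen += chunk
        if classify(seen) & BLOCKED_CATEGORIES:
            yield "[response blocked by filtering policy]"
            return
        yield chunk
```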
Update 5: In the Filtering System Features section, the description of Deployment has been revised.
Detail: In light of advances in technical and security standards, the guidelines acknowledge that relying on network-level filters alone may be increasingly challenging and less effective for schools and colleges. Settings are now advised to consider a hybrid approach that combines network-level filtering with device-level configurations, to ensure filtering is effective across devices and locations.
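A purely illustrative sketch of the hybrid model the guidance describes: an on-device agent covers managed devices wherever they are, while network-level filtering covers remaining traffic on site. The decision logic below is invented for this example.

```python
def enforcement_point(on_school_network: bool, managed_device: bool) -> str:
    """Pick where filtering is enforced so coverage holds across
    devices and locations (the stated aim of the hybrid approach)."""
    if managed_device:
        # The agent travels with the device: classroom, home or off site.
        return "device-level agent"
    if on_school_network:
        # Guest and unmanaged devices are still caught at the gateway.
        return "network-level filter"
    return "outside the scope of the school's systems"
```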
Update 6: In the Filtering System Features section, the description of Identification has been expanded.
Detail: It now specifies that filtering systems should identify both users and devices to apply effective user-level filtering. This can support the application of age-appropriate filtering, personalised restrictions, and better safeguarding through user-level tracking.
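The sketch below illustrates why identifying the user (not just the device) matters: the same request can resolve to different, age-appropriate policies. All user IDs, groups and policy tiers are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user_id: str     # who is browsing (e.g. from directory sign-in)
    device_id: str   # which device, for attribution and auditing
    url_host: str

# Stand-ins for a directory lookup and per-group filtering policies.
USER_GROUPS = {"pupil_y7": "key_stage_3", "staff_01": "staff"}
POLICIES = {
    "key_stage_3": {"social_media": "block", "news": "allow"},
    "staff": {"social_media": "allow", "news": "allow"},
}

def policy_for(request: Request) -> dict:
    """Resolve an age-appropriate policy from the identified user; the
    device identity still matters for attribution on shared hardware."""
    group = USER_GROUPS.get(request.user_id, "key_stage_3")  # safe default
    return POLICIES[group]
```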
Update 7: In the Filtering System Features section, Safeguarding Case Management Integration has been added as a feature.
Detail: This new section emphasises the ability of filtering systems to integrate with safeguarding and wellbeing platforms, to improve contextual understanding of activities flagged as a potential risk.
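One common shape for this kind of integration is an event feed from the filter into the safeguarding platform. The sketch below posts a flagged event to a placeholder endpoint; the URL and payload schema are entirely hypothetical, and a real integration would also need authentication, retries and error handling.

```python
import json
from urllib import request as urllib_request

def forward_to_case_management(event: dict) -> None:
    """POST a flagged event to a (placeholder) case management API."""
    payload = json.dumps({
        "user_id": event["user_id"],
        "category": event["category"],    # e.g. a flagged content category
        "url": event["url"],
        "timestamp": event["timestamp"],
    }).encode("utf-8")
    req = urllib_request.Request(
        "https://safeguarding.example/api/events",  # placeholder URL
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib_request.urlopen(req)
```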
Update 8: New section added: Generative AI Technologies - covering schools and colleges’ risk assessments for AI tools.
Detail: It encourages schools and colleges to assess generative AI platforms before approving their use by students and staff, taking into consideration:
- The extent to which filtering systems block AI content in real time.
- Any built-in safety features of AI tools and data protection implications.
- The need for a policy around the use of generative AI systems.
- Assessment of the school/college’s ability to generate reports of the use of AI tools within their setting.
Links to further government guidance on adopting AI tools in education settings are provided.
Appropriate monitoring - key changes
Update 1: The Monitoring Content section has been expanded to include the same Illegal Online Content and Primary Priority, Priority or Inappropriate Content definitions used in the filtering guidelines.
Detail: See updates 1 and 3 of the Appropriate Filtering section above.
Update 2: The Monitoring Strategy/System Features section now includes a segment on Identification.
Detail: This aligns with the guidelines for appropriate filtering, specifying that monitoring systems should identify users and devices in order to attribute activity and enable the application of appropriate configurations and restrictions for individual users.
Update 3: The Monitoring Strategy/System Features section now includes a segment on Mobile and App Content.
Detail: This aligns with the guidelines for appropriate filtering, emphasising that schools and colleges should be clear about the capability of their monitoring system to operate across mobile devices and app content. Providers must be clear about any system limitations in this area, including any configuration or additional components required to achieve full coverage.
Update 4: In the Monitoring Strategy/System Features section, the segment on Remote Monitoring has been expanded.
Detail: It clarifies that monitoring should focus on school-owned and managed devices. In the case of shared devices, schools must confirm that users log in individually. This ensures that monitoring systems apply restrictions and configurations based on user profiles, for improved safeguarding.
Update 5: The Monitoring Strategy/System Features section now includes a segment on Safeguarding Case Management Integration.
Detail: This aligns with the guidelines for appropriate filtering, focusing on the ability of monitoring systems to integrate with safeguarding and wellbeing platforms, to improve contextual understanding of activities flagged as a potential risk.
Update 6: New section added: Generative AI Technologies - covering schools and colleges’ risk assessments for AI tools.
Detail: This aligns with the guidelines for appropriate filtering, encouraging schools and colleges to assess generative AI platforms before approving their use by students and staff, taking into consideration:
- The extent to which AI content can be monitored in real time.
- Any built-in safety features of AI tools and data protection implications.
- The need for a policy around the use of generative AI systems.
- Assessment of the school/college’s ability to generate reports of the use of AI tools within their setting.
Links to further government guidance on adopting AI tools in education settings are provided.
What should schools, colleges and MATs do next?
The UK SIC’s filtering and monitoring definitions serve as a reminder that education settings cannot afford to have a “set and forget” attitude when it comes to these systems.
The DfE already requires schools and colleges to review their filtering and monitoring provision “at least once every academic year”. The release of these updates is an opportune moment to carry out that review - particularly as the UK SIC’s definitions give an indication of what is to come in future updates to DfE guidance.
To view the definitions in full, visit the UK SIC website. For further information on the wider legislative changes influencing these updates, schools and colleges are encouraged to read the Online Safety Act.
Smoothwall provides schools, colleges and MATs with advanced digital monitoring and UK SIC accredited web filtering. To book a short demo of either solution for your setting, contact us at enquiries@smoothwall.com. We're ready to help.