
Department for Education Guidance on Generative AI: A Summary for SLTs & DSLs

Written by Smoothwall | May 29, 2025

Whether education settings are ready or not, both students and teachers are now using AI tools. While generative AI has many benefits, it also presents significant safeguarding challenges. To support education settings in using AI tools safely, responsibly and effectively, the Department for Education (DfE) released the policy paper Generative artificial intelligence (AI) in education.

This article provides a summary of key takeaways from the paper that are relevant to senior leadership teams (SLTs) and designated safeguarding leads (DSLs), and details further resources that can help with developing effective AI policies.

What is generative AI?
Generative AI is a form of AI that is able to produce content in the form of text, images and video at speeds far beyond human capabilities. Examples of generative AI tools include ChatGPT, Gemini, Microsoft Copilot and DeepSeek. They can be used for a range of tasks including content creation, problem-solving, research, coding and basic admin.


The DfE’s stance on generative AI in education

The DfE’s paper strikes a balance, highlighting the positive transformational impact AI can have on the education sector while emphasising the need for every setting to establish clear policies to regulate its application.

In terms of opportunities, the DfE sees “more immediate benefits and fewer risks from teacher-facing use of generative AI.” In particular, it points to AI’s ability to remove some of the administrative burden on staff, allowing them to focus more on teaching.

The acknowledged risks posed by generative AI use in schools include infringements on data privacy and intellectual property rights, increased access to inappropriate or harmful content, and the undermining of the integrity of formal assessments. 

In response to this complex picture, the DfE directs schools and colleges to decide whether AI tools will be permitted in their setting and, if so, to set out the rules governing their appropriate use.


What to consider when forming an AI policy

While UK schools and colleges are permitted to make their own decisions on the suitable use of AI tools, they still need to abide by their legal and statutory obligations, including Keeping children safe in education and data protection laws.

AI policies in education settings should cover use, ethics, training and safeguarding. Where appropriate, they should also be cross-referenced with other policies, such as the staff code of conduct and acceptable use policies for students.

The details

AI tools vary considerably in terms of their capabilities, benefits and risks, and the technology is constantly evolving. As a result, school AI policies should be specific about which tools are permitted and for what purpose. For example:

  • Can they only be used by teachers?
  • Can they only be used for administrative tasks?
  • Can they be used by pupils but only for certain topics?
  • Is their use limited to certain year groups?


Safety

Safety is the main priority when forming AI policies, which is why DSLs need to be involved in policy development. Keep in mind that generative AI tools carry a number of safeguarding implications, including plagiarism, misinformation, deepfakes and child sexual abuse material (CSAM).

For this reason, any AI tools approved for use should undergo full risk assessments. As well as evaluating the potential benefits and risks, these assessments should include plans for mitigating unauthorised use. Policies should also be specific about intended use, with measures in place to manage how approved tools are applied.

DSLs can utilise digital monitoring systems to identify potential risks or breaches associated with the use of AI tools. 


Responsible use

Where AI tools are permitted, policies must clearly define what types of information can and cannot be shared with them. Lesson plans, essays and homework are all considered the intellectual property of the original author. As a result, anyone who shares such content with an AI tool without the author’s permission risks infringing intellectual property rights.

Personal data, whether relating to the organisation or to individual students or staff, should not be entered into AI tools without the prior agreement and understanding of those affected. The DfE recommends that personal data is not used with these tools at all, to ensure adherence to data protection laws.

AI tools have their own sets of terms and conditions, which may include age restrictions. These should be consulted and incorporated into any school policies relating to their use. 


How to get started on your AI policy

The process of establishing an AI policy can seem overwhelming, but putting regulations in place is essential, as AI tools are already being used in many settings. 

Start with a basic foundation and view it as an evolving policy that can be updated as new information or technology is introduced. 

The following steps may help:

  • Discuss policy development as an SLT and get DSLs involved early
  • Review your setting’s current AI usage (both official and unofficial)
  • Work with IT staff to assess any cyber security implications
  • Stay informed by regularly checking for new AI-related guidance

As with any school policy, it is important that staff and students are made aware of AI policies, the reasoning behind them, and the consequences of not following established procedures.

For more information on AI in education and related policies, schools and colleges can refer to the DfE’s full policy paper and the further guidance it signposts.
