New online risks impacting students are emerging all the time. As a result, Department for Education (DfE) guidance is regularly updated to make schools, colleges and MATs aware of the latest threats they need to address.
In this series, Smoothwall provides quick guides to digital risks that have recently appeared in DfE guidance, to help educators navigate the changing safeguarding landscape.
This article covers AI deepfakes. Discover what they are, how they pose a risk to students, and what settings can do to mitigate the threat and maintain compliance with DfE guidelines.
AI deepfakes in DfE guidance
Relationships Education, Relationships and Sex Education (RSE) and Health Education Statutory Guidance [for introduction 1st September 2026]
Secondary relationships and sex education curriculum content, page 15:
“[Students should learn] about the prevalence of deepfakes, including videos and photos, how deepfakes can be used maliciously as well as for entertainment, the harms that can be caused by deepfakes and how to identify them.”
Teaching about the law, page 36:
“Pupils should be made aware of the relevant legal provisions (...) relating to: online behaviours including image and information sharing (including sexual imagery, youth-produced sexual imagery, nudes, etc, and including AI-generated sexual imagery and deepfakes).”
What are AI deepfakes?
Deepfakes are created when AI is used to digitally manipulate images, videos or audio files to mimic a particular person's likeness - often to make it appear as though the person said or did something they never did in reality. They are called “deepfakes” because they are produced using the deep learning technology that underpins certain forms of AI.
By utilising features such as face-swapping, lip-sync alignment, voice cloning and motion transfer, AI deepfakes can be highly convincing.
How do they pose a risk?
AI deepfakes can be used maliciously to misrepresent, humiliate or threaten people. Within schools and colleges, their creation can be a form of bullying.
For example, Student A may use a video of Student B to produce a clip in which Student B appears to be making outrageous or embarrassing statements. There are also known cases of students using deepfakes to target teachers.
Deepfakes can cause significant emotional distress and have a long-term impact on a person’s digital footprint - for instance, if a video of them appearing to make racist comments is circulated on social media, it can harm their future job prospects.
Students who create deepfakes are also at risk of breaking the law. Creating or sharing images or videos that depict minors in a manner that could be deemed sexual is an offence under the Protection of Children Act 1978, and perpetrators may face criminal investigation and prosecution.
What can be done to protect students from AI deepfakes?
- Establish clear AI policies
Schools and colleges should consider establishing clear AI policies that detail, among other things, which AI tools (if any) are permitted in the setting, who is allowed to use them, and for what purpose. These policies can prohibit the creation of deepfakes and outline the penalties for breaching such rules.
- Educate students on the risks of AI deepfakes
Digital safety lessons should incorporate discussion of the harms caused by specific risks like AI deepfakes. Students need to be aware of the consequences of creating deepfakes, as well as how and where to report any deepfakes they encounter.
- Prevent access to AI tools used to create deepfakes
Web filters can restrict access to specific AI tools known to create deepfakes. When new tools or sites that support deepfakes are identified, blocklists can be updated to maintain protection and minimise the potential for students to reach such tools (a simplified illustration of this approach follows the list).
- Identify potential cases of AI deepfakes early
Digital monitoring systems can identify instances of AI tools being used to create deepfakes. By registering keystrokes and flagging words that indicate a safeguarding concern, monitoring systems may notify staff when students discuss or search for deepfake software in a bullying context (a second illustration follows the list).
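For readers who want a concrete picture of blocklist-based filtering, here is a minimal sketch in Python. The domain names and the `is_blocked` helper are hypothetical examples, not part of any particular product; real web filters maintain and update categorised blocklists centrally rather than relying on a hand-written set.

```python
# Minimal sketch of blocklist-based filtering (hypothetical domains).
from urllib.parse import urlparse

# Hypothetical deepfake-tool domains; real filters ship curated,
# regularly updated category lists rather than a hard-coded set.
DEEPFAKE_TOOL_DOMAINS = {
    "example-faceswap.app",
    "example-voiceclone.io",
}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host is a blocked domain or one of its subdomains."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in DEEPFAKE_TOOL_DOMAINS)

print(is_blocked("https://studio.example-faceswap.app/upload"))  # True
print(is_blocked("https://www.bbc.co.uk/bitesize"))              # False
```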
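Keyword flagging in a monitoring system can be pictured in a similar way. The term list and the `flag_captured_text` function below are hypothetical simplifications; real monitoring systems combine far larger term libraries with contextual analysis and human review before an alert reaches staff.

```python
# Minimal sketch of keyword-based flagging (hypothetical term list).

# Hypothetical terms that might indicate deepfake-related activity.
FLAGGED_TERMS = ["deepfake", "face swap", "voice clone"]

def flag_captured_text(text: str) -> list[str]:
    """Return any flagged terms found in captured text, e.g. a typed search."""
    lowered = text.lower()
    return [term for term in FLAGGED_TERMS if term in lowered]

hits = flag_captured_text("how to make a Deepfake video of my teacher")
if hits:
    print(f"Flag for safeguarding review: matched {hits}")
```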
Essential Resources to Help You Navigate AI with Confidence
Schools and colleges can find further information on AI, including Department for Education requirements and step-by-step guides to building AI policies, on our new resource page: Navigating AI with Confidence.