New online risks impacting students are emerging all the time. As a result, Department for Education (DfE) guidance is regularly updated to make schools, colleges and MATs aware of the latest threats they need to address.
In this series, Smoothwall provides quick guides to digital risks that have recently appeared in DfE guidance, to help educators navigate the changing safeguarding landscape.
This article covers AI deepfakes. Discover what they are, how they pose a risk to students, and what settings can do to mitigate the threat and maintain compliance with DfE guidelines.
Relationships Education, Relationships and Sex Education (RSE) and Health Education Statutory Guidance [for introduction 1st September 2026]
Secondary relationships and sex education curriculum content, page 15:
“[Students should learn] about the prevalence of deepfakes, including videos and photos, how deepfakes can be used maliciously as well as for entertainment, the harms that can be caused by deepfakes and how to identify them.”
Teaching about the law, page 36:
“Pupils should be made aware of the relevant legal provisions (...) relating to: online behaviours including image and information sharing (including sexual imagery, youth-produced sexual imagery, nudes, etc, and including AI-generated sexual imagery and deepfakes).”
Deepfakes are images, videos or audio files that have been digitally manipulated using AI to imitate the likeness of a particular person - often to make it appear as though that person said or did something they never did in reality. They are called "deepfakes" because the technique relies on deep learning, the technology behind this form of AI.
Techniques such as face-swapping, lip-sync alignment, voice cloning and motion transfer can make AI deepfakes highly convincing.
AI deepfakes can be used maliciously to misrepresent, humiliate or threaten people. Within schools and colleges, their creation can be a form of bullying.
For example, Student A may use a video of Student B to produce a clip in which Student B appears to be making outrageous or embarrassing statements. There are also known cases of students using deepfakes to target teachers.
Deepfakes can cause significant emotional distress and have a long-term impact on a person’s digital footprint - for instance, if a video of them appearing to make racist comments is circulated on social media, it can damage their future job prospects.
Students who create deepfakes also risk breaking the law. Creating or sharing images or videos that depict minors in a manner that could be deemed sexual, including AI-generated imagery, is an offence under the Protection of Children Act 1978, and perpetrators may face criminal investigation and prosecution.