As AI platform Grok brings renewed national attention to nudification apps, our research highlights that this is a much broader issue, already impacting UK schools, colleges, and multi-academy trusts (MATs), and spreading far beyond a single platform.
Nudification apps use AI to generate fake, sexually explicit images by digitally altering photos of real people - often without their knowledge or consent. Tools like Grok, an AI chatbot integrated into the platform X, are part of a wider wave of generative AI technologies that can create highly realistic content quickly and with little technical skill.
With students actively using these tools to create imagery of their peers, the reality for educators is complex. AI-generated content is evolving rapidly - and this requires a faster, more coordinated response across the safeguarding ecosystem.
To better understand emerging digital risks, Qoria, Smoothwall’s parent company, surveyed more than 500 education decision makers across the UK in two studies. The findings reveal widespread concern, and real-world incidents taking place amongst students.
90.5% of educators are concerned about the potential for online predators to use AI to groom students.
26.5% identified instances where students themselves had used AI apps or tools to create child sexual abuse material (CSAM) or nude content such as deepfakes.
9.5% have experienced a case of a student creating a fake sexually explicit image of a classmate.
What's clear is that AI-generated explicit imagery is no longer hypothetical - it is being used peer-to-peer.
At the same time, educators are facing a broader rise in online harms, with the data pointing to an increase in students experiencing social media obsession (79%), online bullying and harassment (78%), and gaming addiction (60%). Students holding harmful or toxic views (60%) and forming unhealthy attachments to AI chatbots (29%) also ranked highly.
More than two-thirds (68%) of schools said they experienced online safety issues on a daily or weekly basis.
In many cases, the pace of technological change is outstripping educators’ capacity to respond.
64% say limited training and lack of time are their biggest barriers when managing AI-related risks and the sharing of explicit content. 89% believe digital risks are evolving faster than their school’s ability to keep up.
Blocking individual apps is not a sustainable solution and the conversation cannot centre on one app or one headline. AI tools capable of generating realistic fake imagery are increasingly accessible, easy to use, and difficult to track.
AI-generated harm can be difficult to detect and may initially be dismissed as low-risk or “just for fun” behaviour. However, the impact on students can be severe, particularly when explicit imagery is involved.
Schools should be alert to:
Sudden circulation of manipulated or explicit imagery amongst peer groups
Increased use of AI image-editing or “nudification” apps
Behavioural changes, distress or withdrawal linked to image sharing
Students using language that normalises the creation of fake explicit content as ‘jokes’
Early identification is critical - the sooner concerns surface, the more effectively educators can intervene and support those affected.
The findings reinforce the need for a collaborative, multi-layered safeguarding approach. Ongoing digital literacy education is essential to help students understand issues of consent, manipulation and the misuse of AI tools.
At the same time, monitoring and filtering technology should be in place to surface emerging risks early, enabling timely intervention before harm escalates. Regular staff training is equally important, ensuring educators feel confident recognising and addressing AI-related harms as they evolve. Strengthening open communication between education institutions and parents further reinforces this approach, creating shared oversight and clearer pathways for early support.
AI is not inherently harmful - but without robust safeguards, education and collaboration, it can amplify risks at unprecedented scale.
Smoothwall remains committed to supporting schools with the tools and insight needed to stay ahead of evolving digital risks, and to help students thrive safely in an increasingly AI-driven world.
For a deeper look at the risks explored in this article, along with practical guidance on how to respond, see our reports See the Signs and AI-enabled CSAM, which bring together insights from schools across the UK alongside expert recommendations for managing emerging digital harms.