Smoothwall Insights

The ‘Jekyll and Hyde’ problem of AI in schools

Written by Smoothwall | Jul 11, 2025 4:03:05 PM

There’s no question that artificial intelligence is transforming the education sector – its fingerprints are now on everything from lesson plans to the work students turn in. It has tremendous capacity to do good and to improve efficiency, but the speed at which it has woven itself into the fabric of school life has outpaced our preparedness to manage its risks.

Teachers are reporting issues arising from students’ use of AI, ranging from plagiarism to safeguarding concerns. The education sector now faces a Jekyll and Hyde situation: how can it embrace the undeniable benefits of AI without exposing students to these risks?

This article unpicks four of the key challenges schools are facing as the use of AI increases, and the potential risks of its unchecked adoption.

AI hallucinations

While much of the conversation around AI in education has focused on plagiarism, a growing concern is what researchers are calling ‘AI hallucinations’ – instances in which language models simply invent information.

In one test, the hallucination rates of newer AI systems were found to be as high as 79%. Not only is content submitted by students increasingly plagiarised, but much of it is also inaccurate. The risk, therefore, is about much more than ‘cheating’ or ‘laziness’ – students are absorbing misinformation, and potentially spreading it without realising.

Teachers are already seeing the effects. Research from the National Literacy Trust found that over a third of teachers (38%) were worried about students’ use of AI, and half of that group believed it has the potential to stop children thinking for themselves. A further study found that 84% of teachers have not changed the way they assess their students’ work, despite the prevalence of AI.

Whilst AI tools can produce convincing, often well-written content, the models behind them rely on mathematics and probability – meaning the information they present is not independently verified as true or false. They do not ‘know’ what is true or false, only what is ‘most likely’ to come next.
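
To make that concrete, here is a deliberately simplified sketch of next-word prediction. The prompt, candidate words, and probabilities below are all invented for illustration – they do not come from any real model – but the mechanism is the point: the highest-probability continuation wins, whether or not it is true.

```python
# Toy sketch of next-word prediction (all numbers invented for illustration).
# A language model scores candidate continuations by probability alone;
# nothing in this process checks whether the chosen word is actually true.

prompt = "The Battle of Waterloo was fought in the year"

# Hypothetical learned probabilities for the next word.
next_word_probs = {
    "1815": 0.38,  # the true date
    "1805": 0.45,  # plausible-sounding, but false
    "1915": 0.17,
}

# Greedy decoding: pick the single most likely continuation.
best = max(next_word_probs, key=next_word_probs.get)
print(prompt, best)  # confidently completes the sentence with "1805"
```

A real model works over tens of thousands of tokens and far richer context, but the same property holds: fluency and confidence are by-products of likelihood, not of fact-checking.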


Built-in bias

Artificial intelligence is only as good as the data it is trained on. Many of the tools in use today carry biases from that data, which means they can exacerbate and amplify existing cultural stereotypes and prejudices. Whether it is consistently defaulting to male images or pronouns for certain tasks or jobs, or failing to recognise cultural references or people living with disability, AI can subtly – and sometimes overtly – skew students’ world view.

These biases might show up in seemingly benign ways at first, but they ultimately shape how students perceive the world around them. Schools are formative settings that prioritise fairness and representation – values that an unexamined AI co-author can easily overshadow.

Teachers are right to be concerned about these built-in biases, and must be equipped to challenge and correct them.

Explicit content


Perhaps most concerning of all is AI’s potential to expose students to harmful or explicit content through tools that are not properly moderated.

Qoria’s research has shown that 64% of teachers lack the training, knowledge, and time needed to address the risks posed by AI-generated explicit content and online exploitation, while 91% are deeply concerned about AI’s role in exploitation and grooming – particularly with the rise of AI-powered nudification apps.

These findings raise profound safeguarding concerns for schools. AI adds a complex new layer to the protective technologies – such as the filtering and monitoring of online content – that schools rely on to keep students safe. Because AI-generated content doesn’t exist until it is prompted, it often evades early detection, making it difficult for educators to intervene before harm is done. This evolving threat landscape demands safeguarding strategies that are more adaptive, proactive, and tech-aware.

AI as a crutch


Because AI is so easy to use and access, students are fast becoming overly dependent on it for their schoolwork. A recent study from MIT suggests that regular use of AI tools may negatively impact critical thinking skills, and that overuse has the potential to harm learning, particularly for younger users.

Over the course of several months, “ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study.” While the findings are yet to be peer-reviewed, they echo a picture already emerging in schools and other education settings: teachers are starting to notice changes in the way students structure their work, and in their ability to make a start on tasks.

The concern is that over-reliance on AI in schools has a knock-on effect on students’ writing, critical thinking, and problem-solving skills. When students turn to these language models for every task, learning gradually shifts from an active process to a passive one – a process of consumption rather than creation.


AI’s potential to transform learning is immense; however, the risks will not go away on their own. Schools, policymakers, parents and tech providers must take shared responsibility for its safe use, and for ensuring that children and other vulnerable groups are not put at increased risk.

At Smoothwall, we’re committed to helping you keep children safe and thriving in their digital lives. If you’re a school that would like to learn more about this issue or share your challenges, reach out to us via this form, and we’ll be in touch. We look forward to supporting you and your school community.

 
