DeepSeek is an open source generative AI tool that can perform a range of tasks to increase productivity and improve knowledge and understanding. However, the implications and potential risks of AI technology are not yet fully understood. As a result, tools like DeepSeek need to be approached with a healthy dose of caution.
As DeepSeek AI becomes increasingly popular with students, many schools are debating how to regulate its use. This article explains what DeepSeek is, how it differs from ChatGPT, and the risks it poses to students, to help educators make an informed decision for their setting.
What is DeepSeek AI?
DeepSeek AI is a large language model (LLM) developed by a Chinese technology company of the same name. In January 2025 it launched the DeepSeek chatbot, which quickly surpassed its main rival, ChatGPT, in app downloads across countries including the USA, the UK, China, Canada and Australia.
How does it work?
LLMs are AI models trained on huge quantities of data, primarily text, in order to teach them to recognise patterns. This training enables the models to answer questions, solve problems and create new content at speeds far beyond human capabilities.
LLM technology can be used to power chatbots, which are the most common way students use DeepSeek. Chatbots converse with users in the style of a person and can help with tasks such as content creation, coding, summarising complex texts, research and problem solving.
The DeepSeek chatbot is accessed via a website, by downloading the DeepSeek app, or through third-party platforms such as Hugging Face.
How does DeepSeek differ from AI tools such as ChatGPT?
On the surface, DeepSeek looks and performs much like ChatGPT, but there are some key differences. A driving force behind DeepSeek's immediate popularity is the fact that it is cheaper and more efficient to run than ChatGPT, because it uses what is known as a mixture of experts (MoE) architecture.
In simple terms, LLMs are made up of parameters, which are the internal variables the model uses to produce responses. When you ask ChatGPT a question, it uses all of its parameters to process a response, which takes a huge amount of computational power. In contrast, DeepSeek organises its parameters into subsets known as experts. When responding to prompts, it only calls on the most relevant “expert” to produce a response, saving significant computing time and cost.
Imagine a visitor arrives at a school and asks a question about the safeguarding policy. The ChatGPT architecture would consult every member of staff available and then produce an answer. To improve efficiency, DeepSeek’s architecture employs a classifier, which acts as the school receptionist. Rather than asking all staff, the receptionist can direct the visitor straight to the school DSL for a response.
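For readers who want to see the idea in code, the receptionist analogy above can be sketched as a toy program. This is a deliberately simplified illustration, not DeepSeek's actual implementation: the "experts" here are plain functions and the "classifier" is a simple keyword match, whereas a real MoE model learns its routing from data.

```python
# Toy sketch of mixture-of-experts (MoE) routing versus a dense model.
# The experts and the keyword-based classifier are illustrative inventions,
# not part of any real LLM.

EXPERTS = {
    "maths": lambda q: f"maths expert answers: {q}",
    "safeguarding": lambda q: f"DSL answers: {q}",
    "general": lambda q: f"general expert answers: {q}",
}

def classify(question: str) -> str:
    """The 'receptionist': pick the single most relevant expert."""
    text = question.lower()
    if "safeguarding" in text:
        return "safeguarding"
    if any(word in text for word in ("sum", "equation", "maths")):
        return "maths"
    return "general"

def dense_model(question: str) -> list[str]:
    # Dense approach (ChatGPT-style in the analogy):
    # every 'member of staff' processes the question.
    return [expert(question) for expert in EXPERTS.values()]

def moe_model(question: str) -> str:
    # MoE approach: only the one routed expert is activated,
    # so far less work is done per question.
    return EXPERTS[classify(question)](question)

question = "What does the safeguarding policy say?"
print(len(dense_model(question)))  # dense: all 3 experts run
print(moe_model(question))         # MoE: only the safeguarding expert runs
```

The efficiency gain comes from the last function: for each prompt, the MoE model does roughly the work of one expert rather than all of them, which is why DeepSeek can be cheaper to run despite having a very large total number of parameters.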
Unlike ChatGPT, DeepSeek is open source, meaning its code and model weights are publicly available. This allows developers outside the company to inspect, replicate and modify the model for their own purposes.
Finally, ChatGPT was created by the US company OpenAI, whereas DeepSeek hails from China. This has raised concerns in the West around security and the company's potential links to the Chinese Government.
What risks does it pose to students and school networks?
Security and privacy concerns
One should be cautious when sharing any information with an AI tool, as that data can be viewed and stored by the company behind it.
It is unclear how DeepSeek manages users' personal information, aside from the fact that it is stored on servers located in China. Keep in mind that Chinese law states that companies must "support, assist and cooperate" with the state's intelligence agencies, so any data entered into DeepSeek could be accessed by these authorities.
DeepSeek is viewed as a potential national security threat by some countries. For example, Australia has banned its use on government devices and systems, and in late January 2025 the app was blocked in Italy, with the government demanding that DeepSeek stop processing the personal information of its citizens.
From a school perspective, leaders may want to consider the security implications of DeepSeek being accessed on school networks and devices. At the very least, students should be warned against sharing personal or sensitive information with any AI tool.
Mistakes and bias
While DeepSeek may serve as a useful research and planning assistant, the content it generates needs to be fact-checked. As is the case with all AI, it may provide incorrect or fabricated information. In fact, an audit by NewsGuard put its fail rate as high as 83%.
The content produced by AI models also has the potential for bias, as they will replicate social biases evident in the content they are fed during the training process. In the case of DeepSeek, early tests found that it avoided or refused to engage with topics that are politically sensitive in China, including Tiananmen Square and the status of Taiwan.
The potential for mistakes and bias means that if used as an educational tool, DeepSeek may negatively impact student learning. As well as causing basic factual errors, it could shape student views on certain topics if they are yet to develop the critical thinking skills to identify bias.
Exposure to harmful content
Generative AI has the ability to produce harmful and inappropriate content, and DeepSeek is no exception. A study by US AI security company Enkrypt concluded that DeepSeek was 11 times more likely to generate harmful output than OpenAI models, including content related to extremist propaganda, controlled substances and illegal weapons.
While generative AI tools usually have some form of safety measure in place to prevent the production of harmful content, these controls can often be manipulated by carefully worded prompts. In the case of DeepSeek, there appears to be a particularly concerning lack of guardrails, which presents a significant safeguarding issue for schools.
Understanding how DeepSeek works, and educating school communities on the potential risks it poses, is key to protecting student safety and wellbeing.