Plotting a Safe Route...
Building an Effective AI Policy

While education settings have flexibility in shaping their AI approach, they still need to abide by their legal and statutory obligations, including Keeping children safe in education (KCSIE) and data protection laws.

An effective AI policy should do more than list what’s allowed and what’s not. It should also consider:

Ethical use

Does the AI policy support learning without encouraging shortcuts, bias, or misuse?

Staff and student training

Do your staff and students understand when, why and how to use AI tools safely and effectively?
Have you equipped them to think critically about AI outputs?

Safeguarding implications

What risks could AI pose to student safety or wellbeing?
Have you considered how AI tools might expose students to harmful content or mask warning signs?

Policy alignment

Does the AI policy align with related policies, such as the staff code of conduct, student acceptable use policies and data protection policies (including GDPR)?



Choosing the right AI tools

Not all AI tools are created equal. They vary in purpose, functionality and risk - and the landscape is constantly shifting.
That’s why policies should clearly set out:

  • Which AI tools are permitted
  • Who can use them
  • What they can be used for

For example:

  • Are tools like ChatGPT only allowed for teacher use?
  • Can students use AI tools for specific subjects or tasks?
  • Are certain tools or features restricted by year group?

The more specific the policy, the easier it will be to enforce and review.

Defining responsible use

Where AI use is permitted, it’s vital to set clear boundaries on what can and cannot be shared with these tools.

Intellectual property

Lesson plans, homework, and essays remain the property of the person who created them.
Uploading this content to an AI tool could breach intellectual property rights.

Personal data

Staff and student data must not be shared with AI tools unless clear consent has been obtained.
The DfE recommends avoiding this altogether to ensure compliance with data protection laws.

Terms and age restrictions

Every AI tool comes with its own terms of use. These should be reviewed and incorporated into your policy - especially any age restrictions.

4 steps to writing a successful AI policy

Establishing a clear AI policy may feel complex, but taking action now ensures your setting is prepared, protected, and positioned to respond confidently as AI continues to evolve.

Start with a basic foundation and view it as an evolving policy that can be updated as new information or technology is introduced. 

The following steps may help:

STEP 1: Discuss policy development as a senior leadership team (SLT) and involve designated safeguarding leads (DSLs) early

STEP 2: Review your setting’s current AI usage (both official and unofficial)

STEP 3: Collaborate with IT staff to understand any cybersecurity risks and training needs

STEP 4: Stay informed of updates from the DfE and other trusted sources

Once your policy is in place, it’s essential to make sure staff and students understand it - along with the reasons behind it and the consequences of not following it.


Who should take ownership of your AI policy?

Responsibility for formulating an AI policy or strategy shouldn’t rest with one individual. Given the fast-moving nature of the technology, it’s important to involve all relevant stakeholders in the creation and ongoing development of your AI policy. This ensures shared understanding, collective ownership, and consistent application across your setting.

Appointing an AI champion

Many early adopters of AI in schools, colleges and multi-academy trusts (MATs) have found it valuable to appoint an “AI champion” - a member of staff who can support colleagues in building their understanding of AI and help drive confident, informed adoption. This doesn’t need to be an IT specialist; it could be any staff member with a strong interest in or good knowledge of AI.