| Date Created | Date 1st Review Due | Date Reviewed | Version | Next Review Due |
|---|---|---|---|---|
| December 2025 | December 2026 | | | |
1. Introduction
Here at Egham Park School we understand the valuable potential that artificial intelligence (AI), including generative AI, holds for schools. For example, it can be used to enhance pedagogical methods, customise learning experiences and drive educational innovation. We are also aware of the risks posed by AI, including data protection breaches, copyright issues, ethical complications, safeguarding concerns and compliance with wider legal obligations. This policy therefore establishes guidelines for the ethical, secure and responsible use of AI technologies across our whole school community.
This policy covers the use of AI tools by school staff and pupils. This includes generative chatbots such as ChatGPT and Google Gemini (please note, this list is not exhaustive). This policy aims to:
- Support the use of AI to enhance teaching and learning
- Support staff to explore AI solutions to improve efficiency and reduce workload
- Prepare staff and pupils for a future in which AI technology will play an integral part
- Promote equity in education by using AI to address learning gaps and provide personalised support
- Ensure that AI technologies are used ethically and responsibly by all staff and pupils
- Protect the privacy and personal data of staff and pupils in compliance with the UK GDPR
2. Scope
This policy applies to:
- All staff (teaching, support, admin, leadership, agency)
- Pupils
- Volunteers and IT contractors
- Any use of AI systems on school devices, personal devices on site, or for school-related work
3. Definition of Artificial Intelligence
Artificial Intelligence (AI) refers to digital systems that:
- Generate text, images, audio, or video
- Analyse large amounts of data
- Automate decision-making
- Simulate human reasoning
Examples include:
- Chatbots (e.g., AI tutors, writing tools)
- Image and video generators
- Behaviour analysis software
- Predictive data systems
4. Key Principles for AI Use in an SEMH Setting
All AI use must be:
4.1 Safe
- Must not expose pupils to harmful, disturbing, manipulative, or age-inappropriate content
- Must not destabilise emotional regulation or wellbeing
4.2 Ethical
- No use that deceives, manipulates, or replaces human care
- No use for surveillance without lawful and transparent justification
4.3 Fair and Unbiased
- Staff must remain aware that AI can reflect bias and stereotypes
- Outputs must be checked for discrimination
4.4 Transparent
- Pupils must be told when AI is being used
- Staff must be open about how AI supports learning
4.5 Data Protected
- Personal, sensitive or safeguarding information must never be entered into public AI systems
5. Acceptable Uses of AI
AI may be used to:
5.1 Support Teaching and Learning
- Generate lesson ideas
- Differentiate learning materials
- Support literacy development
- Provide alternative formats for SEND pupils
5.2 Reduce Staff Workload
- Draft lesson plans
- Create revision materials
- Support report structure (not personal data)
- Prepare communication templates
- Provide administrative support
- Develop training resources
All outputs must be reviewed by staff before use.
6. Prohibited Uses of AI
AI must NOT be used for:
- Uploading or sharing:
  - Safeguarding information
  - Child protection records
  - Behaviour logs
  - Medical or mental health data
- Making automated behaviour, punishment, or risk decisions
- Emotional intervention without a trained adult
- Surveillance or facial/emotion recognition tools
- Creating deepfakes or deceptive material
- Replacing staff judgement
- Unsupervised pupil access
- Assessment decisions without teacher moderation
7. Pupils and AI Use
We recognise that AI has many uses to help pupils learn. Pupils may use AI tools:
- As a research tool to help them find out about new topics and ideas
- When specifically studying and discussing AI in schoolwork, for example in IT lessons or art homework about AI-generated images

All AI-generated content must be properly attributed and appropriate for the pupils' age and educational needs.
AI may also lend itself to cheating and plagiarism. To mitigate this, pupils may not use AI tools:
- During assessments, whether internal or external
- To write their homework or class assignments, where AI-generated text is presented as their own work
- To complete their homework, where AI is used to answer questions set and is presented as their own work (for example, maths calculations)
- Pupils may only access AI tools under direct staff supervision
- AI use must be:
  - Age-appropriate
  - Regulated
  - Time-limited
  - Purposeful
- Pupils must be taught:
  - AI is not always correct
  - AI is a tool, not a friend, therapist or authority
  - How misinformation works
This list of AI misuse is not exhaustive. Where AI tools have been used as a source of information, pupils should reference their use of AI. The reference must show the name of the AI source and the date the content was generated. Pupils must consider what is ethical and appropriate in their use of AI and must not:
- Generate content to impersonate, bully or harass another person
- Generate explicit or offensive content
- Input offensive, discriminatory or inappropriate content as a prompt
8. Safeguarding and SEMH Considerations
Specific risks within an SEMH setting include:
- Emotional dependency on chatbots
- Reinforcement of negative self-beliefs
- Exposure to harmful content
- False validation of unsafe behaviours
- Social withdrawal due to digital overuse
Controls include:
- Supervision at all times
- Blocking non-approved AI tools
- Regular wellbeing monitoring
- Immediate reporting of any concerning AI interaction
Any safeguarding concern linked to AI must be reported through the school’s safeguarding procedures.
The safeguarding lead is responsible for monitoring and advising on our compliance with safeguarding requirements, including those relating to the use of AI. This includes:
- Being aware of new and emerging safeguarding threats posed by AI
- Updating and delivering staff training on AI safeguarding threats
- Responding to safeguarding incidents in line with Keeping Children Safe in Education (KCSIE)
9. Data Protection and GDPR
Staff must:
- Follow UK GDPR and Data Protection Act 2018
- Never enter personal data into public AI tools
- Use only school-approved platforms
- Ensure anonymity in all examples
The school’s Data Protection Officer (DPO) must approve any new AI platform before use.
10. Staff Responsibilities
As part of our aim to reduce staff workload while improving outcomes for our pupils, we encourage staff to explore opportunities to meet these objectives through the use of approved AI tools. Any use of AI must follow the guidelines set out in this policy.
To protect data when using generative AI tools, staff must:
- Use only approved AI tools
- Seek advice from the data protection officer / IT / AI lead
- Ensure there is no identifiable information included in what they put into AI tools
- Acknowledge or reference the use of generative AI in their work
- Fact-check results to make sure the information is accurate
- Use AI professionally and ethically
- Maintain confidentiality
- Attend AI training where required
- Challenge misuse immediately
- Model responsible digital behaviour
11. Leadership Responsibilities
Senior leaders must:
- Approve AI tools
- Monitor impact
- Ensure safeguarding compliance
- Review training needs
12. Training and Awareness
The school will provide:
- Training on AI safety
- Pupil digital safety education
- Updates on emerging risks
13. Misuse of AI
Misuse may result in:
- Disciplinary action (staff)
- Behaviour sanctions (pupils)
- Referral to external agencies if required
14. Policy Review
This policy will be reviewed:
- Annually
- Or sooner in response to legal, safeguarding or technological developments
15. Linked Policies
- Safeguarding & Child Protection
- Online Safety
- Data Protection
- Behaviour Policy
- Staff Code of Conduct
- SEND Policy
All staff play a role in ensuring that pupils understand the potential benefits and risks of using AI in their learning.
All of our staff have a responsibility to guide pupils in critically evaluating AI-generated information and understanding its limitations.