# University of Kent AI policy record
academic_integrity: Kent's Academic Integrity and Misconduct policy defines unauthorised AI use as using generative AI beyond the scope permitted for a specific assessment, or failing to acknowledge permitted use appropriately.
Evidence (en, f09c5b7a8766): Unauthorised use of artificial intelligence: Use of generative AI beyond the scope permitted for the specific assessment, or failure to appropriately acknowledge its use where allowed.
ai_tool_treatment: The University of Kent publishes AI principles that set expectations for responsible, ethical, lawful and effective engagement with AI across education, research and professional services.
Evidence (en, 428e5ba1d85e): These principles set out the University’s expectations for responsible, ethical, lawful and effective engagement with AI across education, research and professional services.
privacy: Kent's public AI data-privacy guidance tells users not to enter personal information, confidential information, sensitive data, or other people's work into AI tools, and specifically says personal or sensitive data should never be put into ChatGPT Edu.
Evidence (en, d58e33807664): Furthermore, you should never put personal or sensitive data into ChatGPT Edu. For these reasons, you should not enter personal information, confidential information or other people’s work into an AI tool.
teaching: Kent's AI principles say academic judgement about student work remains with academic staff, and AI will not be used to make marking or academic-outcome decisions unless explicitly authorised, clearly communicated, pedagogically justified, and subject to human oversight.
Evidence (en, 428e5ba1d85e): AI will not be used to make decisions about marks or academic outcomes unless its use is explicitly authorised, clearly communicated, pedagogically justified, and always with human oversight.
academic_integrity: Kent's student AI academic-integrity guidance says that, unless specifically instructed otherwise, submitted assessment content must always be the student's own work and students should not include AI-generated material in submissions.
Evidence (en, 6b44a654335f): Unless it is otherwise noted, you should: 1. Not include materials generated by AI in your submissions. 2. Not submit materials that you have written but that have been substantially altered by AI.
research: Kent's public research guidance says researchers using GenAI to process data must follow the University Data Protection Policy, complete DPIA screening where relevant, record planned GenAI use in data management plans, and document GenAI use.
Evidence (en, 9dd3d40c8d2d): When using a GenAI tool to process data you must abide by the existing University of Kent Data Protection Policy, and carry out a Data Protection Impact Assessment Screening Questionnaire.
ai_tool_treatment: Kent's public Microsoft Copilot guidance says Copilot Chat is available to all students and staff using a Kent IT account, while warning users to double-check outputs and consult module convenors before using generative AI tools in assignments.
Evidence (en, 8402b6ab08f9): This tool is available to all students and staff at the University of Kent. Copilot Chat uses publicly accessible material from the internet to generate its responses.
ai_tool_treatment: A Kent Student News announcement says the University is collaborating with OpenAI to give all students and staff free access to ChatGPT Edu, with access for students planned for April 2026.
Evidence (en, 54aa712b92f3): The University of Kent is collaborating with OpenAI to give all students and staff free access to ChatGPT Edu – a version of ChatGPT built for universities.
source_status: The public AI Principles page says Kent's AI Policy Group is undertaking a university-wide review of current policies; this run therefore did not identify a completed, centrally binding AI policy page beyond the published principles and related guidance.
Evidence (en, 428e5ba1d85e): The AI Policy Group is undertaking a university-wide review of current policies in light of developments in AI tools and their potential application.