# University of Oxford AI policy record
academic_integrity: Staff setting summative assessment must: declare whether and how students can use AI; review assessment design for alignment with permitted AI use; ensure equality of baseline AI tool provision where AI use is authorised; specify the forms of declaration expected of students; identify suspected unauthorised AI use only through marking or university-endorsed detection tools (none currently endorsed); and handle misconduct under the usual disciplinary regulations.
Evidence (en, dc16c703b231): Those setting summative assessment must:
- declare whether and how students can use AI in summative assessment, e.g. by a category system for different assignments on courses.
- review their summative assessment design and criteria by task to ensure alignment with the permitted use of AI
- where students are authorised to use AI tools for their summative assessments, ensure that there is an equality of baseline provision of appropriate AI tools.
- specify the forms of declaration expected of students
- only identify suspected unauthorised use of AI in summative assessment through the marking process or through AI detection tools that have university endorsement. NB. as at the date of the l...
academic_integrity: Oxford requires postgraduate research students to include a statement on their use of generative AI in their final thesis submission.
Evidence (en, 6919aedf3348): Students are required to include a statement on their use of Gen AI in their final submitted thesis. This is effective as of submission in Trinity Term 2026, but it is recommended that such a statement be included in every thesis submitted from the point of publication of this guidance. The statement should be placed immediately after the abstract. The statement must include: a formal declaration that any Gen AI use complies with University, divisional and (where applicable) departmental guidance; where and how Gen AI has been used in preparation of the thesis; and a summary of how specific uses of Gen AI will be referenced in the text.
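For thesis authors working in LaTeX, the placement described above (a declaration immediately after the abstract) might be sketched as follows. The section title and wording are illustrative assumptions only, not a prescribed University template; authors should follow the exact content required by University, divisional and departmental guidance:

```latex
% Hypothetical thesis front matter: the Gen AI statement sits
% immediately after the abstract, as the guidance requires.
\begin{abstract}
  ... thesis abstract ...
\end{abstract}

\section*{Statement on the Use of Generative AI}
% Illustrative wording only -- check University, divisional and
% (where applicable) departmental guidance for the required content.
I declare that my use of generative AI tools in this thesis complies
with University, divisional and departmental guidance. Generative AI
was used for [where and how, e.g. background literature search].
Specific uses of generative AI are referenced in the text as
[referencing convention].
```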
academic_integrity: Students undertaking summative assessment must: complete assessment in line with the AI use declaration for each assignment; acknowledge their AI use via a formal declaration in the prescribed format; and understand that submitting work breaching AI specifications constitutes cheating and may constitute plagiarism, handled under the usual disciplinary regulations.
Evidence (en, dc16c703b231): Students undertaking summative assessment must:
- complete summative assessment in line with the declaration as to whether and how AI can be used in each specific assignment they will complete for their course.
- acknowledge their use of AI as part of the summative assessment submission and use a formal declaration in the format prescribed by the assessment setter.
- be aware that submitting work that breaches the specifications defined for a particular assignment constitutes cheating and may constitute plagiarism; cases of suspected unauthorised use of AI will be handled under the usual disciplinary Regulations and using the associated processes.
academic_integrity: The University's policy on AI use in summative assessment is based on three principles endorsed by Education Committee in Trinity term 2025: (1) educational practice must be grounded in values of integrity, honesty and transparency, which must be clearly articulated and frequently discussed; (2) every discrete unit of assessment must be carefully designed to be fit for its specific purposes, which must be clearly articulated to students; (3) every summative assessment must be accompanied by a clear explanation of what appropriate assistance is permitted and what is forbidden, specifying how students should report assistance received.
Evidence (en, dc16c703b231): The policy is based on three principles for acceptable use of AI which were endorsed by Education Committee during Trinity term 2025:
- Educational practice in teaching and assessment must be grounded in values of integrity, honesty and transparency. These values need to be clearly articulated and frequently discussed.
- Every discrete unit of assessment must be carefully designed to be fit for its specific purposes. These purposes need to be clearly articulated to students.
- Every element of summative assessment must be accompanied by a clear explanation of what appropriate assistance is permitted and what is forbidden. Where assistance is permitted, the assignment should specify exactl...
other: All cloud-based generative AI tools must be subject to a security risk assessment before being used with University information. Free and open-source services generally cannot complete a full assessment and should not be used for confidential information.
Evidence (en, 9bbf71bffc82): Before any processing of Internal or Confidential information using generative AI services, the following steps must be taken to mitigate risk.
1. As with all service providers holding or processing university information, information supplied to the tool in the form of questions or other artefacts is typically stored by the third-party service provider and is subject to the threats from cyber criminals and other malicious actors, such as hostile nation states. Therefore, all cloud-based Generative AI tools should be subject to a security risk assessment before being used. The Information Security GRC Team has a TPSA tool to help complete an assessment. It is generally not possible to com...
other: ChatGPT Edu and Google Gemini, when licensed via the AI Competency Centre, have been approved for processing of Confidential University data by the Information Security team. University data processed through these licensed platforms will not be used to train AI models. Confidential data must only be used with the University's approved, SSO-protected platforms.
Evidence (en, 9bbf71bffc82): ChatGPT Edu, licensed via the AI Competency Centre, has been approved for processing of Confidential University data by the Information Security team. University data processed by ChatGPT Edu under an AI Competency Centre licence will not be used to train the AI model.
academic_integrity: For PGR students, the following uses of generative AI are not permitted in summative assessments: substantive original writing by Gen AI (verbatim or closely paraphrased, for chapters or parts thereof), which constitutes plagiarism; using AI to produce plots or data visualisations directly from prompts; and entering private or confidential data into third-party AI tools.
Evidence (en, 6919aedf3348): Substantive original writing by Gen AI, including either verbatim or closely paraphrased use of Gen AI content, for, e.g., chapters, or parts of chapters, including introduction or conclusion chapters or for a literature review, would fall under the definition of plagiarism or be otherwise a failure of research integrity and is therefore not permissible.
The use of generative AI to produce plots or data visualisations directly from prompts is prohibited.
Private or confidential data must not be entered into third-party AI tools.
other: External custom GPTs should not be used to process confidential University data or sensitive personal data. No non-public University data (including confidential, internal, or personal data) may be incorporated in any custom GPT shared with external users.
Evidence (en, 9bbf71bffc82): External custom GPTs should not be used to process confidential University data or sensitive personal data. If you are considering inputting internal University data to an external custom GPT, please discuss this with the Information Security GRC team in advance. Any inputting of personal data to an external custom GPT should also be discussed with the Information Compliance team.
other: Unapproved AI transcription bots should not be used in Teams meetings. The inbuilt Teams Transcription facility or Microsoft Copilot may be used subject to appropriate data protection considerations. Other AI transcription bot services should be avoided, and meeting organisers should set options to prevent participants from adding unapproved transcription bots.
Evidence (en, 9bbf71bffc82): There should be no use of unapproved AI transcription bots in Teams meetings by any participants.
It is permissible to record meetings using the inbuilt Teams Transcription facility or Microsoft's Copilot subject to appropriate data protection considerations.
Use of any AI transcription bot services other than the inbuilt Teams Transcription or Microsoft's Copilot should be avoided.
Meeting options should be set by the organisers so as to prevent internal or external meeting participants from adding unapproved transcription bots.
academic_integrity: For PGR summative assessment (transfer, confirmation, thesis), the following AI uses are permitted without declaration: local editing tools (grammar assistants, spell-checkers, code debuggers making small local changes); AI for background research, language translation, bibliography creation, and general subject understanding; and AI for coding where the coding serves a research purpose but is not the substantive output. Students remain responsible for the correctness of any code used.
Evidence (en, 6919aedf3348): The use of local editing tools—such as grammar assistants, code debuggers, and spell-checkers—is permitted and need not be declared. These tools only make small, local changes (for example, fixing spelling, grammar, or small pieces of code), usually affecting just a few words or tokens at a time.
The use of AI tools for background research, language translation, creation of bibliography indices and general subject understanding is allowed and does not have to be declared.
Use of Gen AI for coding purposes is permitted, where the coding serves a purpose in the research but is not the substantive output of the project.