Oxford, United Kingdom

University of Oxford

University of Oxford is ranked 4 in the QS 2026 rankings. Its public record contains 11 source-backed AI policy claim records drawn from 6 official source attributions, preserving original-language evidence snippets, source URLs, snapshot hashes, confidence scores, and review state.

Citation-ready overview

v1 public contract


Reviewed claims: 11 · Candidate claims: 0 · Official sources: 6

Candidate claims are source-backed records pending review. They are not final policy conclusions and are not legal or academic integrity advice.

Reviewed claims

11 reviewed public claims

Academic Integrity

Staff setting summative assessment must: declare whether/how students can use AI; review assessment design for alignment with permitted AI use; ensure equality of baseline AI tool provision where authorised; specify declaration forms for student AI use; only identify suspected unauthorised AI use through marking or university-endorsed detection tools (none currently endorsed); and handle misconduct under usual disciplinary regulations.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
Those setting summative assessment must - declare whether and how students can use AI in summative assessment, e.g. by a category system for different assignments on courses. - review their summative assessment design and criteria by task to ensure alignment with the permitted use of AI - where students are authorised to use AI tools for their summative assessments, ensure that there is an equality of baseline provision of appropriate AI tools. - specify the forms of declaration expected of students - only identify suspected unauthorised use of AI in summative assessment through the marking process or through AI detection tools that have university endorsement. NB. as at the date of the l...

Academic Integrity

Oxford requires postgraduate research students to include a statement on their use of generative AI in their final thesis submission.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
Students are required to include a statement on their use of Gen AI in their final submitted thesis. This is effective as of submission in Trinity Term 2026, but it is recommended that such a statement is included in every thesis submitted from the point of publication of this guidance. The statement should be placed immediately after the abstract. The statement must include a formal declaration that any Gen AI use complies with University, divisional and (where applicable) departmental guidance, where and how Gen AI has been used in preparation of the thesis and summarising how specific uses of Gen AI will be referenced in the text

Academic Integrity

Students undertaking summative assessment must: complete assessment in line with the AI use declaration for each assignment; acknowledge their AI use via a formal declaration in the prescribed format; and understand that submitting work breaching AI specifications constitutes cheating and may constitute plagiarism, handled under usual disciplinary regulations.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
Students undertaking summative assessment must - complete summative assessment in line with the declaration as to whether and how AI can be used in each specific assignment they will complete for their course. - acknowledge their use of AI as part of the summative assessment submission and use a formal declaration in the format prescribed by the assessment setter. - be aware that submitting work that breaches the specifications defined for a particular assignment constitutes cheating and may constitute plagiarism; cases of suspected unauthorised use of AI will be handled under the usual disciplinary Regulations and using the associated processes.

Academic Integrity

The University's policy on AI use in summative assessment is based on three principles endorsed by Education Committee in Trinity term 2025: (1) educational practice must be grounded in values of integrity, honesty and transparency, which must be clearly articulated and frequently discussed; (2) every discrete unit of assessment must be carefully designed to be fit for its specific purposes, clearly articulated to students; (3) every summative assessment must be accompanied by a clear explanation of what appropriate assistance is permitted and what is forbidden, specifying how students should report assistance received.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
The policy is based on three principles for acceptable use of AI which were endorsed by Education Committee during Trinity term 2025: - Educational practice in teaching and assessment must be grounded in values of integrity, honesty and transparency. These values need to be clearly articulated and frequently discussed. - Every discrete unit of assessment must be carefully designed to be fit for its specific purposes. These purposes need to be clearly articulated to students. - Every element of summative assessment must be accompanied by a clear explanation of what appropriate assistance is permitted and what is forbidden. Where assistance is permitted, the assignment should specify exactl...

Other

All cloud-based generative AI tools must be subject to a security risk assessment before being used with University information. Free and open-source services generally cannot complete a full assessment and should not be used for confidential information.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
Before any processing of Internal or Confidential information using generative AI services, the following steps must be taken to mitigate risk. 1. As with all service providers holding or processing university information, information supplied to the tool in the form of questions or other artefacts is typically stored by the third-party service provider and is subject to the threats from cyber criminals and other malicious actors, such as hostile nation states. Therefore, all cloud-based Generative AI tools should be subject to a security risk assessment before being used. The Information Security GRC Team has a TPSA tool to help complete an assessment. It is generally not possible to com...

Other

ChatGPT Edu and Google Gemini, when licensed via the AI Competency Centre, have been approved for processing of Confidential University data by the Information Security team. University data processed through these licensed platforms will not be used to train AI models. Confidential data must only be used with the University's approved, SSO-protected platforms.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
ChatGPT Edu, licensed via the AI Competency Centre, has been approved for processing of Confidential University data by the Information Security team. University data processed by ChatGPT Edu under an AI Competency Centre licence will not be used to train the AI model.

Original evidence

Evidence 2
Google Gemini, licensed via the AI Competency Centre, has been approved for processing of Confidential University data by the Information Security team. University data processed by Google Gemini under an AI Competency Centre licence will not be used to train the AI model.

Original evidence

Evidence 3
The University provides several enterprise-grade AI tools that have passed internal Third-Party Security Assessments (TPSA). Accessing these tools via Single Sign-On (SSO) ensures that user data is not used to train external AI models. Confidential data must only be used with the University's approved, SSO-protected platforms.

Academic Integrity

For PGR students, the following uses of generative AI are not permitted in summative assessments: substantive original writing by GenAI (verbatim or closely paraphrased for chapters or parts thereof) which constitutes plagiarism; using AI to produce plots or data visualisations directly from prompts; and entering private or confidential data into third-party AI tools.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
Substantive original writing by Gen AI, including either verbatim or closely paraphrased use of Gen AI content, for, e.g., chapters, or parts of chapters, including introduction or conclusion chapters or for a literature review, would fall under the definition of plagiarism or be otherwise a failure of research integrity and is therefore not permissible. The use of generative AI to produce plots or data visualisations directly from prompts is prohibited. Private or confidential data must not be entered into third-party AI tools.

Other

External custom GPTs should not be used to process confidential University data or sensitive personal data. No non-public University data (including confidential, internal, or personal data) may be incorporated in any custom GPT shared with external users.

Review: Agent reviewed · Confidence: 90%

Original evidence

Evidence 1
External custom GPTs should not be used to process confidential University data or sensitive personal data. If you are considering inputting internal University data to an external custom GPT, please discuss this with the Information Security GRC team in advance. Any inputting of personal data to an external custom GPT should also be discussed with the Information Compliance team.

Original evidence

Evidence 2
Be aware that any information incorporated in a custom GPT, either as Instructions specifying the behaviour of the GPT or as Documents uploaded to the GPT, may be accessed by users of the custom GPT. No non-public University data should be incorporated in any custom GPT that you intend to share with external users. For the avoidance of doubt, this includes any confidential or internal University data, as well as any personal data.

Other

Unapproved AI transcription bots should not be used in Teams meetings. The inbuilt Teams Transcription facility or Microsoft Copilot may be used subject to appropriate data protection considerations. Other AI transcription bot services should be avoided, and meeting organisers should set options to prevent participants from adding unapproved transcription bots.

Review: Agent reviewed · Confidence: 90%

Original evidence

Evidence 1
There should be no use of unapproved AI transcription bots in Teams meetings by any participants. It is permissible to record meetings using the inbuilt Teams Transcription facility or Microsoft's Copilot subject to appropriate data protection considerations. Use of any AI transcription bot services other than the inbuilt Teams Transcription or Microsoft's Copilot should be avoided. Meeting options should be set by the organisers so as to prevent internal or external meeting participants from adding unapproved transcription bots.

Academic Integrity

For PGR summative assessment (transfer, confirmation, thesis), the following AI uses are permitted without declaration: local editing tools (grammar assistants, spell-checkers, code debuggers making small local changes); AI for background research, language translation, bibliography creation, and general subject understanding; and AI for coding where coding serves a research purpose but is not the substantive output. Students remain responsible for correctness of any code used.

Review: Agent reviewed · Confidence: 90%

Original evidence

Evidence 1
The use of local editing tools—such as grammar assistants, code debuggers, and spell-checkers—is permitted and need not be declared. These tools only make small, local changes (for example, fixing spelling, grammar, or small pieces of code), usually affecting just a few words or tokens at a time. The use of AI tools for background research, language translation, creation of bibliography indices and general subject understanding is allowed and does not have to be declared. Use of Gen AI for coding purposes is permitted, where the coding serves a purpose in the research but is not the substantive output of the project.

Academic Integrity

Unauthorised use of generative AI falls under the University's plagiarism regulations and is subject to academic penalties in summative assessments. Students must learn and practise academic skills of note-taking and clear attribution to differentiate their own work from AI-derived material. Where AI use is authorised, students should give clear acknowledgment of how it has been used.

Review: Agent reviewed · Confidence: 85%

Original evidence

Evidence 1
Students using AI during their studies must learn and practise the same academic skills of note-taking and clear attribution which are safeguards against plagiarism, ensuring clear differentiation of their own work from any text or material derived from generative AI tools. Unauthorised use of AI falls under the plagiarism regulations and would be subject to academic penalties in summative assessments. Where the use of generative AI in preparing work for examination has been authorised by the department, faculty or programme, students should give clear acknowledgment of how it has been used in their work.

Candidate claims

0 machine or needs-review claims

Candidate claims are not final policy conclusions. Each record preserves the source URL, source snapshot hash, evidence, confidence, and review state so that it can be audited before review.

Official sources

6 source attributions

AI and academic practice | Centre for Teaching and Learning

ctl.ox.ac.uk

Snapshot hash
ca36f4631166d8e1a2175fc8836878552b4a14fa98d336b0ab4ddc20435456f8
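As a sketch of how a snapshot hash like the one above could be used to audit a record: assuming the page snapshot has been saved locally (the filename below is hypothetical), one can recompute the SHA-256 digest of the saved file and compare it against the published hash.

```python
import hashlib

def snapshot_hash(path: str) -> str:
    """Compute the SHA-256 hex digest of a saved snapshot file."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large snapshots don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Published hash for the ctl.ox.ac.uk snapshot (from the record above).
PUBLISHED = "ca36f4631166d8e1a2175fc8836878552b4a14fa98d336b0ab4ddc20435456f8"

# Hypothetical local filename; a match confirms the evidence snippets
# were extracted from an unmodified copy of the source page.
# assert snapshot_hash("ctl_ox_ac_uk_snapshot.html") == PUBLISHED
```

A mismatch would indicate the local copy differs from the snapshot the claims were extracted from, in which case the evidence should be re-verified against a fresh capture.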