Cambridge, United Kingdom

University of Cambridge

The University of Cambridge is listed at QS 2026 rank 6. It has 12 source-backed AI policy claim records drawn from 6 official source attributions. The public record preserves original-language evidence snippets, source URLs, snapshot hashes, confidence scores, and review state.

Citation-ready overview


Reviewed claims: 10 · Candidate claims: 2 · Official sources: 6

Candidate claims are source-backed records pending review. They are not final policy conclusions and are not legal or academic integrity advice.
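The record fields listed in the overview can be sketched as a small data structure. This is a hypothetical illustration only: the field names (`normalized_value`, `snapshot_hash`, and so on) are assumptions inferred from the fields the overview describes, not the registry's actual schema.

```python
# Hypothetical sketch of one claim record. All field names are assumptions
# inferred from the fields described in the overview, not an official schema.
claim = {
    "university": "University of Cambridge",
    "category": "Academic Integrity",
    "normalized_value": "unacknowledged_ai_is_misconduct",
    "confidence": 0.98,
    "review_state": "agent_reviewed",
    "sources": [
        {
            "url": "https://example.org/policy",    # placeholder URL
            "snapshot_hash": "sha256:placeholder",  # hash of the archived page snapshot
            "evidence": "A student using any unacknowledged content ...",
        }
    ],
}

def is_auditable(record: dict) -> bool:
    """Check that a record carries every field needed for a later audit:
    a normalized value, confidence, review state, and at least one source
    with a URL, snapshot hash, and evidence snippet."""
    required = {"normalized_value", "confidence", "review_state", "sources"}
    if not required <= record.keys():
        return False
    return bool(record["sources"]) and all(
        {"url", "snapshot_hash", "evidence"} <= src.keys()
        for src in record["sources"]
    )

print(is_auditable(claim))  # True
```

A real record would also carry the original-language snippet and per-source metadata; the check above only demonstrates the audit idea, that a claim is reviewable only when its evidence, source URL, and snapshot hash are all present.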

Reviewed claims

10 reviewed public claims

Academic Integrity

A student using any unacknowledged content generated by artificial intelligence within a summative assessment as though it is their own work constitutes academic misconduct, unless explicitly stated otherwise in the assessment brief.

Review: Agent reviewed · Confidence: 98%

Normalized value: unacknowledged_ai_is_misconduct

Original evidence

Evidence 1
A student using any unacknowledged content generated by artificial intelligence within a summative assessment as though it is their own work constitutes academic misconduct, unless explicitly stated otherwise in the assessment brief.

Privacy

Staff must avoid inputting confidential, sensitive or personal information into GenAI tools unless warranted and only in accordance with guidance. Inputting data into a free or unlicensed GenAI tool could be considered equivalent to putting it into the public domain, signifying a potential personal data breach.

Review: Agent reviewed · Confidence: 95%

Normalized value: prohibited_with_exceptions

Original evidence

Evidence 1
Regardless of the work being undertaken, it is recommended that staff avoid inputting confidential, sensitive or personal information into GenAI tools unless warranted and only in accordance with this guidance.

Evidence 2
By contrast, inputting data into a free or unlicensed GenAI tool could be considered equivalent to putting it into the public domain – signifying a potential personal data breach.

Procurement

The University's standard licensed GenAI tools are Microsoft 365 Copilot, Google Gemini, and Google NotebookLM. Use of other licensed GenAI tools is not prohibited but must be procured in accordance with applicable procurement policy, including completion of risk assessments such as DPIAs and/or ISRAs. The public, free versions of Copilot, Gemini and NotebookLM must not be used for University activities.

Review: Agent reviewed · Confidence: 95%

Normalized value: licensed_tools_mandated

Original evidence

Evidence 1
The University’s standard licensed GenAI tools are Copilot, Gemini and NotebookLM, and these are the tools that should be used to process personal data, where necessary, for which the University is responsible.

Evidence 2
Use of other licenced GenAI tools is not prohibited, but such tools must be procured in accordance with any applicable procurement policy or process, including but not limited to the completion of any requisite risk assessments such as DPIAs and/or ISRAs.

AI Tool Treatment

All GenAI outputs must be thoroughly evaluated by a human being before they are used. Use of GenAI must be acknowledged if it makes a significant and unrevised contribution to a substantive or impactful piece of work. Staff are responsible for ensuring any use of GenAI is conducted reasonably, lawfully and in conjunction with relevant University policies.

Review: Agent reviewed · Confidence: 95%

Normalized value: human_review_required

Original evidence

Evidence 1
Risk mitigation(s): Ensure that all GenAI outputs are thoroughly evaluated by a human being before they are used. Ensure use of GenAI is acknowledged if it is used to make a significant and unrevised contribution to a substantive or impactful piece of work such as the production of content for formal policies or strategic reports.

Evidence 2
Ultimately, staff are responsible for ensuring any use of GenAI is conducted reasonably, lawfully and in conjunction with relevant University policies and procedures.

Academic Integrity

Staff should not rely on AI detection software as it is not proven to be accurate or reliable and provides no evidence to support investigations into the use of GenAI.

Review: Agent reviewed · Confidence: 95%

Normalized value: ai_detection_not_reliable

Original evidence

Evidence 1
Not relying on AI detection software as it is not proven to be accurate or reliable and provides no evidence to support investigations into the use of GenAI.

Teaching

The University of Cambridge broadly permits the appropriate use of GenAI tools and related software. Students are permitted to make appropriate use of GenAI tools to support their personal study, research and formative work. Staff are permitted to make appropriate use of GenAI tools to support their own work.

Review: Agent reviewed · Confidence: 95%

Normalized value: broadly_permitted

Original evidence

Evidence 1
The University of Cambridge broadly permits the appropriate use of GenAI tools and related software, however, due to the variety of disciplines and research areas present at the institution, there is need for more nuanced guidance at local levels.

Evidence 2
Students are permitted to make appropriate use of GenAI tools to support their personal study, research and formative work. Appropriate use is better defined locally by Department, Faculty, or College depending on the context and you should always check with a member of staff to be sure you know how you are able to use these tools for your education.

Privacy

Data input into the University's licensed versions of Copilot, Gemini and NotebookLM is not used to train those tools. Inputting data into free or unlicensed GenAI tools could result in data being used for training, which may not be a lawful use of personal data.

Review: Agent reviewed · Confidence: 92%

Normalized value: licensed_no_training

Original evidence

Evidence 1
Information input into GenAI tools is also often used to train those tools, which may not be a lawful use of personal data – especially if that data cannot be retrieved or deleted, for example from an AI neural network. Data input into the University’s licensed versions of Copilot, Gemini and NotebookLM is not used to train those tools.

Source Status

Cambridge provides an AI Policy Framework for triposes, departments, faculties, and colleges to determine their own local allowance for the use of AI, rather than a single university-wide AI policy. The framework is adapted from a policy proposal by Dr Claire Benn and Dr John Burden from the Leverhulme Centre for the Future of Intelligence.

Review: Agent reviewed · Confidence: 92%

Normalized value: framework_not_centralized_policy

Original evidence

Evidence 1
Given the wide variety of subjects and teaching and learning styles at the University of Cambridge, it would be difficult to provide a policy that accurately represents the multitude of ambitions, considerations, and feelings surrounding the use of AI in education. We instead will be providing a framework for triposes, departments, faculties, and colleges, to determine their own local allowance and rational for the use of AI within their own contexts.

Security Review

The University's Information Security Risk Assessment (ISRA) and Data Protection Impact Assessment (DPIA) processes remain the relevant risk assessment processes for GenAI use. The Acceptable Use Policy (AUP) continues to apply, including compliance monitoring and enforcement provisions, when using GenAI tools.

Review: Agent reviewed · Confidence: 90%

Normalized value: isra_dpia_required

Original evidence

Evidence 1
In the limited circumstances where a formal risk assessment is required, the University Data Protection Impact Assessment (DPIA) and Information Security Risk Assessment (ISRA) processes remain the relevant risk assessments processes to follow.

Evidence 2
Please remember the Acceptable Use Policy (AUP) continues to apply, including its compliance monitoring and enforcement provisions, when using GenAI tools and staff are reminded of their obligation to abide by its terms.

Teaching

When using GenAI tools, users should: remain aware of privacy and data implications and not share anything personal or sensitive; understand ethical implications as tools often have limited attribution; acknowledge use of GenAI if it makes a significant contribution to substantive work; and take responsibility for ensuring use is conducted reasonably, lawfully, and in conjunction with relevant University policies.

Review: Agent reviewed · Confidence: 85%

Normalized value: general_principles

Original evidence

Evidence 1
Remain aware of the potential privacy and data implications in using tools without due care. Some tools may store or otherwise use information provided to train their language models and you should not share anything personal or sensitive. Acknowledge use of GenAI if it is used to make a significant and unrevised contribution to a substantive or impactful piece of work. Take responsibility for ensuring any use of GenAI is conducted reasonably, lawfully, and in conjunction with relevant University policies and procedures.

Candidate claims

2 machine or needs-review claims

Candidate claims are not final policy conclusions. They preserve source URL, source snapshot hash, evidence, confidence, and review state so the record can be audited before review.

Teaching

Examiners are not permitted to upload, copy, or share student work with Generative AI tools and Large Language Models. Examiners may not use tools such as ChatGPT, Perplexity, Google Bard, or Microsoft Copilot to analyse student work or provide written feedback. Examiners may use these tools to support their own writing in documenting feedback (e.g. consolidating personal notes and rephrasing comments).

Review: Needs review · Confidence: 95%

Normalized value: examiner_restrictions

This claim is held for review because the evidence or classification needs another pass.

Original evidence

Evidence 1
Examiners across all programmes of study (including Undergraduate, Postgraduate Taught, and Postgraduate Research) are not permitted to upload, copy, or share student work with Generative AI (GenAI) tools and Large Language Models (LLMs). Examiners may not use tools, such as ChatGPT, Perplexity, Google Bard, or Microsoft Copilot, to analyse work submitted by students and provide written feedback.

Privacy

Staff must consider the purpose for which data was collected before inputting it into GenAI tools. Data must be collected, held and used for only that purpose. It is not appropriate to use personal data collected for a different purpose with a GenAI tool. Automated decision-making involving GenAI requires human involvement in the decision-making process.

Review: Needs review · Confidence: 90%

Normalized value: purpose_limitation

This claim is held for review because the evidence or classification needs another pass.

Original evidence

Evidence 1
Data must be collected, held and used for only that purpose – for example, as part of a governance process – and any use of AI must be compatible with that reason. With some exceptions (e.g. for academic research), it is not appropriate to use personal data that was collected for a different purpose to do something else with a GenAI tool.

Official sources

6 source attributions
