Change log

University of Cambridge

Source-check timeline, source snapshot hashes, claim review state, and a diff-style preview of current source-backed claim evidence.

Change summary

Current public record freshness and review state.

University of Cambridge currently has 12 source-backed claim records and 6 official source attributions. Latest tracked change date: May 5, 2026.

This tracker is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.

Claim/evidence diff preview

Diff-style preview built from current public claim/evidence records. Full old/new source diffs require paired historical snapshots.
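As a rough illustration of what a full old/new source diff would look like once paired historical snapshots are available, the sketch below compares two snapshot texts with Python's standard difflib. The snapshot strings and labels are placeholders for illustration only, not part of the tracker's actual pipeline.

```python
# Minimal sketch: producing an old/new diff from two paired source snapshots.
# The snapshot texts and labels here are placeholders, not real tracker data.
import difflib

old_snapshot = "Staff may use licensed GenAI tools.\n"  # hypothetical earlier capture
new_snapshot = (
    "Staff may use licensed GenAI tools.\n"
    "Free versions must not be used for University activities.\n"
)  # hypothetical later capture

diff = difflib.unified_diff(
    old_snapshot.splitlines(keepends=True),
    new_snapshot.splitlines(keepends=True),
    fromfile="snapshot (old)",  # illustrative labels only
    tofile="snapshot (new)",
)
print("".join(diff))
```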

University of Cambridge current policy evidence

Inserted lines represent current public claim and evidence records in the source-backed dataset.

+20 -0
1  # University of Cambridge AI policy record
2+academic_integrity: A student using any unacknowledged content generated by artificial intelligence within a summative assessment as though it is their own work constitutes academic misconduct, unless explicitly stated otherwise in the assessment brief.
3+Evidence (en, 6161308c973d): A student using any unacknowledged content generated by artificial intelligence within a summative assessment as though it is their own work constitutes academic misconduct, unless explicitly stated otherwise in the assessment brief.
4+privacy: Staff must avoid inputting confidential, sensitive or personal information into GenAI tools unless warranted and only in accordance with guidance. Inputting data into a free or unlicensed GenAI tool could be considered equivalent to putting it into the public domain, signifying a potential personal data breach.
5+Evidence (en, 3ae7dc73509c): Regardless of the work being undertaken, it is recommended that staff avoid inputting confidential, sensitive or personal information into GenAI tools unless warranted and only in accordance with this guidance.
6+procurement: The University's standard licensed GenAI tools are Microsoft 365 Copilot, Google Gemini, and Google NotebookLM. Use of other licensed GenAI tools is not prohibited but must be procured in accordance with applicable procurement policy, including completion of risk assessments such as DPIAs and/or ISRAs. The public, free versions of Copilot, Gemini and NotebookLM must not be used for University activities.
7+Evidence (en, 3ae7dc73509c): The University’s standard licensed GenAI tools are Copilot, Gemini and NotebookLM, and these are the tools that should be used to process personal data, where necessary, for which the University is responsible.
8+ai_tool_treatment: All GenAI outputs must be thoroughly evaluated by a human being before they are used. Use of GenAI must be acknowledged if it makes a significant and unrevised contribution to a substantive or impactful piece of work. Staff are responsible for ensuring any use of GenAI is conducted reasonably, lawfully and in conjunction with relevant University policies.
9+Evidence (en, 3ae7dc73509c): Risk mitigation(s): Ensure that all GenAI outputs are thoroughly evaluated by a human being before they are used. Ensure use of GenAI is acknowledged if it is used to make a significant and unrevised contribution to a substantive or impactful piece of work such as the production of content for formal policies or strategic reports.
10+academic_integrity: Staff should not rely on AI detection software as it is not proven to be accurate or reliable and provides no evidence to support investigations into the use of GenAI.
11+Evidence (en, 6161308c973d): Not relying on AI detection software as it is not proven to be accurate or reliable and provides no evidence to support investigations into the use of GenAI.
12+teaching: The University of Cambridge broadly permits the appropriate use of GenAI tools and related software. Students are permitted to make appropriate use of GenAI tools to support their personal study, research and formative work. Staff are permitted to make appropriate use of GenAI tools to support their own work.
13+Evidence (en, ed37b015a563): The University of Cambridge broadly permits the appropriate use of GenAI tools and related software, however, due to the variety of disciplines and research areas present at the institution, there is need for more nuanced guidance at local levels.
14+privacy: Data input into the University's licensed versions of Copilot, Gemini and NotebookLM is not used to train those tools. Inputting data into free or unlicensed GenAI tools could result in data being used for training, which may not be a lawful use of personal data.
15+Evidence (en, 3ae7dc73509c): Information input into GenAI tools is also often used to train those tools, which may not be a lawful use of personal data – especially if that data cannot be retrieved or deleted, for example from an AI neural network. Data input into the University’s licensed versions of Copilot, Gemini and NotebookLM is not used to train those tools.
16+source_status: Cambridge provides an AI Policy Framework for triposes, departments, faculties, and colleges to determine their own local allowance for the use of AI, rather than a single university-wide AI policy. The framework is adapted from a policy proposal by Dr Claire Benn and Dr John Burden from the Leverhulme Centre for the Future of Intelligence.
17+Evidence (en, 302c375ca4d8): Given the wide variety of subjects and teaching and learning styles at the University of Cambridge, it would be difficult to provide a policy that accurately represents the multitude of ambitions, considerations, and feelings surrounding the use of AI in education. We instead will be providing a framework for triposes, departments, faculties, and colleges, to determine their own local allowance and rational for the use of AI within their own contexts.
18+security_review: The University's Information Security Risk Assessment (ISRA) and Data Protection Impact Assessment (DPIA) processes remain the relevant risk assessment processes for GenAI use. The Acceptable Use Policy (AUP) continues to apply, including compliance monitoring and enforcement provisions, when using GenAI tools.
19+Evidence (en, 3ae7dc73509c): In the limited circumstances where a formal risk assessment is required, the University Data Protection Impact Assessment (DPIA) and Information Security Risk Assessment (ISRA) processes remain the relevant risk assessments processes to follow.
20+teaching: When using GenAI tools, users should: remain aware of privacy and data implications and not share anything personal or sensitive; understand ethical implications as tools often have limited attribution; acknowledge use of GenAI if it makes a significant contribution to substantive work; and take responsibility for ensuring use is conducted reasonably, lawfully, and in conjunction with relevant University policies.
21+Evidence (en, ed37b015a563): Remain aware of the potential privacy and data implications in using tools without due care. Some tools may store or otherwise use information provided to train their language models and you should not share anything personal or sensitive. Acknowledge use of GenAI if it is used to make a significant and unrevised contribution to a substantive or impactful piece of work. Take responsibility for ensuring any use of GenAI is conducted reasonably, lawfully, and in conjunction with relevant University policies and procedures.

Claim changes

12 claim records
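For readers parsing the review lines that follow, one plausible shape for a claim record is sketched below. The field names are assumptions chosen to mirror the displayed metadata (review state, confidence, evidence count, languages), not the tracker's actual schema.

```python
# Illustrative claim-record shape mirroring the metadata shown in the review lines.
# Field names are assumptions for readability, not the tracker's real schema.
from dataclasses import dataclass, field

@dataclass
class ClaimRecord:
    topic: str                      # e.g. "academic_integrity", "privacy"
    claim: str                      # normalised claim text
    review_state: str               # "Agent reviewed" or "Needs review"
    confidence: float               # 0.0-1.0, displayed as a percentage
    evidence_count: int             # number of linked evidence quotes
    languages: list[str] = field(default_factory=lambda: ["en"])

example = ClaimRecord(
    topic="academic_integrity",
    claim="Unacknowledged AI content in a summative assessment constitutes misconduct.",
    review_state="Agent reviewed",
    confidence=0.98,
    evidence_count=1,
)
```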

academic_integrity

A student using any unacknowledged content generated by artificial intelligence within a summative assessment as though it is their own work constitutes academic misconduct, unless explicitly stated otherwise in the assessment brief.

Review: Agent reviewed · Confidence: 98% · Evidence: 1 · Languages: en

privacy

Staff must avoid inputting confidential, sensitive or personal information into GenAI tools unless warranted and only in accordance with guidance. Inputting data into a free or unlicensed GenAI tool could be considered equivalent to putting it into the public domain, signifying a potential personal data breach.

Review: Agent reviewed · Confidence: 95% · Evidence: 2 · Languages: en

procurement

The University's standard licensed GenAI tools are Microsoft 365 Copilot, Google Gemini, and Google NotebookLM. Use of other licensed GenAI tools is not prohibited but must be procured in accordance with applicable procurement policy, including completion of risk assessments such as DPIAs and/or ISRAs. The public, free versions of Copilot, Gemini and NotebookLM must not be used for University activities.

Review: Agent reviewed · Confidence: 95% · Evidence: 2 · Languages: en

ai_tool_treatment

All GenAI outputs must be thoroughly evaluated by a human being before they are used. Use of GenAI must be acknowledged if it makes a significant and unrevised contribution to a substantive or impactful piece of work. Staff are responsible for ensuring any use of GenAI is conducted reasonably, lawfully and in conjunction with relevant University policies.

Review: Agent reviewed · Confidence: 95% · Evidence: 2 · Languages: en

academic_integrity

Staff should not rely on AI detection software as it is not proven to be accurate or reliable and provides no evidence to support investigations into the use of GenAI.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

teaching

The University of Cambridge broadly permits the appropriate use of GenAI tools and related software. Students are permitted to make appropriate use of GenAI tools to support their personal study, research and formative work. Staff are permitted to make appropriate use of GenAI tools to support their own work.

Review: Agent reviewed · Confidence: 95% · Evidence: 2 · Languages: en

privacy

Data input into the University's licensed versions of Copilot, Gemini and NotebookLM is not used to train those tools. Inputting data into free or unlicensed GenAI tools could result in data being used for training, which may not be a lawful use of personal data.

Review: Agent reviewed · Confidence: 92% · Evidence: 1 · Languages: en

source_status

Cambridge provides an AI Policy Framework for triposes, departments, faculties, and colleges to determine their own local allowance for the use of AI, rather than a single university-wide AI policy. The framework is adapted from a policy proposal by Dr Claire Benn and Dr John Burden from the Leverhulme Centre for the Future of Intelligence.

Review: Agent reviewed · Confidence: 92% · Evidence: 1 · Languages: en

security_review

The University's Information Security Risk Assessment (ISRA) and Data Protection Impact Assessment (DPIA) processes remain the relevant risk assessment processes for GenAI use. The Acceptable Use Policy (AUP) continues to apply, including compliance monitoring and enforcement provisions, when using GenAI tools.

Review: Agent reviewed · Confidence: 90% · Evidence: 2 · Languages: en

teaching

When using GenAI tools, users should: remain aware of privacy and data implications and not share anything personal or sensitive; understand ethical implications as tools often have limited attribution; acknowledge use of GenAI if it makes a significant contribution to substantive work; and take responsibility for ensuring use is conducted reasonably, lawfully, and in conjunction with relevant University policies.

Review: Agent reviewed · Confidence: 85% · Evidence: 1 · Languages: en

teaching

Examiners are not permitted to upload, copy, or share student work with Generative AI tools and Large Language Models. Examiners may not use tools such as ChatGPT, Perplexity, Google Bard, or Microsoft Copilot to analyse student work or provide written feedback. Examiners may use these tools to support their own writing in documenting feedback (e.g. consolidating personal notes and rephrasing comments).

Review: Needs review · Confidence: 95% · Evidence: 1 · Languages: en

privacy

Staff must consider the purpose for which data was collected before inputting it into GenAI tools. Data must be collected, held and used for only that purpose. It is not appropriate to use personal data collected for a different purpose with a GenAI tool. Automated decision-making involving GenAI requires human involvement in the decision-making process.

Review: Needs review · Confidence: 90% · Evidence: 1 · Languages: en

Source snapshots

6 source attributions
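The 12-character identifiers attached to evidence quotes above (for example 6161308c973d) read as truncated content hashes of the captured source pages. That is an assumption about the tracker rather than a documented property, but the sketch below shows how such a short identifier could be derived from snapshot text with a standard SHA-256 digest.

```python
# Minimal sketch: deriving a short snapshot identifier from captured source text.
# Treating the 12-character IDs as truncated SHA-256 digests is an assumption,
# not a documented property of this tracker.
import hashlib

def snapshot_id(source_text: str, length: int = 12) -> str:
    """Return a short hex identifier for a source snapshot."""
    digest = hashlib.sha256(source_text.encode("utf-8")).hexdigest()
    return digest[:length]

print(snapshot_id("example captured policy page text"))  # prints 12 hex characters
```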