Change log

The University of Manchester

Source-check timeline, source snapshot hashes, claim review state, and a diff-style preview of current source-backed claim evidence.

Change summary

Current public record freshness and review state.

The University of Manchester currently has 12 source-backed claim records and 6 official source attributions. Most recent tracked change: May 10, 2026.

This tracker is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.

Claim/evidence diff preview

Diff-style preview built from current public claim/evidence records. Full old/new source diffs require paired historical snapshots.
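The 12-character evidence ids shown below (e.g. d77102fc162f) resemble truncated hex digests. A minimal sketch of one way such a snapshot hash could be derived, assuming SHA-256 over whitespace-normalized page text; the tracker's actual id scheme is not documented here, and the function name is illustrative only:

```python
import hashlib

def snapshot_id(text: str, length: int = 12) -> str:
    """Short hex id for a source snapshot (hypothetical scheme).

    Normalizes whitespace so trivial reflows of the source page do not
    change the id, then truncates a SHA-256 hex digest to `length` chars.
    """
    normalized = " ".join(text.split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()[:length]
```

A scheme like this makes two captures of the same source text compare equal by id alone, which is what pairing historical snapshots for a full old/new diff would require.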

The University of Manchester current policy evidence

Inserted lines represent current public claim and evidence records in the source-backed dataset.

+20 −0
1 # The University of Manchester AI policy record
2+ai_tool_treatment: The University of Manchester does not ban generative AI. The university's position is that when used appropriately, AI tools have the potential to enhance teaching and learning, and can support inclusivity and accessibility.
3+Evidence (en, d77102fc162f): The University position is that, when used appropriately, AI tools have the potential to enhance teaching and learning and can support inclusivity and accessibility. Output from AI systems must be treated in the same manner by staff and students as work created by another person or persons, used critically and with permitted license, and cited and acknowledged appropriately.
4+teaching: Manchester has adopted five core principles for AI use: transparency, accountability, competence, responsible use, and respect. All staff and students using or developing AI are personally responsible for adhering to these.
5+Evidence (en, d77102fc162f): The University has adopted the following principles, building on existing frameworks for academic integrity and emerging guidance on AI. All staff and students using or developing AI are personally responsible for adhering to these.
6+ai_tool_treatment: The University of Manchester is the world's first university to deploy equitable Microsoft 365 Copilot access and training across its entire community, with 65,000 colleagues and students receiving the full M365 Copilot suite.
7+Evidence (en, 50d483be513f): The University has announced a strategic collaboration with Microsoft, making Manchester the world's first university to deploy equitable Microsoft 365 Copilot access and training across its entire community. This means 65,000 colleagues and students will benefit from the full Microsoft 365 Copilot suite.
8+privacy: Copilot Chat is available to everyone with a University of Manchester account. The university has a contractual agreement with Microsoft ensuring prompts and uploaded files are private, protected by the same security and encryption as emails and OneDrive. The AI system does not learn from user prompts or data.
9+Evidence (en, 50d483be513f): Copilot Chat is an AI-powered assistant available to everyone with a University account... Your prompts and uploaded files are private – no one else can see them and they are protected by the same security and encryption techniques as your emails and the contents of your OneDrive or SharePoint Sites. Copilot Chat's AI system does not learn from your prompts or data.
10+academic_integrity: Submitting work created by generative AI as one's own, or misrepresenting understanding of the subject, is plagiarism at Manchester and will be dealt with in accordance with the University's Academic Malpractice Procedure.
11+Evidence (en, d77102fc162f): Submitting work created by Generative AI as their own, or to misrepresent their understanding of the subject, is plagiarism and will be dealt with in accordance with the University's Academic Malpractice Procedure.
12+academic_integrity: Tools to detect AI-generated content are unreliable and biased and cannot be relied on to identify academic malpractice in summative assessment at Manchester. Output from such tools cannot currently be used as evidence of malpractice.
13+Evidence (en, d77102fc162f): Tools to detect AI-generated content are unreliable and biased and cannot be relied on to identify academic malpractice in summative assessment. Output from such tools cannot currently be used as evidence of malpractice.
14+academic_integrity: Students at Manchester must cite or acknowledge the outputs of generative AI tools when they use them in their work, including quoting, summarising, paraphrasing, editing, translating, data processing, re-writing, and idea generation.
15+Evidence (en, 923e793def31): You must cite or acknowledge the outputs of generative A.I. tools when you use them in your work: quoting, summarising and paraphrasing, editing, translating, data processing, re-writing your work and the generation of ideas.
16+ai_tool_treatment: University-approved enterprise AI tools (Microsoft Copilot Chat for all, M365 Copilot via licences) must be used whenever there is a risk of inappropriate disclosure. Free-to-use public AI services should only be used with extreme caution due to data disclosure risks.
17+Evidence (en, d77102fc162f): Many generative AI tools are available as either free-to-use public services or paid-for enterprise services. Free-to-use public services often store user input, potentially disclosing the content to third parties. They should only be used with extreme caution... University-approved enterprise AI tools must be used whenever there is a risk of inappropriate disclosure.
18+teaching: With approval at School level, Manchester's default AI position may be broadened or narrowed for specific course units or assignments. Students must be given detailed information explaining the rationale and what is and is not allowed.
19+Evidence (en, d77102fc162f): With approval at School level, this position may be broadened or narrowed for specific Course Units or assignments to encourage, require, or disallow specific uses of AI. In such cases students must be given detailed information that explains the rationale for the variation from the default position, as well as what is and is not allowed.
20+teaching: Using an AI tool to correct grammar or spelling is acceptable at Manchester, but students should ensure that use of the tool does not result in substantive changes to the content or meaning of their work.
21+Evidence (en, d77102fc162f): Using an AI tool to correct grammar or spelling is acceptable, but where a student uses an AI tool for proofreading work submitted for assessment, they should ensure that use of the tool does not result in substantive changes to the content or meaning of their work.
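The preview above marks every record as inserted because no paired historical snapshot exists yet for comparison. A minimal sketch of how such a diff-style preview could be built from record lines with Python's difflib (the `diff_preview` helper is hypothetical, not the tracker's actual implementation):

```python
import difflib

def diff_preview(old_records: list[str], new_records: list[str]) -> list[str]:
    """Unified-diff-style preview of claim/evidence records.

    With an empty old snapshot, every current record appears as an
    inserted ('+') line, matching the preview shown in this change log.
    """
    lines = difflib.unified_diff(old_records, new_records, lineterm="", n=1)
    # Drop the file headers and hunk markers; keep only the +/-/context lines.
    return [ln for ln in lines if not ln.startswith(("---", "+++", "@@"))]
```

Once a historical snapshot is stored, passing it as `old_records` would yield a true old/new diff rather than an all-insertions preview.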

Claim changes

12 claim records

ai_tool_treatment

The University of Manchester does not ban generative AI. The university's position is that when used appropriately, AI tools have the potential to enhance teaching and learning, and can support inclusivity and accessibility.

Review: agent reviewed · Confidence: 98% · Evidence: 1 · Languages: en

teaching

Manchester has adopted five core principles for AI use: transparency, accountability, competence, responsible use, and respect. All staff and students using or developing AI are personally responsible for adhering to these.

Review: agent reviewed · Confidence: 98% · Evidence: 1 · Languages: en

ai_tool_treatment

The University of Manchester is the world's first university to deploy equitable Microsoft 365 Copilot access and training across its entire community, with 65,000 colleagues and students receiving the full M365 Copilot suite.

Review: agent reviewed · Confidence: 98% · Evidence: 1 · Languages: en

privacy

Copilot Chat is available to everyone with a University of Manchester account. The university has a contractual agreement with Microsoft ensuring prompts and uploaded files are private, protected by the same security and encryption as emails and OneDrive. The AI system does not learn from user prompts or data.

Review: agent reviewed · Confidence: 98% · Evidence: 1 · Languages: en

academic_integrity

Submitting work created by generative AI as one's own, or misrepresenting understanding of the subject, is plagiarism at Manchester and will be dealt with in accordance with the University's Academic Malpractice Procedure.

Review: agent reviewed · Confidence: 98% · Evidence: 1 · Languages: en

academic_integrity

Tools to detect AI-generated content are unreliable and biased and cannot be relied on to identify academic malpractice in summative assessment at Manchester. Output from such tools cannot currently be used as evidence of malpractice.

Review: agent reviewed · Confidence: 98% · Evidence: 1 · Languages: en

academic_integrity

Students at Manchester must cite or acknowledge the outputs of generative AI tools when they use them in their work, including quoting, summarising, paraphrasing, editing, translating, data processing, re-writing, and idea generation.

Review: agent reviewed · Confidence: 98% · Evidence: 1 · Languages: en

ai_tool_treatment

University-approved enterprise AI tools (Microsoft Copilot Chat for all, M365 Copilot via licences) must be used whenever there is a risk of inappropriate disclosure. Free-to-use public AI services should only be used with extreme caution due to data disclosure risks.

Review: agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

teaching

With approval at School level, Manchester's default AI position may be broadened or narrowed for specific course units or assignments. Students must be given detailed information explaining the rationale and what is and is not allowed.

Review: agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

teaching

Using an AI tool to correct grammar or spelling is acceptable at Manchester, but students should ensure that use of the tool does not result in substantive changes to the content or meaning of their work.

Review: agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

research

Manchester recognises the potential of AI to power research and innovation. Any use of AI to generate data should be completely transparent. Using AI to fabricate or manipulate data without clear declaration constitutes research misconduct.

Review: agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

ai_tool_treatment

The University recommends Microsoft Copilot for AI-related work. It is GDPR-compliant and protects University and personal data. Staff should always carefully consider whether adding personal information into an AI tool is necessary.

Review: agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

Source snapshots

6 source attributions