Change log

University of Birmingham

Source-check timeline, source snapshot hashes, claim review state, and a diff-style preview of current source-backed claim evidence.

Change summary

Current public record freshness and review state.

University of Birmingham currently has 14 source-backed claim records and 10 official source attributions. Latest tracked change date: May 13, 2026.

This tracker is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.

Claim/evidence diff preview

Diff-style preview built from current public claim/evidence records. Full old/new source diffs require paired historical snapshots.
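A preview like this can be assembled from paired snapshot texts with the standard library. This is a minimal sketch, not the tracker's actual pipeline; the function name, file label, and sample snapshot strings are illustrative assumptions.

```python
import difflib


def diff_preview(old_snapshot: str, new_snapshot: str, name: str) -> str:
    """Build a unified diff between two snapshot texts (hypothetical helper)."""
    lines = difflib.unified_diff(
        old_snapshot.splitlines(keepends=True),
        new_snapshot.splitlines(keepends=True),
        fromfile=f"{name} (old)",
        tofile=f"{name} (new)",
    )
    return "".join(lines)


# Illustrative snapshots only; not real policy text.
preview = diff_preview(
    "policy: AI not permitted\n",
    "policy: AI permitted where stated\n",
    "birmingham-ai-policy",
)
```

With only a current snapshot on hand, the "old" side is empty and every record line appears as an insertion, which is why the preview above shows only `+` lines.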

University of Birmingham current policy evidence

Inserted lines represent current public claim and evidence records in the source-backed dataset.

+20 -0
1 1 # University of Birmingham AI policy record
2+academic_integrity: For assessments and assignments, students should assume generative AI use is not permitted unless the assessment or assignment explicitly states otherwise.
3+Evidence (en, e3477cdf3dfc): Unless explicitly stated otherwise, students should assume that the use of generative AI within an assessment or assignment is not permitted.
4+security_review: The University states that generative AI detection tools are not currently allowable and that student work should not be uploaded to generative AI detection software.
5+Evidence (en, e3477cdf3dfc): Tools designed to detect the use of generative AI are currently known to produce both false positives and false negatives. At present, the use of any such tools within the University is not allowable and no student work should be uploaded to generative AI detection software.
6+academic_integrity: For AI-supported marking and feedback, the University says all decisions, outcomes, and feedback must be reviewed by academic staff before release to students, and generative AI tools alone cannot allocate marks and student grades.
7+Evidence (en, 85b28c0834c8): All decisions, outcomes and feedback must be reviewed first by an academic member of staff before they are released to students. The use of generative AI tools on their own to allocate marks and student grades is not allowed.
8+privacy: The research AI guidance says personal, confidential, or sensitive data must not be entered into AI tools without clear justification, data minimisation, and a Data Protection Impact Assessment where applicable.
9+Evidence (en, a666e661d48b): Personal, confidential, or sensitive data must not be entered into AI tools without: clear justification (including consideration of locally-hosted versus cloud-based tools), data minimisation, Data Protection Impact Assessments (where applicable).
10+ai_tool_treatment: The student guidance allows use of generative AI tools as study aids for personal learning and research, while distinguishing that from submitting AI-generated output as the student's own assessment work.
11+Evidence (en, 69cc1b8d6ceb): The University's framework does allow you to use Generative AI tools as study aids for your personal learning and in your research. You are permitted to use these tools in this context, as long as you do not submit the actual AI-generated output as your own work for assessment.
12+research: The research AI guidance applies to University of Birmingham researchers using, developing, or deploying AI, and places accountability for substantive claims, interpretations, and outputs on human researchers.
13+Evidence (en, a666e661d48b): This guidance applies to all researchers at the University of Birmingham who engage with Artificial Intelligence (AI) in the context of research, whether by using existing tools, developing new models, or deploying AI systems in real-world environments. A human researcher must be accountable for every substantive claim, interpretation, and output.
14+teaching: University of Birmingham maintains a generative AI framework for teaching, learning, assessment, and support.
15+Evidence (en, e3477cdf3dfc): This guidance provides a framework for the implementation and use of generative AI models within teaching, learning, assessment, and support at the University of Birmingham.
16+teaching: Academic staff are expected to state whether and how generative AI tools are permitted in assessments or assignments, including in course outlines, briefs, Canvas pages, and handbooks.
17+Evidence (en, e3477cdf3dfc): Within all modules, academic staff should clearly articulate if, and to what extent, the use of generative AI tools is permitted within assessments or assignments by students: This should be detailed within the course outline and all assessment and assignment briefs.
18+teaching: University-wide AI marking principles allow academic staff to use AI systems to support assessment, grading, moderation and feedback after appropriate approval, while academic staff remain responsible for academic judgements and feedback.
19+Evidence (en, 85b28c0834c8): From the 1 September 2024, and upon the appropriate approval being first received, academic staff can utilise AI systems to support the assessment, grading and moderation of student work along with the provision of individualised student feedback. Where such tools are used, academic staff remain responsible for the academic judgements made on submitted student work and for any feedback they provide for learners.
20+academic_integrity: For PGT dissertations, students should not submit AI-generated material or content unless the School specifically permits it, and permitted use must follow the University framework and be referenced.
21+Evidence (en, 0a408a47ec31): Students should not submit, within any part of their PGT dissertation, material or content that has been generated by AI tools unless their use has been specifically permitted by the School. Where the use of generative AI tools is permitted, the University's Framework for the Introduction and Use of Generative Artificial Intelligence within Teaching, Learning and Assessment must be followed and students required to appropriately reference its use.
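The 12-character identifiers attached to each evidence line (such as `e3477cdf3dfc`) look like truncated hex digests of snapshot content. A minimal sketch of deriving such an identifier, assuming SHA-256 truncated to 12 hex characters; the tracker's real hash function and input normalisation are unknown:

```python
import hashlib


def evidence_id(text: str, length: int = 12) -> str:
    """Derive a short, stable identifier from evidence text.

    Assumption: IDs like 'e3477cdf3dfc' are truncated hex digests.
    SHA-256 is used here for illustration only; the tracker may use
    a different hash or normalise the text before hashing.
    """
    return hashlib.sha256(text.encode("utf-8")).hexdigest()[:length]
```

A stable content hash lets the tracker detect when a source page changes: if the recomputed identifier differs from the stored one, the underlying evidence text has drifted and the claim needs re-review.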

Claim changes

14 claim records

academic_integrity

For assessments and assignments, students should assume generative AI use is not permitted unless the assessment or assignment explicitly states otherwise.

Review: Agent reviewed · Confidence 96% · Evidence 1 · Languages en

security_review

The University states that generative AI detection tools are not currently allowable and that student work should not be uploaded to generative AI detection software.

Review: Agent reviewed · Confidence 96% · Evidence 1 · Languages en

academic_integrity

For AI-supported marking and feedback, the University says all decisions, outcomes, and feedback must be reviewed by academic staff before release to students, and generative AI tools alone cannot allocate marks and student grades.

Review: Agent reviewed · Confidence 96% · Evidence 1 · Languages en

privacy

The research AI guidance says personal, confidential, or sensitive data must not be entered into AI tools without clear justification, data minimisation, and a Data Protection Impact Assessment where applicable.

Review: Agent reviewed · Confidence 96% · Evidence 1 · Languages en

ai_tool_treatment

The student guidance allows use of generative AI tools as study aids for personal learning and research, while distinguishing that from submitting AI-generated output as the student's own assessment work.

Review: Agent reviewed · Confidence 95% · Evidence 1 · Languages en

research

The research AI guidance applies to University of Birmingham researchers using, developing, or deploying AI, and places accountability for substantive claims, interpretations, and outputs on human researchers.

Review: Agent reviewed · Confidence 95% · Evidence 1 · Languages en

teaching

University of Birmingham maintains a generative AI framework for teaching, learning, assessment, and support.

Review: Agent reviewed · Confidence 94% · Evidence 1 · Languages en

teaching

Academic staff are expected to state whether and how generative AI tools are permitted in assessments or assignments, including in course outlines, briefs, Canvas pages, and handbooks.

Review: Agent reviewed · Confidence 94% · Evidence 1 · Languages en

teaching

University-wide AI marking principles allow academic staff to use AI systems to support assessment, grading, moderation and feedback after appropriate approval, while academic staff remain responsible for academic judgements and feedback.

Review: Agent reviewed · Confidence 94% · Evidence 1 · Languages en

academic_integrity

For PGT dissertations, students should not submit AI-generated material or content unless the School specifically permits it, and permitted use must follow the University framework and be referenced.

Review: Agent reviewed · Confidence 93% · Evidence 1 · Languages en

security_review

The research AI guidance says researchers should use University-endorsed AI tools for licensing, data protection, and information security compliance, and should justify and record use of unapproved or externally hosted tools.

Review: Agent reviewed · Confidence 93% · Evidence 1 · Languages en

procurement

The researcher tool-selection guidance points researchers to University-approved Enterprise Microsoft Copilot access and tells them to confirm the Enterprise data protection indicator before using it.

Review: Agent reviewed · Confidence 91% · Evidence 1 · Languages en

procurement

The AI tools licensing guidance tells users to review terms and conditions before registering for a new AI tool and to seek advice when data protection, accessibility, indemnity, or copyright concerns arise.

Review: Agent reviewed · Confidence 89% · Evidence 1 · Languages en

privacy

When AI-supported marking and feedback practices are used, student-facing information should explain why and how AI tools are used, how human oversight and academic judgement are maintained, and how privacy concerns about student data or work are addressed.

Review: Agent reviewed · Confidence 88% · Evidence 1 · Languages en

Source snapshots

10 source attributions