Birmingham, United Kingdom

University of Birmingham

University of Birmingham is listed at QS 2026 rank 76 and has 14 source-backed AI policy claim records from 10 official source attributions. The public record preserves original-language evidence snippets, source URLs, snapshot hashes, confidence, and review state.

Short answer

v1 public contract

Citation-ready summary

As of this public record, University AI Policy Tracker lists University of Birmingham as an agent-reviewed AI policy record, last checked on May 13, 2026 and last changed on May 13, 2026. The record contains 14 source-backed claims, all of them reviewed, from 10 official source attributions. Original-language evidence snippets and source URLs remain canonical, with public JSON available at https://eduaipolicy.org/api/public/v1/universities/university-of-birmingham.json. The entity-level confidence is 96%. This tracker is not legal advice, not academic integrity advice, and not an official university statement unless the linked source is the university's own official page.

Claim coverage: 14 reviewed · Source language: en · Public JSON: /api/public/v1/universities/university-of-birmingham.json
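As a minimal sketch of consuming the public JSON record above (the field names `claims`, `review_state`, and `confidence` are assumptions about the v1 payload shape, not a published schema), a fetched record could be summarized like this:

```python
import json

# Assumed shape of the v1 public record; the real payload at
# /api/public/v1/universities/<slug>.json may use different field names.
SAMPLE_RECORD = json.loads("""
{
  "slug": "university-of-birmingham",
  "confidence": 0.96,
  "claims": [
    {"category": "academic_integrity", "review_state": "agent_reviewed", "confidence": 0.96},
    {"category": "security_review", "review_state": "machine_candidate", "confidence": 0.80}
  ]
}
""")

def summarize_record(record):
    """Count reviewed vs. candidate claims in a record dict."""
    reviewed = sum(1 for c in record["claims"]
                   if c["review_state"] == "agent_reviewed")
    return {
        "total": len(record["claims"]),
        "reviewed": reviewed,
        "candidate": len(record["claims"]) - reviewed,
    }

summary = summarize_record(SAMPLE_RECORD)
```

Against the live endpoint, the same summary would be produced from the body returned by an HTTP GET of the public JSON URL.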

Policy signals in this record

  • Evidence includes Academic integrity claims.
  • Evidence includes Security review claims.
  • Evidence includes Privacy claims.
  • Evidence includes AI tool treatment claims.
  • Evidence includes Research claims.
  • Evidence includes Teaching claims.
  • Evidence includes Procurement claims.
  • Named AI services detected in public claims: Microsoft Copilot.
Policy status: Reviewed evidence-backed record · Review: Agent reviewed · Evidence-backed claims: 14 · Reviewed: 14 · Candidate: 0 · Official sources: 10

This reference record summarizes visible public data only. Official sources and original-language evidence remain canonical; confidence is separate from review state.

This page is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.

Policy profile

Deterministic source-backed dimensions derived from this record's public claims.

Coverage score: 85/100 · Coverage label: broad public coverage · Review: Machine candidate · Analysis confidence: 80%

Policy profile rows are machine-candidate derived metadata. They are not final policy conclusions; inspect the linked claim evidence before reuse.

Analysis page-quality metadata is available at /api/public/v1/analysis/page-quality.json.

AI disclosure

No source-backed public claim about AI disclosure or acknowledgement is present in this profile.

The current public tracker record does not contain claim evidence about disclosing, acknowledging, citing, or declaring AI use.

Not Mentioned · Machine candidate · Confidence: 0% · Evidence: 0 · Sources: 0

Coverage score measures breadth of public, source-backed coverage only. It is not a policy quality, strictness, legal adequacy, safety, or compliance score.

Evidence-backed claims

14 reviewed evidence-backed public claims

Academic Integrity

For assessments and assignments, students should assume generative AI use is not permitted unless the assessment or assignment explicitly states otherwise.

Review: Agent reviewed · Confidence: 96%

Normalized value: assessment_genai_not_permitted_unless_explicit

Original evidence

Evidence 1
Unless explicitly stated otherwise, students should assume that the use of generative AI within an assessment or assignment is not permitted.

Security Review

The University states that generative AI detection tools are not currently allowable and that student work should not be uploaded to generative AI detection software.

Review: Agent reviewed · Confidence: 96%

Normalized value: ai_detection_tools_not_allowable_currently

Original evidence

Evidence 1
Tools designed to detect the use of generative AI are currently known to produce both false positives and false negatives. At present, the use of any such tools within the University is not allowable and no student work should be uploaded to generative AI detection software.

Academic Integrity

For AI-supported marking and feedback, the University says all decisions, outcomes, and feedback must be reviewed by academic staff before release to students, and generative AI tools alone cannot allocate marks and student grades.

Review: Agent reviewed · Confidence: 96%

Normalized value: ai_alone_not_allowed_to_allocate_grades

Original evidence

Evidence 1
All decisions, outcomes and feedback must be reviewed first by an academic member of staff before they are released to students. The use of generative AI tools on their own to allocate marks and student grades is not allowed.

Privacy

The research AI guidance says personal, confidential, or sensitive data must not be entered into AI tools without clear justification, data minimisation, and a Data Protection Impact Assessment where applicable.

Review: Agent reviewed · Confidence: 96%

Normalized value: research_ai_personal_confidential_sensitive_data_controls

Original evidence

Evidence 1
Personal, confidential, or sensitive data must not be entered into AI tools without: clear justification (including consideration of locally-hosted versus cloud-based tools), data minimisation, Data Protection Impact Assessments (where applicable).

AI Tool Treatment

The student guidance allows use of generative AI tools as study aids for personal learning and research, while distinguishing that from submitting AI-generated output as the student's own assessment work.

Review: Agent reviewed · Confidence: 95%

Normalized value: study_aids_allowed_but_not_submission_as_own_work

Original evidence

Evidence 1
The University's framework does allow you to use Generative AI tools as study aids for your personal learning and in your research. You are permitted to use these tools in this context, as long as you do not submit the actual AI-generated output as your own work for assessment.

Research

The research AI guidance applies to University of Birmingham researchers using, developing, or deploying AI, and places accountability for substantive claims, interpretations, and outputs on human researchers.

Review: Agent reviewed · Confidence: 95%

Normalized value: research_ai_guidance_scope_and_human_accountability

Original evidence

Evidence 1
This guidance applies to all researchers at the University of Birmingham who engage with Artificial Intelligence (AI) in the context of research, whether by using existing tools, developing new models, or deploying AI systems in real-world environments. A human researcher must be accountable for every substantive claim, interpretation, and output.

Teaching

University of Birmingham maintains a generative AI framework for teaching, learning, assessment, and support.

Review: Agent reviewed · Confidence: 94%

Normalized value: university_framework_for_teaching_learning_assessment_support

Original evidence

Evidence 1
This guidance provides a framework for the implementation and use of generative AI models within teaching, learning, assessment, and support at the University of Birmingham.

Teaching

Academic staff are expected to state whether and how generative AI tools are permitted in assessments or assignments, including in course outlines, briefs, Canvas pages, and handbooks.

Review: Agent reviewed · Confidence: 94%

Normalized value: assessment_permissions_should_be_clearly_communicated

Original evidence

Evidence 1
Within all modules, academic staff should clearly articulate if, and to what extent, the use of generative AI tools is permitted within assessments or assignments by students: This should be detailed within the course outline and all assessment and assignment briefs.

Teaching

University-wide AI marking principles allow academic staff to use AI systems to support assessment, grading, moderation and feedback after appropriate approval, while academic staff remain responsible for academic judgements and feedback.

Review: Agent reviewed · Confidence: 94%

Normalized value: ai_supported_grading_requires_approval_and_academic_responsibility

Original evidence

Evidence 1
From the 1 September 2024, and upon the appropriate approval being first received, academic staff can utilise AI systems to support the assessment, grading and moderation of student work along with the provision of individualised student feedback. Where such tools are used, academic staff remain responsible for the academic judgements made on submitted student work and for any feedback they provide for learners.

Academic Integrity

For PGT dissertations, students should not submit AI-generated material or content unless the School specifically permits it, and permitted use must follow the University framework and be referenced.

Review: Agent reviewed · Confidence: 93%

Normalized value: pgt_dissertation_ai_generated_content_requires_school_permission

Original evidence

Evidence 1
Students should not submit, within any part of their PGT dissertation, material or content that has been generated by AI tools unless their use has been specifically permitted by the School. Where the use of generative AI tools is permitted, the University's Framework for the Introduction and Use of Generative Artificial Intelligence within Teaching, Learning and Assessment must be followed and students required to appropriately reference its use.

Security Review

The research AI guidance says researchers should use University-endorsed AI tools for licensing, data protection, and information security compliance, and should justify and record use of unapproved or externally hosted tools.

Review: Agent reviewed · Confidence: 93%

Normalized value: researchers_should_use_university_endorsed_ai_tools

Original evidence

Evidence 1
Researchers should use only University-endorsed AI tools to ensure compliance with licensing, data protection, and information security requirements. Use of unapproved or externally hosted tools should be justified and recorded in project documentation such as Data Management Plans or ethics submissions.

Procurement

The researcher tool-selection guidance points researchers to University-approved Enterprise Microsoft Copilot access and tells them to confirm the Enterprise data protection indicator before using it.

Review: Agent reviewed · Confidence: 91%

Normalized value: enterprise_copilot_access_and_data_protection_indicator

Original evidence

Evidence 1
The University provides approved access to the Enterprise version of Microsoft Co-Pilot which you can access via your University account. Ensure the green shield labelled Enterprise data protection applies to this chat is showing so that you know you are safely using the Enterprise version.

Procurement

The AI tools licensing guidance tells users to review terms and conditions before registering for a new AI tool and to seek advice when data protection, accessibility, indemnity, or copyright concerns arise.

Review: Agent reviewed · Confidence: 89%

Normalized value: ai_tool_terms_review_before_registration

Original evidence

Evidence 1
Before registering for a new tool, it is crucial to review the Terms and Conditions (licence), which form the legal agreement between you and the supplier. If issues arise, such as non-compliance with accessibility or data protection standards, indemnification requirements, or copyright concerns, seek further guidance from your local licensing or IT procurement teams before registering for the service.

Privacy

When AI-supported marking and feedback practices are used, student-facing information should explain why and how AI tools are used, human oversight, academic judgement, and privacy concerns about student data or work.

Review: Agent reviewed · Confidence: 88%

Normalized value: student_notice_should_cover_ai_marking_privacy_and_human_oversight

Original evidence

Evidence 1
Where AI supported marking and feedback practices and used, clear information should be provided to students on their use that details why, and how, AI tools are being used; emphasises that there remains human oversight; and addresses any privacy concerns that students might have about their data or work being uploaded to AI tools.

Candidate claims

0 machine or needs-review claims

Candidate claims are not final policy conclusions. They preserve source URL, source snapshot hash, evidence, confidence, and review state so the record can be audited before review.

Official sources

10 source attributions

Change log

Source-check timeline and diff-style claim/evidence preview.

View the public change record for this university, including source snapshot hashes, claim review states, and a diff-style preview of current source-backed evidence.

Last checked: May 13, 2026 · Last changed: May 13, 2026 · Open change log
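Snapshot hashes in the change record can be spot-checked against a freshly fetched source body. This sketch assumes SHA-256 over the raw response bytes, which is an assumption about the tracker's hashing scheme rather than a documented convention:

```python
import hashlib

def snapshot_hash(body: bytes) -> str:
    """Hex digest of a source snapshot body.

    SHA-256 over the raw bytes is an assumed convention; the tracker's
    actual snapshot-hash scheme may differ.
    """
    return hashlib.sha256(body).hexdigest()

digest = snapshot_hash(b"example source snapshot")
```

A mismatch between a recomputed digest and the recorded snapshot hash would indicate the source page has changed since the last check, not necessarily an error in the record.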

Corrections and missing evidence

Corrections create review tasks and do not directly change this public record.

If an official source is missing, stale, moved, blocked, or incorrectly summarized, submit a source URL, policy change report, or institution correction for review. Corrections must preserve source URLs, source language, original evidence, review state, and audit history.
