Durham, United Kingdom

Durham University

Durham University is listed at QS 2026 rank =94. The record contains 11 source-backed AI policy claims drawn from 4 official source attributions, and preserves original-language evidence snippets, source URLs, snapshot hashes, confidence values, and review state.

Citation-ready summary

As of its last check, the University AI Policy Tracker lists Durham University as an agent-reviewed AI policy record, last checked on May 14, 2026 and last changed on May 14, 2026. The record contains 11 source-backed claims, all 11 of them reviewed, from 4 official source attributions. Original-language evidence snippets and source URLs remain canonical, with public JSON available at https://eduaipolicy.org/api/public/v1/universities/durham-university.json. The entity-level confidence is 99%. This tracker is not legal advice, not academic integrity advice, and not an official university statement unless the linked source is the university's own official page.

Claim coverage: 11 reviewed · Source language: en-GB · Public JSON: /api/public/v1/universities/durham-university.json
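The public JSON record can be consumed programmatically. The sketch below parses a sample record and counts reviewed claims; the field names (`slug`, `claims`, `review`) are illustrative assumptions, not confirmed by the v1 contract, so check the actual payload before reuse.

```python
import json

# Illustrative sample mirroring the fields this page describes;
# actual field names in the v1 public contract may differ.
sample = """
{
  "slug": "durham-university",
  "qs_2026_rank": "=94",
  "confidence": 0.99,
  "claims": [
    {"category": "Academic Integrity", "review": "agent_reviewed", "confidence": 0.99},
    {"category": "Source Status", "review": "agent_reviewed", "confidence": 0.98}
  ]
}
"""

record = json.loads(sample)
# Filter down to claims marked as agent-reviewed.
reviewed = [c for c in record["claims"] if c["review"] == "agent_reviewed"]
print(record["slug"], len(reviewed))
```

In practice the same parsing would be applied to the response body fetched from the public JSON path above.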

Policy signals in this record

  • Evidence includes Academic integrity claims.
  • Evidence includes Source status claims.
  • Evidence includes AI tool treatment claims.
  • Evidence includes Teaching claims.
  • No specific AI service name is highlighted by the current public claim text.
  • Disclosure, acknowledgment, citation, or attribution language appears in the public claim text.
  • Teaching, assessment, coursework, or syllabus-related language appears in the public claim text.
Policy status: Reviewed evidence-backed record · Review: Agent reviewed · Evidence-backed claims: 11 · Reviewed: 11 · Candidate: 0 · Official sources: 4

This reference record summarizes visible public data only. Official sources and original-language evidence remain canonical; confidence is separate from review state.

This page is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.

Policy profile

Deterministic source-backed dimensions derived from this record's public claims.

Coverage score: 85/100 · Coverage label: broad public coverage · Review: Machine candidate · Analysis confidence: 82%

Policy profile rows are machine-candidate derived metadata. They are not final policy conclusions; inspect the linked claim evidence before reuse.

Analysis page-quality metadata is available at /api/public/v1/analysis/page-quality.json.
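The page-quality endpoint lives under the same `/api/public/v1/` prefix as the university record. A minimal sketch of assembling full request URLs, assuming the https://eduaipolicy.org host shown earlier on this page (the actual fetch is omitted):

```python
from urllib.parse import urljoin

# Base host as shown in the citation-ready summary on this page.
BASE = "https://eduaipolicy.org"

# Endpoint paths exactly as listed in this record.
paths = [
    "/api/public/v1/universities/durham-university.json",
    "/api/public/v1/analysis/page-quality.json",
]

# urljoin resolves each absolute path against the base host.
urls = [urljoin(BASE, p) for p in paths]
print(urls[1])
```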

Privacy and data entry

No source-backed public claim about privacy or data-entry restrictions is present in this profile.

The current public tracker record does not contain claim evidence about personal, confidential, sensitive, regulated, or student data entry into AI tools.

Not Mentioned · Machine candidate · Confidence: 0% · Evidence: 0 · Sources: 0

Research guidance

No source-backed public claim about research AI use is present in this profile.

The current public tracker record does not contain claim evidence about research use, publication ethics, research data, grants, or human-subjects compliance.

Not Mentioned · Machine candidate · Confidence: 0% · Evidence: 0 · Sources: 0

Security and procurement

No source-backed public claim about AI security review or procurement is present in this profile.

The current public tracker record does not contain claim evidence about security review, procurement, vendor approval, risk assessment, authentication, SSO, or enterprise licensing.

Not Mentioned · Machine candidate · Confidence: 0% · Evidence: 0 · Sources: 0

Coverage score measures breadth of public, source-backed coverage only. It is not a policy quality, strictness, legal adequacy, safety, or compliance score.

Evidence-backed claims

11 reviewed evidence-backed public claims

Academic Integrity

For Common Awards summative assessments, Durham guidance says students must not use generative AI to create substantive content that they present as their own creation.

Review: Agent reviewed · Confidence: 99%

Original evidence

Evidence 1
You must not use generative AI to create substantive content for your assessed work that you then present as if it were your own creation.

Source Status

Durham Common Awards AI academic-misconduct policy is scoped to students' use of generative AI in summative assessments on Common Awards modules.

Review: Agent reviewed · Confidence: 98%

Original evidence

Evidence 1
It applies to students' use of generative AI in summative assessments on Common Awards modules. Its only purpose is to define which uses of generative AI count as academic misconduct in that context.

Academic Integrity

For Common Awards students, Durham guidance says students must not provide generative AI with others' material unless it is public-domain material, permitted material, or protected from training use.

Review: Agent reviewed · Confidence: 98%

Original evidence

Evidence 1
You must not provide a generative AI with any text or other material produced by others, unless that material is in the public domain, or you have explicit permission to do so, or you have confirmation that the content will not be used to train the AI in question.

Academic Integrity

The Durham Common Awards page says its AI policy requires students to paste a completed AI declaration into summative assignments before submission.

Review: Agent reviewed · Confidence: 97%

Original evidence

Evidence 1
The policy requires students to copy and paste a completed AI declaration into summative assignments before submitting them.

AI Tool Treatment

Durham Global Opportunities guidance says using generative AI in Global Opportunities applications is unadvisable and may negatively affect an application.

Review: Agent reviewed · Confidence: 97%

Original evidence

Evidence 1
It is unadvisable to use generative AI and it may negatively affect your application.

AI Tool Treatment

Durham Common Awards guidance says some limited uses of generative AI do not count as academic misconduct if work remains the student's own, AI use is acknowledged where required, and caution is demonstrated.

Review: Agent reviewed · Confidence: 96%

Original evidence

Evidence 1
In general, however, other limited uses of generative AI to facilitate your work do not count as academic misconduct, provided that the resulting work still reflects your own engagement with your sources, your own understanding, and your own reasoning and judgments; you clearly acknowledge any use of AI that has substantially informed the content or presentation of your work; and you demonstrate appropriate caution about the limitations of the tools you use.

AI Tool Treatment

Durham Global Opportunities guidance says asking an AI tool to proofread in British English would be appropriate where the original text was generated by the human applicant.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
Asking an AI tool to 'proof read in British English' would be appropriate use (just as asking a friend or relative to proof read would be) as the original 'generation' of the text was by the human applicant.

Source Status

Durham's public DCAD generative-AI resources page lists an internal Institutional Policy on Generative Artificial Intelligence for Learning, Teaching and Assessment dated June 2025.

Review: Agent reviewed · Confidence: 94%

Original evidence

Evidence 1
Institutional Policy on Generative Artificial Intelligence for Learning, Teaching and Assessment, June 2025

Teaching

DCAD assessment guidance says marking criteria should be reviewed alongside assessment redesign in light of generative AI.

Review: Agent reviewed · Confidence: 94%

Original evidence

Evidence 1
It is also evident that, whether learning outcomes change significantly or not, marking criteria should be reviewed alongside the assessment redesign process.

Teaching

DCAD assessment guidance says actively addressing generative AI in assessment briefs can promote open dialogue with students and help assessments reflect programme learning outcomes and disciplinary practices.

Review: Agent reviewed · Confidence: 94%

Original evidence

Evidence 1
Actively addressing genAI, whether implicitly (by designing assessments that focus on human abilities and development) or explicitly (by including genAI in assessment briefs), helps to promote open dialogue about these tools with students and to ensure that assessments reflect programme learning outcomes and disciplinary practices.

Teaching

DCAD assessment guidance says starting an iterative programme-level discussion about learning outcomes and generative AI is highly recommended rather than ignoring already occurring shifts.

Review: Agent reviewed · Confidence: 93%

Original evidence

Evidence 1
However, starting this discussion as an iterative process amidst uncertainty is highly recommended versus ignoring the many shifts that have already occurred.

Candidate claims

0 machine or needs-review claims

Candidate claims are not final policy conclusions. They preserve source URL, source snapshot hash, evidence, confidence, and review state so the record can be audited before review.

Official sources

4 source attributions

Change log

Source-check timeline and diff-style claim/evidence preview.

View the public change record for this university, including source snapshot hashes, claim review states, and a diff-style preview of current source-backed evidence.

Last checked: May 14, 2026 · Last changed: May 14, 2026 · Open change log

Corrections and missing evidence

Corrections create review tasks and do not directly change this public record.

If an official source is missing, stale, moved, blocked, or incorrectly summarized, submit a source URL, policy change report, or institution correction for review. Corrections must preserve source URLs, source language, original evidence, review state, and audit history.

Back to universities