Academic integrity
Open, evidence-backed AI policy records for public reuse.
Change log
Source-check timeline, source snapshot hashes, claim review state, and a diff-style preview of current source-backed claim evidence.
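The "source snapshot hashes" above can be sketched in a few lines. This is a minimal illustration, assuming snapshots are stored as plain text and hashed with SHA-256; the tracker's actual hash algorithm and storage format are not stated on the page:

```python
import hashlib

def snapshot_hash(source_text: str) -> str:
    """Hash a fetched source page so a later fetch can be compared byte-for-byte.

    Equal hashes mean the tracked source is unchanged; a different hash
    flags the claim records for re-review.
    """
    return hashlib.sha256(source_text.encode("utf-8")).hexdigest()

# Placeholder text, not a real Durham source snapshot.
print(snapshot_hash("Durham guidance page text"))
```

Re-hashing the same snapshot always yields the same digest, which is what makes the hash usable as a change detector.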
Current public record freshness and review state.
Durham University currently has 11 source-backed claim records and 4 official source attributions. Most recent tracked change: May 14, 2026.
This tracker is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.
Diff-style preview built from current public claim/evidence records. Full old/new source diffs require paired historical snapshots.
Inserted lines represent current public claim and evidence records in the source-backed dataset.
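The "inserted lines" framing can be sketched with Python's standard difflib, which is one common way to build such a preview (the tracker's actual tooling is not stated, and the claim texts below are placeholders, not real records):

```python
import difflib

# Hypothetical old and new snapshots of the claim list.
old = ["Claim A"]
new = ["Claim A", "Claim B (new evidence record)"]

# Lines prefixed with "+" are the inserted records shown in the preview.
for line in difflib.unified_diff(old, new, lineterm=""):
    print(line)
```

With only the current snapshot available, every record appears as an inserted line, which matches the note above that full old/new diffs require paired historical snapshots.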
11 claim records
For Common Awards summative assessments, Durham guidance says students must not use generative AI to create substantive content that they present as their own creation.
Durham Common Awards AI academic-misconduct policy is scoped to students' use of generative AI in summative assessments on Common Awards modules.
For Common Awards students, Durham guidance says students must not provide generative AI with material created by others unless it is in the public domain, its use is permitted, or it is protected from training use.
The Durham Common Awards page says its AI policy requires students to paste a completed AI declaration into summative assignments before submission.
Durham Global Opportunities guidance says using generative AI in Global Opportunities applications is inadvisable and may negatively affect an application.
Durham Common Awards guidance says some limited uses of generative AI do not count as academic misconduct if work remains the student's own, AI use is acknowledged where required, and caution is demonstrated.
Durham Global Opportunities guidance says asking an AI tool to proofread in British English would be appropriate where the original text was generated by the human applicant.
Durham's public DCAD generative-AI resources page lists an internal Institutional Policy on Generative Artificial Intelligence for Learning, Teaching and Assessment dated June 2025.
DCAD assessment guidance says marking criteria should be reviewed alongside assessment redesign in light of generative AI.
DCAD assessment guidance says actively addressing generative AI in assessment briefs can promote open dialogue with students and help assessments reflect programme learning outcomes and disciplinary practices.
DCAD assessment guidance strongly recommends starting an iterative programme-level discussion about learning outcomes and generative AI, rather than ignoring shifts that are already occurring.
4 source attributions
official_guidance checked May 14, 2026
official_guidance checked May 14, 2026
official_guidance checked May 14, 2026
official_guidance checked May 14, 2026