Change log

University of Sussex

Source-check timeline, source snapshot hashes, claim review state, and a diff-style preview of current source-backed claim evidence.

Change summary

Current public record freshness and review state.

University of Sussex currently has 6 source-backed claim records and 5 official source attributions. Most recent tracked change: May 16, 2026.

This tracker is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.

Claim/evidence diff preview

Diff-style preview built from current public claim/evidence records. Full old/new source diffs require paired historical snapshots.

University of Sussex current policy evidence

Inserted lines represent current public claim and evidence records in the source-backed dataset.

+12 −0
  # University of Sussex AI policy record
+ academic_integrity: University of Sussex staff guidance says module convenors determine and communicate AI-use permissions via module Canvas sites and choose one of three assessment-level permissions: AI use prohibited, AI in an assistive role, or AI with an integral role.
+ Evidence (en, 53a19cdbabe2): It is up to module convenors to determine and communicate AI use permissions via module Canvas sites. For each assessment, choose one of three permitted levels of AI use: AI use is prohibited; AI can be used in an assistive role; AI has an integral role.
+ academic_integrity: University of Sussex student misconduct guidance includes unauthorized or inappropriate use of digital technologies including AI, and gives examples including AI use where prohibited and submitting permitted AI-generated work without required acknowledgement.
+ Evidence (en, 46969a5b3b29): Misuse of digital technologies includes artificial intelligence. Examples include: using AI or other digital tools, such as translation tools in an assessment where their use has been prohibited; submitting AI-generated work, where this is permitted, without required acknowledgment.
+ academic_integrity: University of Sussex staff guidance provides a prohibited-use assessment statement saying generative AI tools must not be used to generate materials or content for that assessment, while allowing other assistive technology for registered reasonable adjustments.
+ Evidence (en, 53a19cdbabe2): Generative AI tools must not be used to generate any materials or content for this assessment. The purpose and format of this assessment makes it inappropriate or impractical for AI tools to be used. Students registered with the Disability Advice team and in receipt of reasonable adjustments are still permitted to use other assistive technology as required.
+ privacy: University of Sussex staff guidance says staff and students can access a data-protected Microsoft Copilot with Sussex credentials, and says university data such as learning and teaching content should be used in Copilot rather than less protected AI tools.
+ Evidence (en, e1589674e62e): Being logged into Copilot with your Sussex account means that your data is protected. Your chat results won't be saved or made available to Microsoft, meaning any data isn't passed outside of the organisation. This is in contrast to both the free version of Copilot and other AI tools which may not be protecting your data. If you are using university data in an AI tool, such as learning and teaching content, then ensure you use Copilot.
+ teaching: University of Sussex AI principles say decisions on whether AI use is permitted, not permitted, optional, or required in learning or assessment will be made explicit.
+ Evidence (en, 83f0e9da6cad): Whether AI use is permitted/not permitted, optional or required in learning or assessment will be made explicit.
+ academic_integrity: University of Sussex staff guidance suggests telling students that AI detection tools are fallible and cannot be relied upon.
+ Evidence (en, 68f3a05a00fe): Acknowledge that AI detection tools already exist, many much more sophisticated ones are in development and, predictably, a web-based sub-culture of ways to fool the detection systems is also growing. Explain that all are fallible and cannot be relied upon.
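The preview above notes that full old/new source diffs require paired historical snapshots. A minimal sketch of how such a diff could be regenerated, assuming snapshots are stored as plain text (the snapshot contents and labels here are illustrative, not taken from the tracker's actual records):

```python
import difflib

# Hypothetical paired snapshots of one evidence line.
old_snapshot = ["AI use is discouraged in assessments.\n"]
new_snapshot = ["AI use is prohibited in assessments.\n"]

# unified_diff yields standard ---/+++/@@ headers plus -/+ change lines,
# the same shape as the insertion-only preview shown above.
diff_lines = list(
    difflib.unified_diff(
        old_snapshot,
        new_snapshot,
        fromfile="snapshot_old",
        tofile="snapshot_new",
    )
)

for line in diff_lines:
    print(line, end="")
```

Because the current dataset holds only one snapshot per source, the tracker can render insertions ("+12 −0") but not true removals until a second, historical snapshot exists to diff against.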

Claim changes

6 claim records

academic_integrity

University of Sussex staff guidance suggests telling students that AI detection tools are fallible and cannot be relied upon.

Review: Agent reviewed · Confidence: 89% · Evidence: 1 · Languages: en

teaching

University of Sussex AI principles say decisions on whether AI use is permitted, not permitted, optional, or required in learning or assessment will be made explicit.

Review: Agent reviewed · Confidence: 90% · Evidence: 1 · Languages: en

privacy

University of Sussex staff guidance says staff and students can access a data-protected Microsoft Copilot with Sussex credentials, and says university data such as learning and teaching content should be used in Copilot rather than less protected AI tools.

Review: Agent reviewed · Confidence: 91% · Evidence: 1 · Languages: en

academic_integrity

University of Sussex student misconduct guidance includes unauthorized or inappropriate use of digital technologies including AI, and gives examples including AI use where prohibited and submitting permitted AI-generated work without required acknowledgement.

Review: Agent reviewed · Confidence: 94% · Evidence: 1 · Languages: en

academic_integrity

University of Sussex staff guidance provides a prohibited-use assessment statement saying generative AI tools must not be used to generate materials or content for that assessment, while allowing other assistive technology for registered reasonable adjustments.

Review: Agent reviewed · Confidence: 92% · Evidence: 1 · Languages: en

academic_integrity

University of Sussex staff guidance says module convenors determine and communicate AI-use permissions via module Canvas sites and choose one of three assessment-level permissions: AI use prohibited, AI in an assistive role, or AI with an integral role.

Review: Agent reviewed · Confidence: 94% · Evidence: 1 · Languages: en

Source snapshots

5 source attributions
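Each evidence line above carries a 12-character identifier (e.g. 53a19cdbabe2). One plausible scheme for such snapshot hashes, assuming a truncated SHA-256 digest over the snapshot text (the function name and the truncation scheme are assumptions for illustration, not documented tracker behavior):

```python
import hashlib

def snapshot_hash(snapshot_text: str) -> str:
    """Return a short, stable identifier for a source snapshot.

    Assumed scheme: hex SHA-256 of the UTF-8 text, truncated to
    12 characters to match the identifiers shown in the evidence lines.
    """
    digest = hashlib.sha256(snapshot_text.encode("utf-8")).hexdigest()
    return digest[:12]

# Any change to the snapshot text yields a different identifier,
# which is what lets the tracker detect that a source page changed.
print(snapshot_hash("Generative AI tools must not be used..."))
```

A content-derived hash like this would let the tracker flag a changed source page without storing the full old text, at the cost of needing paired snapshots for an actual old/new diff.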