Brighton, United Kingdom

University of Sussex

Short answer

v1 public contract

University of Sussex is listed as QS 2026 rank 278. University of Sussex has 6 source-backed AI policy claim records from 5 official source attributions. The public record preserves original-language evidence snippets, source URLs, snapshot hashes, confidence, and review state.

Citation-ready summary

As of this public record, University AI Policy Tracker lists University of Sussex as an agent-reviewed AI policy record last checked on May 16, 2026 and last changed on May 16, 2026. The record contains 6 source-backed claims, including 6 reviewed claims, from 5 official source attributions. Original-language evidence snippets and source URLs remain canonical, with public JSON available at https://eduaipolicy.org/api/public/v1/universities/university-of-sussex.json. The entity-level confidence is 94%. This tracker is not legal advice, not academic integrity advice, and not an official university statement unless the linked source is the university's own official page.

Claim coverage: 6 reviewed
Source language: en
Public JSON: /api/public/v1/universities/university-of-sussex.json
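A consumer of the public JSON above might filter the record's claims by review state and confidence. The sketch below assumes a record shape (keys `claims`, `review`, `confidence`) based on the fields this page says are preserved; it is not a documented schema, and the miniature record is hypothetical.

```python
# Sketch of how a consumer might filter this record's public claims.
# The keys "claims", "review", and "confidence" are assumptions based
# on the fields this page says the record preserves, not a documented
# schema.

def reviewed_claims(record: dict, min_confidence: float = 0.9) -> list:
    """Return agent-reviewed claims at or above a confidence threshold."""
    return [
        claim
        for claim in record.get("claims", [])
        if claim.get("review") == "Agent reviewed"
        and claim.get("confidence", 0.0) >= min_confidence
    ]

# Hypothetical miniature record mirroring a few entries from this page.
example = {
    "claims": [
        {"topic": "Academic Integrity", "review": "Agent reviewed", "confidence": 0.94},
        {"topic": "Teaching", "review": "Agent reviewed", "confidence": 0.90},
        {"topic": "Coverage", "review": "Machine candidate", "confidence": 0.78},
    ]
}
```

Because confidence is separate from review state, the filter checks both fields rather than treating a high confidence value as equivalent to review.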

Policy signals in this record

  • Evidence includes Academic integrity claims.
  • Evidence includes Privacy claims.
  • Evidence includes Teaching claims.
  • Named AI services detected in public claims: Microsoft Copilot.
  • Disclosure, acknowledgment, citation, or attribution language appears in the public claim text.
  • Teaching, assessment, coursework, or syllabus-related language appears in the public claim text.
Policy status: Reviewed evidence-backed record
Review: Agent reviewed
Evidence-backed claims: 6
Reviewed: 6
Candidate: 0
Official sources: 5

This reference record summarizes visible public data only. Official sources and original-language evidence remain canonical; confidence is separate from review state.

This page is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.

Policy profile

Deterministic source-backed dimensions derived from this record's public claims.

Coverage score: 90/100
Coverage label: broad public coverage
Review: Machine candidate
Analysis confidence: 78%

Policy profile rows are machine-candidate derived metadata. They are not final policy conclusions; inspect the linked claim evidence before reuse.

Analysis page-quality metadata is available at /api/public/v1/analysis/page-quality.json.
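The page-quality path above is relative; resolving it against the tracker's origin can be sketched as follows. The origin is taken from the public JSON URL elsewhere on this page, and the resulting full URL is an assumption about where the endpoint lives.

```python
# Sketch: resolve the relative page-quality path against the tracker's
# origin. The origin is taken from the public JSON URL on this page;
# the joined URL is an assumption, not a documented endpoint address.
from urllib.parse import urljoin

ORIGIN = "https://eduaipolicy.org"
PAGE_QUALITY_PATH = "/api/public/v1/analysis/page-quality.json"

page_quality_url = urljoin(ORIGIN, PAGE_QUALITY_PATH)
```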

AI disclosure

University of Sussex has 1 source-backed public claim for AI disclosure; deterministic analysis status: required.

Status: Required
Review: Machine candidate
Confidence: 80%
Evidence: 1
Sources: 1

Privacy and data entry

University of Sussex has 1 source-backed public claim for privacy and data entry; deterministic analysis status: recommended.

Status: Recommended
Review: Machine candidate
Confidence: 77%
Evidence: 1
Sources: 1

Approved tools

No source-backed public claim identifying approved or licensed AI tools is present in this profile.

The current public tracker record does not contain claim evidence that identifies institutionally approved, licensed, procured, or enterprise AI tools.

Status: Not mentioned
Review: Machine candidate
Confidence: 0%
Evidence: 0
Sources: 0

Named AI services

University of Sussex has 1 source-backed public claim for named AI services; deterministic analysis status: recommended.

Status: Recommended
Review: Machine candidate
Confidence: 77%
Evidence: 1
Sources: 1

Research guidance

No source-backed public claim about research AI use is present in this profile.

The current public tracker record does not contain claim evidence about research use, publication ethics, research data, grants, or human-subjects compliance.

Status: Not mentioned
Review: Machine candidate
Confidence: 0%
Evidence: 0
Sources: 0

Security and procurement

No source-backed public claim about AI security review or procurement is present in this profile.

The current public tracker record does not contain claim evidence about security review, procurement, vendor approval, risk assessment, authentication, SSO, or enterprise licensing.

Status: Not mentioned
Review: Machine candidate
Confidence: 0%
Evidence: 0
Sources: 0

Coverage score measures breadth of public, source-backed coverage only. It is not a policy quality, strictness, legal adequacy, safety, or compliance score.
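One way to picture "breadth of coverage" is the fraction of profile dimensions with at least one source-backed claim. The sketch below is illustrative only: it is not the tracker's scoring formula (the tracker reports 90/100 for this record, which this simple fraction does not reproduce), and the dimension names are taken from the section headings above.

```python
# Illustrative only: breadth expressed as the fraction of profile
# dimensions with at least one evidence-backed claim. This is NOT the
# tracker's coverage-score formula; the dimension names and evidence
# counts below mirror the sections above.

def breadth_fraction(evidence_counts: dict) -> float:
    """Fraction of dimensions that have at least one evidence-backed claim."""
    if not evidence_counts:
        return 0.0
    covered = sum(1 for n in evidence_counts.values() if n > 0)
    return covered / len(evidence_counts)

dimensions = {
    "ai_disclosure": 1,
    "privacy_and_data_entry": 1,
    "approved_tools": 0,
    "named_ai_services": 1,
    "research_guidance": 0,
    "security_and_procurement": 0,
}
```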

Evidence-backed claims

6 reviewed evidence-backed public claims

Academic Integrity

University of Sussex staff guidance says module convenors determine and communicate AI-use permissions via module Canvas sites and choose one of three assessment-level permissions: AI use prohibited, AI in an assistive role, or AI with an integral role.

Review: Agent reviewed
Confidence: 94%

Normalized value: module_convenors_set_three_ai_permission_levels

Original evidence

Evidence 1
It is up to module convenors to determine and communicate AI use permissions via module Canvas sites. For each assessment, choose one of three permitted levels of AI use: AI use is prohibited; AI can be used in an assistive role; AI has an integral role.

Academic Integrity

University of Sussex student misconduct guidance includes unauthorized or inappropriate use of digital technologies including AI, and gives examples including AI use where prohibited and submitting permitted AI-generated work without required acknowledgement.

Review: Agent reviewed
Confidence: 94%

Normalized value: ai_misuse_can_be_academic_misconduct

Original evidence

Evidence 1
Misuse of digital technologies includes artificial intelligence. Examples include: using AI or other digital tools, such as translation tools in an assessment where their use has been prohibited; submitting AI-generated work, where this is permitted, without required acknowledgment.

Academic Integrity

University of Sussex staff guidance provides a prohibited-use assessment statement saying generative AI tools must not be used to generate materials or content for that assessment, while allowing other assistive technology for registered reasonable adjustments.

Review: Agent reviewed
Confidence: 92%

Normalized value: prohibited_assessment_statement_no_generative_ai_content

Original evidence

Evidence 1
Generative AI tools must not be used to generate any materials or content for this assessment. The purpose and format of this assessment makes it inappropriate or impractical for AI tools to be used. Students registered with the Disability Advice team and in receipt of reasonable adjustments are still permitted to use other assistive technology as required.

Privacy

University of Sussex staff guidance says staff and students can access a data-protected Microsoft Copilot with Sussex credentials, and says university data such as learning and teaching content should be used in Copilot rather than less protected AI tools.

Review: Agent reviewed
Confidence: 91%

Normalized value: sussex_copilot_preferred_for_university_data

Original evidence

Evidence 1
Being logged into Copilot with your Sussex account means that your data is protected. Your chat results won't saved or made available to Microsoft, meaning any data isn't passed outside of the organisation. This is in contrast to both the free version of Copilot and other AI tools which may not be protecting your data. If you are using university data in an AI tool, such as learning and teaching content, then ensure you use Copilot.

Teaching

University of Sussex AI principles say decisions on whether AI use is permitted, not permitted, optional, or required in learning or assessment will be made explicit.

Review: Agent reviewed
Confidence: 90%

Normalized value: ai_assessment_permissions_will_be_explicit

Original evidence

Evidence 1
Whether AI use is permitted/not permitted, optional or required in learning or assessment will be made explicit.

Academic Integrity

University of Sussex staff guidance suggests telling students that AI detection tools are fallible and cannot be relied upon.

Review: Agent reviewed
Confidence: 89%

Normalized value: ai_detection_tools_fallible_guidance_for_student_discussion

Original evidence

Evidence 1
Acknowledge that AI detection tools already exist, many much more sophisticated ones are in development and, predictably, a web-based sub-culture of ways to fool the detection systems is also growing. Explain that all are fallible and cannot be relied upon.

Candidate claims

0 machine or needs-review claims

Candidate claims are not final policy conclusions. They preserve source URL, source snapshot hash, evidence, confidence, and review state so the record can be audited before review.
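The audit loop described above can be sketched as a hash comparison against the stored source snapshot hash. The choice of SHA-256 over the raw snapshot bytes is an assumption; this page does not state which hash function the tracker's snapshot hashes use.

```python
# Sketch: audit a claim against its stored source snapshot hash.
# SHA-256 over raw snapshot bytes is an assumption; the tracker does
# not state which hash function its snapshot hashes use.
import hashlib

def snapshot_matches(snapshot_bytes: bytes, stored_hash_hex: str) -> bool:
    """Check that a fetched source snapshot still matches the recorded hash."""
    return hashlib.sha256(snapshot_bytes).hexdigest() == stored_hash_hex

# Hypothetical recorded hash for one evidence snippet from this record.
snippet = b"Whether AI use is permitted/not permitted, optional or required"
recorded = hashlib.sha256(snippet).hexdigest()
```

A mismatch would indicate the source changed (or was mis-snapshotted) since the claim was recorded, which is exactly the condition a pre-review audit should flag.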

Official sources

5 source attributions

Change log

Source-check timeline and diff-style claim/evidence preview.

View the public change record for this university, including source snapshot hashes, claim review states, and a diff-style preview of current source-backed evidence.

Last checked: May 16, 2026
Last changed: May 16, 2026

Corrections and missing evidence

Corrections create review tasks and do not directly change this public record.

If an official source is missing, stale, moved, blocked, or incorrectly summarized, submit a source URL, policy change report, or institution correction for review. Corrections must preserve source URLs, source language, original evidence, review state, and audit history.
