Open, evidence-backed AI policy records for public reuse.
Change log
Source-check timeline, source snapshot hashes, claim review state, and a diff-style preview of current source-backed claim evidence.
Current public record freshness and review state.
The University of Warwick currently has 10 source-backed claim records and 5 official source attributions. Latest tracked change: May 12, 2026.
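The "source snapshot hashes" mentioned above can be illustrated with a minimal sketch. This is an assumption about how such a tracker might work, not the tracker's actual scheme: a SHA-256 digest over lightly normalized source text, so that cosmetic whitespace changes do not register as content changes.

```python
import hashlib

def snapshot_hash(source_text: str) -> str:
    """Hash a normalized snapshot of a source page so later
    fetches can be compared for changes (illustrative sketch)."""
    # Normalize line endings and trailing whitespace so cosmetic
    # differences do not produce a different hash.
    normalized = "\n".join(line.rstrip() for line in source_text.splitlines())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

a = snapshot_hash("Policy text v1\n")
b = snapshot_hash("Policy text v1  \r\n")  # cosmetic difference only
c = snapshot_hash("Policy text v2\n")      # content change
```

Here `a == b` while `a != c`, so only substantive edits to a source page would be flagged on re-check.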
This tracker is not legal advice, academic-integrity advice, or an official university statement, unless a linked source is the university's own official page.
Diff-style preview built from current public claim/evidence records. Full old/new source diffs require paired historical snapshots.
Inserted lines represent current public claim and evidence records in the source-backed dataset.
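A diff-style preview of the kind described above can be sketched with Python's standard `difflib`. The snapshot texts here are hypothetical placeholders, and this is only an assumed mechanism: with paired old/new snapshots a unified diff shows changed lines, while with no old snapshot every current record would appear as an inserted ("+") line.

```python
import difflib

# Hypothetical paired snapshots of one source page's relevant text.
old_snapshot = [
    "Certain data must not be put into AI software.",
    "All new AI tools must go through procurement.",
]
new_snapshot = [
    "Certain data must not be put into AI software without prior approval.",
    "All new AI tools must go through procurement.",
]

# unified_diff yields "-"/"+" lines like a diff-style preview.
preview = list(difflib.unified_diff(old_snapshot, new_snapshot, lineterm=""))
```

The resulting `preview` marks the reworded line with a "-" (old) and "+" (new) pair and leaves the unchanged line unmarked.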
10 claim records
Warwick's AI Information Compliance Policy says certain data, including personal or confidential material, University intellectual property, and some copyrighted or third-party data, must not be put into AI software without prior approval.
Warwick's AI Information Compliance Policy says all new uses of AI tools, products, or services must go through appropriate procurement processes, regardless of cost.
Warwick's AI Information Compliance Policy covers everyone with a contractual or implied relationship with the University and all information processed by the University.
Warwick's responsible-use guidance frames responsible AI use as honest, ethical, transparent, human-accountable, safe, secure, and attentive to bias, fairness, inclusivity, and accessibility.
Warwick's AI in research guidance says researchers are responsible for misconduct-type practices involving AI, including improper handling of information or use of another person's ideas, even if such practices occur inadvertently through an AI tool.
Warwick's AI in research guidance says its principles apply to all researchers and researchers must consider AI-related research risks including integrity, information security, and accountability risks.
Warwick's assessment-design guidance says generative AI use in student submissions needs thoughtful support so responsible use and clear demonstration of human achievement are maintained.
Warwick's student-facing guidance tells students not to enter personal or confidential data into AI tools unless they understand what will happen to the data, and recommends Copilot chat with a Warwick account for that kind of data.
Warwick's student-facing guidance says students are required to state whether AI was used in the submission process and explain why, where, and how it was used.
Warwick's student-facing guidance says students may use AI only within requirements set out in assessment briefs and course handbooks, which may restrict or prohibit AI use.
5 source attributions
official_guidance checked May 12, 2026
official_guidance checked May 12, 2026
official_guidance checked May 12, 2026
official_guidance checked May 12, 2026
official_guidance checked May 12, 2026