Theme

Which universities mention AI in exams or assessments?

Published claim records where the visible claim or original evidence mentions exams, tests, quizzes, proctoring, assessment rules, or assessment-related AI restrictions. This page surfaces existing public claim text and evidence context. It does not add new policy claims or infer rules that are not visible in the linked records.

Theme: AI in exams · Matching records: 54 · Public JSON: /api/public/v1/universities.json
54 matching university records
186 matching source-backed claims
191 evidence records
296 official sources on matching records

Citation-ready summary

Short answer for researchers, journalists, and AI answer engines.

University AI Policy Tracker currently indexes 54 public university records with 186 source-backed claims related to AI in exams, supported by 191 evidence records and 296 official source attributions. This page is a public dataset slice generated from promoted claim/evidence records; it does not create new policy conclusions. Original-language evidence remains canonical, and each linked university record exposes review state, confidence, source URLs, snapshot hashes, and public JSON.

Theme pages are search and citation aids over promoted public records. They are not official university statements, legal advice, academic integrity advice, or a new review decision.
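For readers who want to query this slice programmatically, the records are exposed as public JSON (the /api/public/v1/universities.json endpoint shown in the page header). The page does not show the schema, so the sketch below is a minimal assumption-laden example: the field names `name`, `claims`, `claim_text`, `review_state`, `confidence`, `source_url`, and `snapshot_hash` are hypothetical stand-ins for whatever the real JSON uses, chosen to mirror the fields the page says each record exposes (review state, confidence, source URLs, snapshot hashes).

```python
# Hypothetical sketch: filter claim records whose visible claim text mentions
# an exam/assessment-related term, mirroring how this theme page matches records.
# Field names are ASSUMED, not taken from the real schema; in practice you would
# fetch /api/public/v1/universities.json and adapt the keys to the actual JSON.

EXAM_TERMS = ("exam", "test", "quiz", "proctor", "assessment")

def matching_claims(universities):
    """Yield (university name, claim dict) pairs whose claim text mentions an exam-related term."""
    for uni in universities:
        for claim in uni.get("claims", []):
            text = claim.get("claim_text", "").lower()
            if any(term in text for term in EXAM_TERMS):
                yield uni["name"], claim

# Minimal in-memory record shaped like the fields this page describes.
sample = [{
    "name": "Example University",
    "claims": [
        {"claim_text": "AI may not be used in supervised exams.",
         "review_state": "agent_reviewed", "confidence": "high",
         "source_url": "https://example.edu/policy", "snapshot_hash": "abc123"},
        {"claim_text": "The library offers AI literacy workshops.",
         "review_state": "agent_reviewed", "confidence": "high",
         "source_url": "https://example.edu/library", "snapshot_hash": "def456"},
    ],
}]

hits = list(matching_claims(sample))
print(len(hits))  # 1 — only the exam-related claim matches
```

Because matching is plain substring search over visible claim text, a script like this reproduces the page's caveat: it surfaces existing claim text rather than inferring any policy not visible in the records.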

Matching claim records

Visible claim and source context from public university records.

The University of New South Wales (UNSW Sydney)

23 matching claims from 7 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Other · Review: Agent reviewed

UNSW uses a Levels of AI Assistance framework with six categories for assessments: No Assistance, Simple Editing Assistance, Planning or Design Assistance, Assistance with Attribution, Generative AI Software-based Assessments, and Not Applicable.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

UNSW defines six high-level categories for permitted AI use in assessments: No Assistance, Simple Editing Assistance, Planning/Design Assistance, Assistance with Attribution, Generative AI Software-based Assessments, and Not Applicable.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

Under UNSW's 'No Assistance' level, students are not permitted to use any generative AI tools, software, or service to search for or generate information or answers.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Australian National University (ANU)

14 matching claims from 12 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Source Status · Review: Agent reviewed

ANU approved six institutional AI principles via Academic Board in June 2023, covering excellence/integrity, research engagement, clear guidance, AI literacy, access/privacy/security, and collaborative policy development.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

Submitting AI-generated content as one's own work constitutes a breach of ANU's academic integrity rules.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

ANU academic staff are not permitted to upload student data or academic work to generative AI platforms.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of Oxford

8 matching claims from 6 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

Staff setting summative assessment must: declare whether/how students can use AI; review assessment design for alignment with permitted AI use; ensure equality of baseline AI tool provision where authorised; specify declaration forms for student AI use; only identify suspected unauthorised AI use through marking or university-endorsed detection tools (none currently endorsed); and handle misconduct under usual disciplinary regulations.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

Students undertaking summative assessment must: complete assessment in line with the AI use declaration for each assignment; acknowledge their AI use via a formal declaration in the prescribed format; and understand that submitting work breaching AI specifications constitutes cheating and may constitute plagiarism, handled under usual disciplinary regulations.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

The University's policy on AI use in summative assessment is based on three principles endorsed by Education Committee in Trinity term 2025: (1) educational practice must be grounded in values of integrity, honesty and transparency, which must be clearly articulated and frequently discussed; (2) every discrete unit of assessment must be carefully designed to be fit for its specific purposes, clearly articulated to students; (3) every summative assessment must be accompanied by a clear explanation of what appropriate assistance is permitted and what is forbidden, specifying how students should report assistance received.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The University of Melbourne

7 matching claims from 5 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Other · Review: Agent reviewed

At the University of Melbourne, using GenAI tools to produce work submitted for assessment without acknowledgement constitutes academic misconduct under cl. 4.13 of the Student Academic Integrity Policy (MPF1310).

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

Students must check with their Subject Coordinator before using GenAI for assessment-related work at the University of Melbourne.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

University of Melbourne assessment materials and teaching materials constitute University IP and should never be tested on third-party external GenAI platforms such as ChatGPT; any such testing must be done only within the University's secure SparkAI platform.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The University of Sydney

7 matching claims from 8 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Teaching · Review: Agent reviewed

From Semester 1 2025, the default position in the University of Sydney Academic Integrity Policy has been reversed: except for supervised examinations and supervised in-semester tests, students may use automated writing tools or generative AI to complete assessments unless expressly prohibited by the unit coordinator.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

The University of Sydney's Academic Integrity Policy 2022 states it is an academic integrity breach to inappropriately generate content using artificial intelligence to complete an assessment task, and submitting an assessment generated by AI may be considered contract cheating.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

The University of Sydney has adopted a 'two-lane approach' to assessment: Lane 1 comprises secure, in-person supervised assessments to assure learning, and Lane 2 comprises open assessments that support and scaffold the use of all available and relevant tools including generative AI.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of Cambridge

7 matching claims from 6 official sources.

Last checked: May 5, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

A student using any unacknowledged content generated by artificial intelligence within a summative assessment as though it is their own work constitutes academic misconduct, unless explicitly stated otherwise in the assessment brief.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Procurement · Review: Agent reviewed

The University's standard licensed GenAI tools are Microsoft 365 Copilot, Google Gemini, and Google NotebookLM. Use of other licensed GenAI tools is not prohibited but must be procured in accordance with applicable procurement policy, including completion of risk assessments such as DPIAs and/or ISRAs. The public, free versions of Copilot, Gemini and NotebookLM must not be used for University activities.

Evidence records: 2. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

Staff should not rely on AI detection software as it is not proven to be accurate or reliable and provides no evidence to support investigations into the use of GenAI.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

King's College London

6 matching claims from 6 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

At King's College London, submitting AI-generated text as one's own without written departmental permission is considered misconduct under third-party involvement or text manipulation offences.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

King's College London supports considered use of generative AI and is open to evolving teaching, assessment and feedback practices according to need and disciplinary differences.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

King's College London subscribes to the Russell Group's five principles on generative AI in education, including supporting AI literacy, adapting teaching and assessment, and ensuring academic integrity.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The Chinese University of Hong Kong (CUHK)

6 matching claims from 7 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

At CUHK, improper or unauthorized use of AI tools in learning activities and assessments constitutes academic dishonesty and is subject to penalties including failure grade, suspension, or termination of studies.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

CUHK defines four approaches to AI use in courses: (1) prohibit all use, (2) use only with prior permission, (3) use only with explicit acknowledgement, and (4) free use without acknowledgement requirement.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

CUHK penalties for academic dishonesty involving AI tools may include reviewable/permanent demerits, failure grade, suspension, lowering degree classification, and termination of studies.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The University of Auckland

6 matching claims from 7 official sources.

Last checked: May 13, 2026 · Review: Agent reviewed · Public JSON
Claim type: AI Tool Treatment · Review: Agent reviewed

The University of Auckland's Assessment of Courses Procedures state that AI use in assessment tasks may only be restricted when the task is a controlled assessment, identified as Lane 1; AI may be used without restriction in other assessment tasks, identified as Lane 2.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

The University of Auckland's Assessment of Courses Procedures require courses to use the two-lane nomenclature, including telling students which assessments align with Lane 1 or Lane 2, and require courses and programmes to implement the two-lane approach in assessment design by 2027.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

The University of Auckland's student AI advice states that AI has no agency, treats the student prompting an AI tool as the author, and says students are ultimately responsible for work submitted for assessment.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Columbia University

5 matching claims from 8 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

Columbia Law School prohibits generative AI use in exams, final papers, and for drafting any part of work submitted for credit, even if fully documented.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Security Review · Review: Agent reviewed

CUIMC requires a formal IT Risk Assessment review before deploying any locally installed AI models (LLM, NLP, ML) to evaluate security, privacy, and compliance risks.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

Teachers College provides five example syllabus statements ranging from no AI use permitted to generally permitted with attribution, allowing instructors to choose their stance.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Imperial College London

4 matching claims from 14 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

Unless explicitly authorised, using generative AI to create assessed work may be treated as an academic offence such as contract cheating under Imperial's Plagiarism, Academic Integrity & Exam Offences regulations. Improper use of AI can be investigated under the University's Academic Misconduct procedures.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

Individual departments at Imperial may allow or prohibit the use of generative AI for specific assessments. Local (team/department/faculty) instructions take precedence over university-wide guidance. Students should check their department's current policy on using and disclosing generative AI in academic work and follow their module leader's instructions.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Research · Review: Agent reviewed

Research at Imperial that involves people, personal data, or sensitive topics may require ethics approval, a Data Protection Impact Assessment (DPIA), and data-governance controls before using any AI tool. Researchers must verify whether their use of AI in research requires special approval, particularly when uploading private or confidential research data.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Massachusetts Institute of Technology (MIT)

4 matching claims from 4 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Procurement · Review: Agent reviewed

IS&T recommends that MIT community members consult with IS&T before purchasing or using generative AI tools, and recommends using tools already licensed by IS&T for the MIT community.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

MIT maintains a list of approved generative AI tools licensed by IS&T for use by the MIT community. Only these tools are approved for use with low- and medium-risk information, and any tool not on the list requires contacting ai-guidance@mit.edu for assessment before use or purchase. No generative AI tools are approved for use with High Risk MIT information.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

MIT prohibits the use of generative AI for purposes that may require in-depth risk assessments without prior consultation with ai-guidance@mit.edu. Such purposes include recruitment and hiring of employees, evaluating student academic performance, making investment decisions, and complaint and dispute resolution.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Monash University

4 matching claims from 9 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

When allowed or required to use AI in an assessment, students must follow all instructions and restrictions on its use, clearly document the type of AI used and how it contributed, and provide written acknowledgment of the use of AI and its extent.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

Chief Examiners have overarching responsibility for designing and setting assessment conditions, including communicating and verifying the responsible use of AI within assessment tasks.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Research · Review: Agent reviewed

Thesis examiners are not permitted to use Generative AI technologies (such as ChatGPT) during the thesis examination process to support, prepare, or write their examiners' report.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The Hong Kong Polytechnic University

4 matching claims from 4 official sources.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: AI Tool Treatment · Review: Agent reviewed

PolyU takes an open and forward-looking stance on the use of GenAI tools as a positive and creative force in education, and expects that the usage of generative AI will become a normal part of learning, teaching, and assessment from 2023/24 Semester One.

Evidence records: 2. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

PolyU guidelines state that work submitted for assessment must be the student's own work and must not be a copy or version of other people's work or AI-generated material.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

PolyU states that while it embraces the use of GenAI tools in education, students must adhere to high standards of academic integrity in all forms of assessments.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

California Institute of Technology (Caltech)

3 matching claims from 2 official sources.

Last checked: May 5, 2026 · Review: Agent reviewed · Public JSON
Claim type: Teaching · Review: Agent reviewed

Caltech admissions prohibits applicants from copying and pasting directly from an AI generator, relying on AI-generated content to outline or draft essays, replacing their unique voice with AI-generated content, or translating essays via AI.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

Caltech admissions permits applicants to use AI tools like Grammarly or Microsoft Editor for grammar and spelling review of completed essays, to generate brainstorming questions or exercises, and to research the college application process.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

The Caltech HSS generative AI policy applies to all assignments including major papers, exams, discussion board posts, reflections, and problem sets.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

ETH Zurich

3 matching claims from 5 official sources.

Last checked: May 5, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

Students are responsible for the content of work they submit. Performance assessments must be conducted independently and personally; GenAI may serve a supplementary role but not replace student efforts.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

Lecturers determine whether and how GenAI may be used in their courses and for respective assessments. Teaching materials created with GenAI must be subjected to quality control by the lecturer.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

Violations of GenAI guidelines such as use of unauthorised aids or non-disclosure of their use are subject to disciplinary action under existing performance assessment rules and the declaration of originality.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Korea University

3 matching claims from 3 official sources.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: Privacy · Review: Agent reviewed

Korea University tells instructors not to enter personal information, academic records, assessment questions, or other sensitive or non-public materials into AI tools, with special caution for external AI services.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

Korea University tells learners not to enter personal information, non-public learning materials, or assessment questions into external AI tools, and to remember that AI inputs may be stored or reused.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

Korea University recommends designing assignments and assessments to show learners’ critical thinking, creativity, problem-solving process, and their own reasoning even when AI is used.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Ludwig-Maximilians-Universität München

3 matching claims from 3 official sources.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: AI Tool Treatment · Review: Agent reviewed

LMU IfKW student guidance says AI may be used in assessments only with explicit teacher permission; if no explicit permission is given, students must assume AI use is not allowed.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

LMU IfKW guidance treats verbatim or minimally changed AI-generated text without proper attribution as plagiarism, and says significant unattributed AI-generated text in assessed work can receive grade 5 (failed).

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

LMU teaching guidance recommends adapting e-exam questions for ChatGPT-era assessment, including tasks that require critical reflection on ChatGPT limitations rather than simple knowledge or comprehension questions.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

National University of Singapore (NUS)

3 matching claims from 3 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Teaching · Review: Agent reviewed

NUS states that instructors should be transparent about where and how they deploy AI in courses, including for generating content, virtual tutoring, and assessment feedback.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

NUS requires prior approval from Head of Department or relevant Deanery before using AI tools to provide instruction, feedback, or marks to students, submitted via an AI Risk Assessment.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

NUS policy sets the default assumption that AI tool use is permitted for unsupervised (take-home) assessments, provided use is duly acknowledged; assessments forbidding AI must be conducted in-person and instructor-supervised.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Stanford University

3 matching claims from 13 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Teaching · Review: Agent reviewed

For Stanford Graduate School of Business (GSB) MBA and MSx courses, instructors may not ban student use of AI tools for take-home coursework, including assignments and exams. Instructors may choose whether to allow AI for in-class work. For PhD and undergraduate courses, GSB follows the university-wide Generative AI Policy Guidance from the Office of Community Standards.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

Stanford School of Medicine MD and MSPA programs have a formal AI policy: students may use AI for learning, clarification, and grammar/style editing unless contrary to assignment instructions. AI use for closed-book exams or assignments where internet is restricted is prohibited unless explicitly authorized by faculty. Students are responsible for all AI-generated content they submit, must disclose and cite substantial AI contributions, and violations may result in disciplinary action.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

Stanford Law School instructors set their own AI policies; in the absence of a course-specific policy, students may use generative AI to support learning and develop or refine their own ideas, but may not use AI to generate content presented as their own work. Using AI during an exam or to draft/revise submitted work is not permitted unless disclosed in advance and explicitly authorized in writing by the instructor. Unauthorized use may result in an F grade and/or referral to Stanford's Office of Community Standards.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The University of Queensland

3 matching claims from 5 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Teaching · Review: Agent reviewed

UQ course profiles must clearly state if, when, and how AI (including Machine Translation) is allowed. Two options exist: Option 1 prohibits AI in in-person assessment; Option 2 permits AI use with mandatory referencing.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

UQ has disabled the Turnitin AI writing indicator functionality for all assessments from Semester 2, 2025, citing that AI detection tools are flawed and unreliable.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

UQ says students must acknowledge where they used AI in assessment, including direct quotes or paraphrases of AI-generated content and use of AI tools for summarising, brainstorming, planning, editing, or proofreading.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

UCL

3 matching claims from 2 official sources.

Last checked: May 5, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

UCL uses a 3-category assessment framework for GenAI: Category 1 requires own work only; Category 2 permits GenAI with acknowledgement; Category 3 includes essential GenAI use as part of the assessment.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

UCL defines academic misconduct in the context of GenAI as gaining an unfair advantage over other students; there is no single list of fair and unfair uses as it depends on the assessment category.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

UCL permits using GenAI to help with spelling, grammar, and language tone in assessments, but it must not change the content and meaning of what the student has written.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Université Paris-Saclay

3 matching claims from 3 official sources.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

Université Paris-Saclay's 2025-2026 first-cycle exam rules say that, for Licence professionnelle, Licence, and Licence double-diplôme students covered by the rules, use of ChatGPT or another AI tool must be explicitly mentioned when it is not prohibited, and failure to mention AI as a source will be sanctioned.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

Université Paris-Saclay's 2025-2026 master exam rules say that use of ChatGPT or another AI tool must be explicitly mentioned when it is not prohibited, like any external source borrowing or citation, and failure to mention AI as a source will be sanctioned.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

In the Graduate School Droit context, the Université Paris-Saclay IAG working-group article states that a guide of good practices and evaluation support for law teacher-researchers is being drafted.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of Bristol

3 matching claims from 4 official sources.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: Research · Review: Agent reviewed

PGR students at University of Bristol are not permitted to use generative AI tools such as ChatGPT to write any text used in their thesis or APM reports, as research degree students must demonstrate ability to write about research in their own words.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

University of Bristol considers the use of AI or translation tools to be cheating if used for more than generating the occasional short phrase within a sentence or checking basic grammar and spelling, unless assessment instructions allow more comprehensive use.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

University of Bristol uses a four-category system for AI use in assessments: Category 1 (prohibited - no AI use), Category 2 (minimal - spelling/grammar only, default), Category 3 (selective - certain tasks as specified), and Category 4 (integral - AI required for assessment).

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of British Columbia

3 matching claims from 10 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

Students may only use GenAI for assessed work (assignments, exams, projects, theses) if expressly permitted by their instructor, supervisor, or program.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

UBC says instructors or teaching assistants cannot require students to use GenAI or any other technology tool that requires sharing personal information unless the tool has undergone a UBC Privacy Impact Assessment review and been approved for use with personal information.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

Do not enter personal information into any generative AI tool that has not been through UBC's FIPPA compliance assessment (PIA), as to do so may be a breach of privacy.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of California, Berkeley (UCB)

3 matching claims from 5 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Other · Review: Agent reviewed

The UC Berkeley Academic Senate recommends that for assignments where GenAI is not permitted, instructors should adopt enforcement mechanisms such as in-person proctored exams, an additional oral exam component, or a written statement of academic integrity, since no validated GenAI detection tools exist.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

UC Berkeley warns that AI use involving highly consequential automated decision-making requires extreme caution and should not be employed without prior consultation with appropriate campus entities, including the responsible unit head. Examples include legal analysis, recruitment and personnel decisions, replacing represented employees, facial recognition security tools, and grading or assessment of student work.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

UC Berkeley has AI risk assessment pre-screening questions that employees can use to gauge the level of risk involved for an AI use case where AI is integrated into a product, service, or feature at the university. Depending on the risk level determined, the CERC-AIR subcommittee may be engaged for a broader risk assessment.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of Pennsylvania

3 matching claims from 6 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Other · Review: Agent reviewed

Penn researchers should obtain IRB approvals prior to exposing research participant data to AI tools and should exercise caution when research involves high-risk data including PII and health information.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

In the absence of other guidance, Penn students should treat the use of AI as they would treat assistance from another person — if it is unacceptable to have another person substantially complete a task like writing an essay, it is also unacceptable to have AI complete the task.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

Wharton Academy prohibits students from using AI to complete personal reflection or opinion-based tasks, from using AI to complete group assignments instead of collaborating with peers, and from using AI to cheat on exams or tests.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Cornell University

2 matching claims from 6 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Other · Review: Agent reviewed

Cornell's committee report states that any information educators are obligated to keep private under FERPA or HIPAA should not be shared with generative AI tools or uploaded to third-party AI vendors.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

Cornell's committee report does not recommend the use of generative AI for summative evaluation or grading of student work, stating that evaluation and grading is among the most important tasks entrusted to faculty.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

EPFL – École polytechnique fédérale de Lausanne

2 matching claims from 6 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

EPFL requires students to disclose the use of AI tools in assessment work. EPFL rules (Lex 1.3.3, Article 4) require that all assessment material that is not the student's personal and original contribution must be recognizable as such.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

EPFL recommends that teachers make explicit to students what AI use is not legitimate in a course and what rules accompany AI tool use.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Institut Polytechnique de Paris

2 matching claims from 3 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

Institut Polytechnique de Paris's 2025-2026 Master programs academic regulations prohibit the use of generative AI in assessments for those programs unless explicitly authorized by the instructor in written instructions. Unauthorized use constitutes academic misconduct.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

Institut Polytechnique de Paris's 2025-2026 master programs academic regulations prohibit the use of generative artificial intelligence in assessments for these programs, except where the instructor explicitly authorizes it in written instructions. Any breach is considered fraud.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

KU Leuven

2 matching claims from 5 official sources.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

KU Leuven student guidance says clear misuse of GenAI, where output is largely generated by GenAI and the student is not transparent about tool use, can be considered an irregularity under Article 84 of the Education and Examination Regulations.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

KU Leuven teaching guidance expects teaching staff to clearly inform students whether GenAI may be used for assignments and expects students to be transparent about GenAI use so assessment can be fair and correct.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Lund University

2 matching claims from 5 official sources.

Last checked: May 13, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

Lund University's student guidance says students who want to use GenAI for a compulsory assignment or examination must check whether it is permitted and how to report its use; presenting GenAI-generated work as one's own may be treated as cheating.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

Lund University's student guidance says students should primarily use Lund-licensed tools such as Microsoft Copilot Chat and Google Gemini, and must not upload other students' work, sensitive personal data, or copyright-protected material.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

McGill University

2 matching claims from 5 official sources.

Last checked: May 11, 2026 · Review: Agent reviewed · Public JSON
Claim type: Privacy · Review: Agent reviewed

McGill guidance says users should mitigate potential privacy concerns by removing personally identifying information when using AI tools, be careful with sensitive or restricted material, and avoid using Personal Health Information (PHI) or Payment Card Industry (PCI) data with AI tools.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

McGill's Provost-endorsed principles state that instructors remain responsible for comporting themselves according to the highest standards of academic integrity in their use of generative AI tools. Instructors must be explicit in course outlines about the expectations for use of generative AI tools and may set limits on their use in assessment tasks.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Sorbonne University

2 matching claims from 3 official sources.

Last checked: May 13, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

Sorbonne University's 2024-2025 assessment rules state that assessment documents must be the student's or assessed group's personal work, that AI use is prohibited unless explicitly authorized, and that authorized AI use should mention the source.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

Sorbonne University's 2024-2025 assessment rules treat unauthorized AI-generated work presented as one's own, or authorized AI use without source mention, as plagiarism.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Technical University of Munich

2 matching claims from 1 official source.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Teaching · Review: Agent reviewed

TUM ProLehre guidance recommends starting AI-use decisions from the intended learning outcomes and whether AI use supports, complements, or hinders those competencies.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

TUM ProLehre guidance says reliable control of AI use is difficult to impossible, and recommends designing assessments so unauthorized AI use does not provide a decisive advantage.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The London School of Economics and Political Science (LSE)

2 matching claims from 6 official sources.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: AI Tool Treatment · Review: Agent reviewed

LSE requires departments or course convenors to classify authorised generative AI use in assessment as no authorised use, limited authorised use, or full authorised use, and to communicate the position to students.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

For 2025/26, LSE requested departments to add assessment safeguards, including observed assessment methods, to help assure degree integrity and prevent unfair competitive advantage from generative AI.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The University of Manchester

2 matching claims from 6 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

Tools to detect AI-generated content are unreliable and biased and cannot be relied on to identify academic malpractice in summative assessment at Manchester. Output from such tools cannot currently be used as evidence of malpractice.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

Using an AI tool to correct grammar or spelling is acceptable at Manchester, but students should ensure that use of the tool does not result in substantive changes to the content or meaning of their work.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The University of Tokyo

2 matching claims from 6 official sources.

Last checked: May 9, 2026 · Review: Agent reviewed · Public JSON
Claim type: Privacy · Review: Agent reviewed

UTokyo instructs faculty not to input exam questions directly into generative AI tools, as exams are highly confidential documents.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

UTokyo advises faculty to test their own assignments with generative AI tools to understand how well AI can complete them, and use this understanding to inform assessment design.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The University of Warwick

2 matching claims from 5 official sources.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

Warwick's student-facing guidance says students may use AI only within requirements set out in assessment briefs and course handbooks, which may restrict or prohibit AI use.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

Warwick's assessment-design guidance says generative AI use in student submissions needs thoughtful support so responsible use and clear demonstration of human achievement are maintained.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Universiti Malaya (UM)

2 matching claims from 3 official sources.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: Teaching · Review: Agent reviewed

Universiti Malaya guidance says lecturers specify the permitted level of AI use for each assignment or assessment, using levels from no AI use through integrated AI use.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

Universiti Malaya guidance requires students to declare AI tools used in assignments or assessments and says failure to disclose AI use may be considered academic misconduct.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of California, San Diego (UCSD)

2 matching claims from 6 official sources.

Last checked: May 13, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

UC San Diego's Academic Integrity Policy says students may not let academic work or academic credit be completed for them by another human or by machine/artificial intelligence, and may not use unauthorized aids including artificial intelligence in coursework or assessments.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

UC San Diego Academic Integrity Office student guidance says that if an instructor has not said a student can use GenAI for a class or assessment, the student cannot use it; silence does not equal permission.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of Illinois Urbana-Champaign

2 matching claims from 6 official sources.

Last checked: May 13, 2026 · Review: Agent reviewed · Public JSON
Claim type: Security Review · Review: Agent reviewed

Illinois Enterprise GenAI guidance for service providers includes security controls, audits, MFA, and data privacy compliance for AI systems and sensitive data.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Research · Review: Agent reviewed

The Illinois Graduate College states it does not have a policy on permissibility of generative AI in doctoral milestones, and encourages programs and committees to communicate their expectations.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Duke University

1 matching claim from 6 official sources.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

Duke Community Standard academic-dishonesty guidance includes unauthorized use of artificial intelligence software among examples of cheating-related conduct.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Kyoto University

1 matching claim from 3 official sources.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: Teaching · Review: Agent reviewed

Kyoto University expects instructors to state their generative AI use policy to students and, for courses focused on basic knowledge or skills, to preserve grading fairness through checks such as written or oral examinations.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

National Taiwan University (NTU)

1 matching claim from 4 official sources.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: Teaching · Review: Agent reviewed

National Taiwan University guidance takes a positive and constructive view of AI tools, encourages teachers to adjust course planning and learning assessment, and says students should understand AI-tool limitations for their future learning.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Peking University

1 matching claim from 1 official source.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Other · Review: Agent reviewed

Peking University's AI Scientific Integrity Platform synthesizes AI use policies from 18 domestic and international sources, including Chinese government agencies (MOST, NSFC), Chinese universities (Fudan, Nanjing, Sichuan), international bodies (EU Commission, NIH), and universities (Harvard, Yale, Cambridge, UCL, Oxford, MIT).

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The Hong Kong University of Science and Technology

1 matching claim from 7 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

HKUST assessment policies require course syllabi to clearly present policies on the use of Generative AI tools and academic integrity.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The University of Edinburgh

1 matching claim from 6 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

At the University of Edinburgh, presenting AI outputs as your own original work, submitting AI-generated text without acknowledgment, and using AI agents within university learning platforms constitute academic misconduct.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The University of Hong Kong

1 matching claim from 3 official sources.

Last checked: May 5, 2026 · Review: Agent reviewed · Public JSON
Claim type: Research · Review: Agent reviewed

HKU states that researchers should clearly disclose generative AI tool usage in research outputs, publications, and presentations, including the type of GenAI used, data sources, and potential limitations.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of Amsterdam

1 matching claim from 5 official sources.

Last checkedMay 11, 2026Review: Agent reviewedPublic JSON
Claim typeAi Tool TreatmentReview: Agent reviewed

The UvA and VU task force on AI in education has produced criteria for software to ensure academic integrity, and expects students to be transparent about how they have applied generative AI in their own learning and work.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of California, Los Angeles (UCLA)

1 matching claim from 3 official sources.

Last checked: May 11, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

UCLA Academic Senate guidance quotes the Student Conduct Code requirement that submissions must be the student’s own work or clearly acknowledge the source, and says that unless an instructor indicates otherwise, use of ChatGPT or other AI tools for course assignments is akin to receiving assistance from another person.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of Chicago

1 matching claim from 4 official sources.

Last checked: May 5, 2026 · Review: Agent reviewed · Public JSON
Claim type: AI Tool Treatment · Review: Agent reviewed

The University of Chicago provides a central hub at genai.uchicago.edu for information on generative AI tools, training, resources, and guidance for the university community.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of Michigan-Ann Arbor

1 matching claim from 8 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Teaching · Review: Agent reviewed

U-M leaves GenAI policy to individual instructors, who may allow, restrict, or forbid AI use in their courses. Course policies should be clearly articulated in syllabi.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of Toronto

1 matching claim from 7 official sources.

Last checked: May 11, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

University of Toronto considers use of generative AI tools on marked assessments without instructor permission to be use of an unauthorized aid under the Code of Behaviour on Academic Matters.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Data and advice boundaries

Theme pages expose index slices, not new conclusions.

  • Public pages and public JSON should remain consistent because both are built from the promoted public release dataset.
  • Original-language evidence is canonical. Translations and display summaries are auxiliary.
  • Confidence is separate from reviewState; reviewState describes workflow status.
  • Tracker metadata is open licensed. Official source documents, page text, PDFs, and other source materials retain their original rights and terms.
  • This tracker is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.
  • Theme matching is based on visible public claim and evidence text; it is not a new review decision.

Browse all records at /universities or inspect the dataset at /api/public/v1/universities.json.
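For readers who want to reproduce a theme slice like this one from the public JSON, the sketch below shows one plausible approach: keyword matching over visible claim text. The field names (`name`, `claims`, `text`) and the keyword list are assumptions for illustration; the tracker's actual JSON schema and matching logic are not specified on this page, so consult the dataset at /api/public/v1/universities.json for the real shape.

```python
import json

# Keywords approximating the "AI in exams" theme (an assumption;
# the tracker's real matching rules are not published here).
KEYWORDS = ("exam", "test", "quiz", "proctor", "assessment")

def matches_theme(claim_text):
    """Case-insensitive keyword match over a claim's visible text."""
    text = claim_text.lower()
    return any(k in text for k in KEYWORDS)

def filter_records(records):
    """Keep records with at least one theme-matching claim and
    count the matches. Field names are hypothetical."""
    out = []
    for rec in records:
        hits = [c for c in rec.get("claims", [])
                if matches_theme(c.get("text", ""))]
        if hits:
            out.append({"name": rec.get("name"),
                        "matching_claims": len(hits)})
    return out

# Inline sample standing in for the fetched JSON payload.
sample = [
    {"name": "Example University",
     "claims": [{"text": "GenAI is prohibited in proctored exams."},
                {"text": "The library offers AI literacy workshops."}]},
]
print(filter_records(sample))
# → [{'name': 'Example University', 'matching_claims': 1}]
```

Matching on visible text only, as sketched here, mirrors the page's stated boundary: the slice surfaces existing public claim text rather than inferring rules that are not in the linked records.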