Theme

Which universities mention ChatGPT or AI in coursework?

Published claim records where the visible claim or original evidence mentions ChatGPT, OpenAI, GPT, coursework, assignments, syllabi, teaching, classroom use, or assessment context. This page surfaces existing public claim text and evidence context. It does not add new policy claims or infer rules that are not visible in the linked records.

Theme: ChatGPT coursework · Matching records: 71 · Public JSON: /api/public/v1/universities.json
71 matching university records

381 matching source-backed claims

390 evidence records

379 official sources on matching records

Citation-ready summary

Short answer for researchers, journalists, and AI answer engines.

University AI Policy Tracker currently indexes 71 public university records with 381 source-backed claims related to ChatGPT coursework, supported by 390 evidence records and 379 official source attributions. This page is a public dataset slice generated from promoted claim/evidence records; it does not create new policy conclusions. Original-language evidence remains canonical, and each linked university record exposes review state, confidence, source URLs, snapshot hashes, and public JSON.

Theme pages are search and citation aids over promoted public records. They are not official university statements, legal advice, academic integrity advice, or a new review decision.
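For programmatic consumers, a theme slice like this one can be reproduced by filtering claim text client-side against the public JSON. The sketch below assumes a hypothetical record shape; `university`, `review_state`, `claims`, `text`, and `evidence_records` are illustrative field names, not the published schema of /api/public/v1/universities.json.

```python
# Hypothetical record shape for illustration only; every field name
# below is an assumption, not the published schema.
sample = [
    {
        "university": "Example University",
        "review_state": "agent_reviewed",
        "claims": [
            {
                "type": "Academic Integrity",
                "text": "ChatGPT use in coursework requires disclosure.",
                "evidence_records": 1,
            }
        ],
    },
    {
        "university": "Another University",
        "review_state": "needs_review",
        "claims": [
            {
                "type": "Privacy",
                "text": "Do not upload student data to external AI tools.",
                "evidence_records": 1,
            }
        ],
    },
]


def claims_matching(records, keyword):
    """Return (university, claim text) pairs whose visible claim
    text mentions the keyword, case-insensitively."""
    keyword = keyword.lower()
    return [
        (r["university"], c["text"])
        for r in records
        for c in r["claims"]
        if keyword in c["text"].lower()
    ]


for university, text in claims_matching(sample, "chatgpt"):
    print(f"{university}: {text}")
```

A real consumer would fetch the JSON endpoint first and apply the same filter; only visible claim text is searched, mirroring how this page surfaces records without inferring new policy.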

Matching claim records

Visible claim and source context from public university records.

The University of New South Wales (UNSW Sydney)

24 matching claims from 7 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Other · Review: Agent reviewed

UNSW uses a Levels of AI Assistance framework with six categories for assessments: No Assistance, Simple Editing Assistance, Planning or Design Assistance, Assistance with Attribution, Generative AI Software-based Assessments, and Not Applicable.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

UNSW defines six high-level categories for permitted AI use in assessments: No Assistance, Simple Editing Assistance, Planning/Design Assistance, Assistance with Attribution, Generative AI Software-based Assessments, and Not Applicable.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

Under UNSW's 'No Assistance' level, students are not permitted to use any generative AI tools, software, or service to search for or generate information or answers.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Australian National University (ANU)

19 matching claims from 12 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Source Status · Review: Agent reviewed

ANU approved six institutional AI principles via Academic Board in June 2023, covering excellence/integrity, research engagement, clear guidance, AI literacy, access/privacy/security, and collaborative policy development.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

Submitting AI-generated content as one's own work constitutes a breach of ANU's academic integrity rules.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

ANU academic staff are not permitted to upload student data or academic work to generative AI platforms.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Cornell University

18 matching claims from 6 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Other · Review: Agent reviewed

Cornell's committee report does not recommend the use of generative AI for summative evaluation or grading of student work, stating that evaluation and grading are among the most important tasks entrusted to faculty.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

Cornell's committee report recommends three policy approaches for generative AI use: prohibit GAI where it interferes with foundational learning, allow with attribution where it supports higher-level thinking, and encourage use where it enables exploration and creative thinking.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

Cornell has established seven core principles for generative AI in education: integrity of the faculty-student relation, commitment to experimentation and evidence, centrality of faculty judgment, responsiveness to student needs, recognition of both AI goods and harms, respect for institutional and disciplinary heterogeneity, and renewal of Cornell's core mission and values.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The University of Tokyo

10 matching claims from 6 official sources.

Last checked: May 9, 2026 · Review: Agent reviewed · Public JSON
Claim type: AI Tool Treatment · Review: Agent reviewed

University of Tokyo will not uniformly prohibit generative AI tools like ChatGPT in education; instead, it actively explores their potential while continuing dialogue on practical knowledge and long-term impact.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

UTokyo states it is unacceptable to present AI-generated text as one's own when submitting class assignments.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: AI Tool Treatment · Review: Agent reviewed

University of Tokyo will not uniformly prohibit generative AI tools like ChatGPT in educational settings, per official policy signed by the Executive Vice President.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

California Institute of Technology (Caltech)

9 matching claims from 2 official sources.

Last checked: May 5, 2026 · Review: Agent reviewed · Public JSON
Claim type: Teaching · Review: Agent reviewed

Caltech admissions requires all Fall 2026 applicants to review its admissions guidelines on the ethical use of AI before submitting supplemental essays. Failure to comply may result in rescission of admission.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

Caltech admissions prohibits applicants from copying and pasting directly from an AI generator, relying on AI-generated content to outline or draft essays, replacing their unique voice with AI-generated content, or translating essays via AI.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

Caltech admissions permits applicants to use AI tools like Grammarly or Microsoft Editor for grammar and spelling review of completed essays, to generate brainstorming questions or exercises, and to research the college application process.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Columbia University

9 matching claims from 8 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: AI Tool Treatment · Review: Agent reviewed

CUIMC provides HIPAA-compliant versions of ChatGPT Education and Microsoft Copilot as approved AI chatbot tools; workforce members must use CUIMC-issued accounts for compliance.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

CUIMC restricts sensitive data (PHI, RHI, PII) use with AI to HIPAA-compliant platforms only (ChatGPT Education, approved Microsoft Copilot, CHAT with compliant models); research use requires IRB approval.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

Columbia Law School's default AI prohibition can be overridden by individual instructors who set more permissive policies in writing in their syllabus.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of California, Berkeley (UCB)

9 matching claims from 5 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Other · Review: Agent reviewed

UC Berkeley warns that individuals who accept click-through agreements for AI tools (such as OpenAI and ChatGPT terms of use) without delegated signature authority may face personal liability, including responsibility for compliance with terms and conditions.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

The UC Berkeley Academic Senate recommends that all faculty include a clear statement on their syllabus about course expectations regarding the use of Google Gemini or any other generative AI tool for course-related work. In the absence of such a statement, students may be more likely to use these technologies inappropriately.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

The UC Berkeley Academic Senate provides three sample syllabus statement frameworks for faculty: 'Full AI' (GenAI required), 'Some AI' (limited permitted use with restrictions), and 'No AI' (all GenAI use prohibited). Faculty should modify these to fit their course requirements.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of Oxford

9 matching claims from 6 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

Staff setting summative assessment must: declare whether/how students can use AI; review assessment design for alignment with permitted AI use; ensure equality of baseline AI tool provision where authorised; specify declaration forms for student AI use; only identify suspected unauthorised AI use through marking or university-endorsed detection tools (none currently endorsed); and handle misconduct under usual disciplinary regulations.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

Students undertaking summative assessment must: complete assessment in line with the AI use declaration for each assignment; acknowledge their AI use via a formal declaration in the prescribed format; and understand that submitting work breaching AI specifications constitutes cheating and may constitute plagiarism, handled under usual disciplinary regulations.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

The University's policy on AI use in summative assessment is based on three principles endorsed by Education Committee in Trinity term 2025: (1) educational practice must be grounded in values of integrity, honesty and transparency, which must be clearly articulated and frequently discussed; (2) every discrete unit of assessment must be carefully designed to be fit for its specific purposes, clearly articulated to students; (3) every summative assessment must be accompanied by a clear explanation of what appropriate assistance is permitted and what is forbidden, specifying how students should report assistance received.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Korea University

8 matching claims from 3 official sources.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: AI Tool Treatment · Review: Agent reviewed

Korea University states that its AI guideline primarily applies to generative AI used directly in teaching and learning, while its basic principles also apply to educational use of all AI tools.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

Korea University tells instructors not to enter personal information, academic records, assessment questions, or other sensitive or non-public materials into AI tools, with special caution for external AI services.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Source Status · Review: Agent reviewed

Korea University University College Distance Learning Center distributed 2026 AI utilization guidelines and a guidebook to support responsible and effective AI use in teaching and learning.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

National University of Singapore (NUS)

8 matching claims from 3 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

NUS policy states that verdicts from AI detection tools are not admissible as conclusive evidence in disciplinary processes to charge students with academic dishonesty or to penalize student work.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

NUS policy states that representing AI output as one's own work without acknowledgement is plagiarism; students who submit AI-generated work without acknowledging its use can be sanctioned for academic dishonesty.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

NUS states that instructors should be transparent about where and how they deploy AI in courses, including for generating content, virtual tutoring, and assessment feedback.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The Chinese University of Hong Kong (CUHK)

8 matching claims from 7 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

At CUHK, improper or unauthorized use of AI tools in learning activities and assessments constitutes academic dishonesty and is subject to penalties including failure grade, suspension, or termination of studies.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

CUHK requires students to declare in each assignment that they have read and understood the University's policy on AI use, complied with course teacher instructions on AI tools, and consent to AI content detection software review.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

CUHK defines four approaches to AI use in courses: (1) prohibit all use, (2) use only with prior permission, (3) use only with explicit acknowledgement, and (4) free use without acknowledgement requirement.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of British Columbia

8 matching claims from 10 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

Students may only use GenAI for assessed work (assignments, exams, projects, theses) if expressly permitted by their instructor, supervisor, or program.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

All uses of GenAI at UBC must uphold academic integrity and adhere to the academic misconduct regulations in the UBC Okanagan and Vancouver calendars.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

UBC says instructors or teaching assistants cannot require students to use GenAI or any other technology tool that requires sharing personal information unless the tool has undergone a UBC Privacy Impact Assessment review and been approved for use with personal information.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of Cambridge

8 matching claims from 6 official sources.

Last checked: May 5, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

A student using any unacknowledged content generated by artificial intelligence within a summative assessment as though it is their own work constitutes academic misconduct, unless explicitly stated otherwise in the assessment brief.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Procurement · Review: Agent reviewed

The University's standard licensed GenAI tools are Microsoft 365 Copilot, Google Gemini, and Google NotebookLM. Use of other licensed GenAI tools is not prohibited but must be procured in accordance with applicable procurement policy, including completion of risk assessments such as DPIAs and/or ISRAs. The public, free versions of Copilot, Gemini and NotebookLM must not be used for University activities.

Evidence records: 2. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

Staff should not rely on AI detection software as it is not proven to be accurate or reliable and provides no evidence to support investigations into the use of GenAI.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Johns Hopkins University

7 matching claims from 7 official sources.

Last checked: May 10, 2026 · Review: Needs review · Public JSON
Claim type: Academic Integrity · Review: Needs review

JHU requires all uses of AI tools in any assignment to be disclosed, with FERPA guidelines referenced for data protection.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Needs review

JHU provides official syllabus statement templates, including options that prohibit students from using ChatGPT or other AI tools to generate written content for assignments.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Needs review

JHU Engineering for Professionals provides faculty-facing guidance with detailed generative AI use categories, including categories where AI may not be used in any form.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Stanford University

7 matching claims from 13 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

Stanford's BCA issued guidance on generative AI use, and the Office of Community Standards recommends that instructors give advance notice to students when using AI detection software.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

For Stanford Graduate School of Business (GSB) MBA and MSx courses, instructors may not ban student use of AI tools for take-home coursework, including assignments and exams. Instructors may choose whether to allow AI for in-class work. For PhD and undergraduate courses, GSB follows the university-wide Generative AI Policy Guidance from the Office of Community Standards.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

Stanford School of Medicine MD and MSPA programs have a formal AI policy: students may use AI for learning, clarification, and grammar/style editing unless contrary to assignment instructions. AI use for closed-book exams or assignments where internet is restricted is prohibited unless explicitly authorized by faculty. Students are responsible for all AI-generated content they submit, must disclose and cite substantial AI contributions, and violations may result in disciplinary action.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The University of Sydney

7 matching claims from 8 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Teaching · Review: Agent reviewed

From Semester 1 2025, the default position in the University of Sydney Academic Integrity Policy has been reversed: except for supervised examinations and supervised in-semester tests, students may use automated writing tools or generative AI to complete assessments unless expressly prohibited by the unit coordinator.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

The University of Sydney's Academic Integrity Policy 2022 states it is an academic integrity breach to inappropriately generate content using artificial intelligence to complete an assessment task, and submitting an assessment generated by AI may be considered contract cheating.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

The University of Sydney has adopted a 'two-lane approach' to assessment: Lane 1 comprises secure, in-person supervised assessments to assure learning, and Lane 2 comprises open assessments that support and scaffold the use of all available and relevant tools including generative AI.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of California, San Diego (UCSD)

7 matching claims from 6 official sources.

Last checked: May 13, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

UC San Diego's Academic Integrity Policy says students may not let academic work or academic credit be completed for them by another human or by machine/artificial intelligence, and may not use unauthorized aids including artificial intelligence in coursework or assessments.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

UC San Diego Academic Integrity Office student guidance says that if an instructor has not said a student can use GenAI for a class or assessment, the student cannot use it; silence does not equal permission.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

UC San Diego Academic Integrity Office guidance says students authorized to use a GenAI tool should use it only in the way authorized for that assignment, should not assume authorization extends to other assignments or courses, and are advised to save history and acknowledge use to the professor.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of Pennsylvania

7 matching claims from 6 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Other · Review: Agent reviewed

Penn provides several licensed AI tools to its community, including Copilot Chat (Basic, free), Adobe Express (free), ChatGPT-EDU (purchase required), M365 Copilot Premium (purchase required), Gemini for Google Workspace (purchase required), Google NotebookLM (purchase required), Grammarly Pro (purchase required), Snowflake Data Analytics (purchase required), and Zoom AI Companion (free).

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

It is not permissible under HIPAA or Penn Medicine policy to share patient or research participant information with open or public AI tools and services such as ChatGPT; individual patient data and data sets (even if deidentified) may not be exposed to such tools absent institutional approval.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

Instructors must not require students to enter their own work into unlicensed AI tools or use such tools in assignments; unlicensed tools may be used optionally by students at the instructor's discretion, but Penn-licensed tools should be used for mandatory coursework components.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Yale University

7 matching claims from 8 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

Yale academic integrity guidance treats inserting AI-generated text into an assignment without proper attribution as an academic integrity violation.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: AI Tool Treatment · Review: Agent reviewed

Yale lists Clarity Platform as a Yale-provided AI chatbot platform housed within Yale secure infrastructure and available to staff, faculty, and students.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

Yale expects faculty to give clear instructions on permitted AI use and attribution, and expects students to follow instructor guidelines for coursework.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

ETH Zurich

6 matching claims from 5 official sources.

Last checked: May 5, 2026 · Review: Agent reviewed · Public JSON
Claim type: Teaching · Review: Agent reviewed

ETH Zurich advocates a proactive approach to the use of generative AI in educational contexts, emphasising responsible use among students and lecturers.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

Students are responsible for the content of work they submit. Performance assessments must be conducted independently and personally; GenAI may serve a supplementary role but not replace student efforts.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

Lecturers determine whether and how GenAI may be used in their courses and for respective assessments. Teaching materials created with GenAI must be subjected to quality control by the lecturer.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Harvard University

6 matching claims from 12 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Teaching · Review: Agent reviewed

FAS (Faculty of Arts and Sciences) Office of Undergraduate Education policy: All faculty are required to inform students of the policies governing generative AI use in class. Faculty should post their AI policy on their Canvas site.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

HGSE (Harvard Graduate School of Education) school-level policy: Unless otherwise specified by the instructor, using generative AI to create all or part of an assignment (e.g., paper, memo, presentation, short response) and submitting it as one's own work violates the HGSE Academic Integrity Policy. Permissible uses include seeking clarification on concepts, brainstorming ideas, or generating scenarios that help contextualize learning.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

HGSE (Harvard Graduate School of Education) school-level policy: For any permitted use of generative AI tools, students must acknowledge and document that use in their assignment submission by explaining what tool(s) were used, prompts provided, and how the output was integrated into the work. Direct citations must use proper citation format.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

King's College London

6 matching claims from 6 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

At King's College London, inappropriate use of generative AI without attribution is considered academic misconduct and can result in penalties ranging from formal warnings to expulsion.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

King's College London does not require students to reference generative AI as an authoritative source in the reference list, but does require explicit acknowledgement of AI tool use in coursework.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

At King's College London, submitting AI-generated text as one's own without written departmental permission is considered misconduct under third-party involvement or text manipulation offences.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Lund University

6 matching claims from 5 official sources.

Last checked: May 13, 2026 · Review: Agent reviewed · Public JSON
Claim type: Teaching · Review: Agent reviewed

Lund University's Swedish policy says generative AI use is to support learning and research and does not replace basic skills, critical thinking, or scientific method.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

Lund University's student guidance says students who want to use GenAI for a compulsory assignment or examination must check whether it is permitted and how to report its use; presenting GenAI-generated work as one's own may be treated as cheating.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

Lund University's staff AI page says users may not write or upload sensitive material or sensitive personal data to ChatGPT, and may never upload medical information regardless of confidentiality status.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The University of Auckland

6 matching claims from 7 official sources.

Last checked: May 13, 2026 · Review: Agent reviewed · Public JSON
Claim type: AI Tool Treatment · Review: Agent reviewed

The University of Auckland's Assessment of Courses Procedures state that AI use in assessment tasks may only be restricted when the task is a controlled assessment, identified as Lane 1; AI may be used without restriction in other assessment tasks, identified as Lane 2.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

The University of Auckland's Assessment of Courses Procedures require courses to use the two-lane nomenclature, including telling students which assessments align with Lane 1 or Lane 2, and require courses and programmes to implement the two-lane approach in assessment design by 2027.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

The University of Auckland's student AI advice states that AI has no agency, treats the student prompting an AI tool as the author, and says students are ultimately responsible for work submitted for assessment.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The University of Manchester

6 matching claims from 6 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: AI Tool Treatment · Review: Agent reviewed

The University of Manchester does not ban generative AI. The university's position is that when used appropriately, AI tools have the potential to enhance teaching and learning, and can support inclusivity and accessibility.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

Manchester has adopted five core principles for AI use: transparency, accountability, competence, responsible use, and respect. All staff and students using or developing AI are personally responsible for adhering to these.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

Tools to detect AI-generated content are unreliable and biased and cannot be relied on to identify academic malpractice in summative assessment at Manchester. Output from such tools cannot currently be used as evidence of malpractice.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The University of Melbourne

6 matching claims from 5 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Other · Review: Agent reviewed

At the University of Melbourne, using GenAI tools to produce work submitted for assessment without acknowledgement constitutes academic misconduct under cl. 4.13 of the Student Academic Integrity Policy (MPF1310).

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

Students must check with their Subject Coordinator before using GenAI for assessment-related work at the University of Melbourne.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

University of Melbourne assessment materials and teaching materials constitute University IP and should never be tested on third-party external GenAI platforms such as ChatGPT; any such testing must be done only within the University's secure SparkAI platform.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of Texas at Austin

6 matching claims from 6 official sources.

Last checked: May 13, 2026 · Review: Agent reviewed · Public JSON
Claim type: Privacy · Review: Agent reviewed

UT Austin acceptable-use guidance says unauthorized AI tools are not approved for controlled or confidential university information, including student records subject to FERPA, health information, proprietary information, and other controlled or confidential data.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Procurement · Review: Agent reviewed

UT Austin AI detection guidance prohibits third-party AI detection software from being used to evaluate student work or assignments unless a university contract or purchase order is in place.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

UT Austin responsible-adoption guidance defines responsible AI use in teaching and learning as adopting AI in ways that facilitate learning outcomes and foster human development for campus community members.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Duke University

5 matching claims from 6 official sources.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: AI Tool Treatment · Review: Agent reviewed

Duke student-facing AI guidance says whether a student may use AI in coursework depends on instructor permission; unauthorized generative AI use is considered academic misconduct under the Duke Community Standard.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

Duke CTL guidance tells instructors to update syllabi with clear guidance on generative AI use and says instructors may define how, if, and when generative AI may be used in their courses.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Security Review · Review: Agent reviewed

Duke AI tool guidance describes ChatGPT as available to Duke University faculty, staff, and students, with sensitive-data use excluding PHI and governed by institutional agreement.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Monash University

5 matching claims from 9 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

When allowed or required to use AI in an assessment, students must follow all instructions and restrictions on its use, clearly document the type of AI used and how it contributed, and provide written acknowledgment of the use of AI and its extent.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

Chief Examiners have overarching responsibility for designing and setting assessment conditions, including communicating and verifying the responsible use of AI within assessment tasks.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Research · Review: Agent reviewed

Thesis examiners are not permitted to use Generative AI technologies (such as ChatGPT) during the thesis examination process to support, prepare, or write their examiners' report.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

National Taiwan University (NTU)

5 matching claims from 4 official sources.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: Teaching · Review: Agent reviewed

National Taiwan University guidance says it takes a positive and constructive view of AI tools, encourages teachers to adjust course planning and learning assessment, and says students should understand AI-tool limitations for future learning.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

National Taiwan University guidance recommends that instructors clarify AI-use principles and rules early in the course, preferably in the syllabus, including which activities and assignments may or may not use AI tools.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

National Taiwan University guidance says students using ChatGPT for assignments or reports should clearly label AI-generated content, fact-check it, and comply with academic ethics and academic-integrity requirements.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Princeton University

5 matching claims from 8 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

Princeton University requires students to disclose the use of generative AI when permitted by the instructor, rather than cite or acknowledge the use, since generative AI is an algorithm rather than a source (Rights, Rules, Responsibilities section 2.4.7).

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

Princeton University states that inappropriate uses of generative AI on any work submitted to fulfill an academic requirement, including directly copying the output, representing output as the student's own, exceeding instructor parameters, or failing to disclose its use, would constitute violations of academic integrity (Rights, Rules, Responsibilities section 2.4.6).

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

Princeton University states that the decision to allow, limit, or prohibit generative AI in a course or in undergraduate independent work remains with the faculty; faculty members have the discretion to set their own generative AI policy for their courses.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Technical University of Munich

5 matching claims from 1 official source.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Teaching · Review: Agent reviewed

TUM ProLehre guidance says instructors at TUM have broad discretion when deciding whether and how AI is used in teaching, and that related rules should be didactically grounded and communicated transparently to students.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

When AI use is restricted, TUM ProLehre guidance tells instructors to clearly define what AI may be used for, what it may not be used for, and to discuss this with students.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

TUM ProLehre guidance recommends starting AI-use decisions from the intended learning outcomes and whether AI use supports, complements, or hinders those competencies.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The Hong Kong Polytechnic University

5 matching claims from 4 official sources.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: AI Tool Treatment · Review: Agent reviewed

PolyU takes an open and forward-looking stance on the use of GenAI tools as a positive and creative force in education, and expects that the usage of generative AI will become a normal part of learning, teaching, and assessment from 2023/24 Semester One.

Evidence records: 2. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

PolyU guidelines state that work submitted for assessment must be the student's own work and must not be a copy or version of other people's work or AI-generated material.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

PolyU states that while it embraces the use of GenAI tools in education, students must adhere to high standards of academic integrity in all forms of assessments.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

UCL

5 matching claims from 2 official sources.

Last checked: May 5, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

UCL uses a 3-category assessment framework for GenAI: Category 1 requires own work only; Category 2 permits GenAI with acknowledgement; Category 3 includes essential GenAI use as part of the assessment.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: AI Tool Treatment · Review: Agent reviewed

UCL designates Microsoft Copilot as its approved GenAI tool due to its enhanced data protection, positioning it as a more secure alternative to other GenAI services.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

UCL defines academic misconduct in the context of GenAI as gaining an unfair advantage over other students; there is no single list of fair and unfair uses as it depends on the assessment category.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Université Paris-Saclay

5 matching claims from 3 official sources.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

Université Paris-Saclay's 2025-2026 first-cycle exam rules say that, for Licence professionnelle, Licence, and Licence double-diplôme students covered by the rules, use of ChatGPT or another AI tool must be explicitly mentioned when it is not prohibited, and failure to mention AI as a source will be sanctioned.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

Université Paris-Saclay's 2025-2026 master exam rules say that use of ChatGPT or another AI tool must be explicitly mentioned when it is not prohibited, like any external source borrowing or citation, and failure to mention AI as a source will be sanctioned.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

Université Paris-Saclay's official IAG working-group article says the group aims to familiarize teacher-researchers and students with generative AI and orient them toward good practices in teaching and research.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Universiti Malaya (UM)

5 matching claims from 3 official sources.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: Teaching · Review: Agent reviewed

Universiti Malaya's AI policy applies to academic staff and students across teaching and learning activities, including coursework, research projects, dissertations, final-year projects, theses, and online or blended learning activities.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

Universiti Malaya student guidance frames AI as a learning support tool and says students remain responsible for ensuring submitted work reflects their own understanding and intellectual contribution.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

Universiti Malaya guidance says lecturers specify the permitted level of AI use for each assignment or assessment, using levels from no AI use through integrated AI use.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Brown University

4 matching claims from 6 official sources.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: AI Tool Treatment · Review: Agent reviewed

Brown OIT guidance says Google Gemini Chat and NotebookLM are accessible at no cost to Brown and can be used with data classified as Risk Level 3, unlike consumer AI services named on the page with which Brown does not have agreements.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

Brown Provost guidance says any unapproved use of AI to complete assignments would be covered by Brown’s Academic Code and Graduate Student Edition Academic Code.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

Brown University Provost guidance says the University is not prescribing specific AI policies, and that faculty should give clear, unambiguous information about what AI use is and is not allowed in their courses.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

EPFL – École polytechnique fédérale de Lausanne

4 matching claims from 6 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

EPFL requires students to disclose the use of AI tools in assessment work. EPFL rules (Lex 1.3.3, Article 4) require that all assessment material that is not the student's personal and original contribution must be recognizable as such.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

EPFL recommends that teachers make explicit to students what AI use is not legitimate in a course and what rules accompany AI tool use.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

EPFL considers the use of AI-generated content in assignments without proper attribution as AI plagiarism. Tools that detect AI-generated content are not admissible as stand-alone evidence of AI plagiarism due to high risk of false positives.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Imperial College London

4 matching claims from 14 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Teaching · Review: Agent reviewed

Individual departments at Imperial may allow or prohibit the use of generative AI for specific assessments. Local (team/department/faculty) instructions take precedence over university-wide guidance. Students should check their department's current policy on using and disclosing generative AI in academic work and follow their module leader's instructions.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

Students should include a statement acknowledging their use of generative AI tools for all assessed work, specifying the tool name and version, publisher, URL, a brief description of how it was used, and confirmation that the work is their own. Further requirements such as prompts used, date of output, the output obtained, and how it was modified may also be required by individual departments.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Research · Review: Agent reviewed

Research at Imperial that involves people, personal data, or sensitive topics may require ethics approval, a Data Protection Impact Assessment (DPIA), and data-governance controls before using any AI tool. Researchers must verify whether their use of AI in research requires special approval, particularly when uploading private or confidential research data.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Institut Polytechnique de Paris

4 matching claims from 3 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

Institut Polytechnique de Paris's 2025-2026 Master programs academic regulations prohibit the use of generative AI in assessments for those programs unless explicitly authorized by the instructor in written instructions. Unauthorized use constitutes academic misconduct.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

The French-language text of Institut Polytechnique de Paris's 2025-2026 master's academic regulations prohibits the use of generative artificial intelligence in assessments for those programmes, except with the instructor's explicit authorization in written instructions. Any breach is treated as fraud.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

Under Institut Polytechnique de Paris's 2025-2026 Master programs academic regulations, when generative AI use is explicitly permitted by an instructor, students must clearly acknowledge its use in accordance with standard citation practices.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Massachusetts Institute of Technology (MIT)

4 matching claims from 4 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Procurement · Review: Agent reviewed

IS&T recommends that MIT community members consult with IS&T before purchasing or using generative AI tools, and recommends using tools already licensed by IS&T for the MIT community.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

MIT maintains a list of approved generative AI tools licensed by IS&T for use by the MIT community. Only these tools are approved for use with low- and medium-risk information, and any tool not on the list requires contacting ai-guidance@mit.edu for assessment before use or purchase. No generative AI tools are approved for use with High Risk MIT information.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

MIT prohibits the use of generative AI for purposes that may require in-depth risk assessments without prior consultation with ai-guidance@mit.edu. Such purposes include recruitment and hiring of employees, evaluating student academic performance, making investment decisions, and complaint and dispute resolution.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.
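Read together, these MIT claims amount to a small decision rule for tool use. The following is a hypothetical sketch of that rule, not an MIT tool or official logic; the risk tiers and the ai-guidance@mit.edu step come from the claim text above, while the function and its labels are illustrative:

```python
# Illustrative reading of the approval rule in the MIT claims above.
# This is a hypothetical sketch for clarity, not MIT policy text.

def genai_tool_guidance(on_approved_list: bool, risk: str) -> str:
    """Map a tool's list status and the data's risk tier to the stated outcome."""
    if risk == "high":
        # No generative AI tool is approved for High Risk MIT information.
        return "not approved for high-risk information"
    if not on_approved_list:
        # Tools not on the approved list require assessment before use or purchase.
        return "contact ai-guidance@mit.edu for assessment"
    # Listed tools are approved for low- and medium-risk information.
    return "approved for low- and medium-risk information"

print(genai_tool_guidance(on_approved_list=True, risk="high"))
# → not approved for high-risk information
```

Note that the high-risk check comes first: per the claims, the prohibition on high-risk data applies even to tools on the approved list.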

The University of Queensland

4 matching claims from 5 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Teaching · Review: Agent reviewed

UQ course profiles must clearly state if, when, and how AI (including Machine Translation) is allowed. Two options exist: Option 1 prohibits AI in in-person assessment; Option 2 permits AI use with mandatory referencing.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

UQ has disabled the Turnitin AI writing indicator functionality for all assessments from Semester 2, 2025, citing that AI detection tools are flawed and unreliable.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

At UQ, the use of AI outputs without attribution, and contrary to any direction by teaching staff, is a form of plagiarism and constitutes academic misconduct.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Tsinghua University

4 matching claims from 2 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Other · Review: Agent reviewed

Tsinghua University prohibits students from directly copying or mechanically paraphrasing AI-generated text, code, or other output and submitting it as academic coursework.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

Tsinghua University affirms that AI must remain an auxiliary tool and that teachers and students are the primary agents in teaching and learning (principal responsibility principle).

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

Tsinghua University advises instructors to determine how AI should be used according to course objectives, clearly explain AI usage norms to students at the start of each course, and remain responsible for AI-generated teaching materials.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of Amsterdam

4 matching claims from 5 official sources.

Last checked: May 11, 2026 · Review: Agent reviewed · Public JSON
Claim type: Teaching · Review: Agent reviewed

The University of Amsterdam has a policy framework for Generative AI in education that provides central guidelines for responsible use of GenAI based on scientific integrity, with room for faculties and programmes to translate the policy into their own educational practice.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

At the UvA, both the Netherlands Code of Conduct for Research Integrity and the European Code of Conduct for Research Integrity apply, which outline the principles and standards for integrity in research and teaching.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: AI Tool Treatment · Review: Agent reviewed

The University of Amsterdam is developing its own AI chat environment called UvA AI Chat, which is similar to ChatGPT but is fully self-managed and specifically designed for UvA students, lecturers and staff.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of Bristol

4 matching claims from 4 official sources.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: Research · Review: Agent reviewed

PGR students at University of Bristol are not permitted to use generative AI tools such as ChatGPT to write any text used in their thesis or APM reports, as research degree students must demonstrate ability to write about research in their own words.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

University of Bristol considers the use of AI or translation tools to be cheating if used for more than generating the occasional short phrase within a sentence or checking basic grammar and spelling, unless assessment instructions allow more comprehensive use.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

University of Bristol has published official guidance on generative AI use in taught degree programmes, stating that generative AI should not replace activities that develop intellectual rigour, student agency, and students' capacity to work through complex problems themselves.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of California, Los Angeles (UCLA)

4 matching claims from 3 official sources.

Last checked: May 11, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

UCLA Academic Senate guidance quotes the Student Conduct Code requirement that submissions must be the student’s own work or clearly acknowledge the source, and says that unless an instructor indicates otherwise, use of ChatGPT or other AI tools for course assignments is akin to receiving assistance from another person.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

UCLA DTS guidance says users may not input FERPA-protected student information, HIPAA-protected health data, employee personnel/performance data, unpublished research/IP/grant proposals, or export-controlled or restricted data into AI tools unless explicitly approved in a secure environment.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: AI Tool Treatment · Review: Agent reviewed

UCLA DTS AI guidance lists a subset of available generative AI tools including Microsoft Copilot and M365 Copilot, Google Gemini, OpenAI ChatGPT Enterprise, Google Notebook LM, AWS Bedrock models, and Zoom AI Companion.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of Illinois Urbana-Champaign

4 matching claims from 6 official sources.

Last checked: May 13, 2026 · Review: Agent reviewed · Public JSON
Claim type: Security Review · Review: Agent reviewed

Illinois Enterprise GenAI guidance for service providers includes security controls, audits, MFA, and data privacy compliance for AI systems and sensitive data.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

Illinois Enterprise GenAI transparency guidance says students need to be transparent about AI use in coursework and cite AI tools according to faculty expectations.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

Illinois CITL teaching guidance says faculty should define boundaries for AI use in student work and teach citation of AI-generated text and ideas.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Carnegie Mellon University

3 matching claims from 6 official sources.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: AI Tool Treatment · Review: Agent reviewed

CMU Eberly Center guidance identifies a growing list of CMU-vetted generative AI tools that are FERPA compliant for teaching and learning when used as instructed.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

CMU Eberly Center guidance says instructors should clarify whether AI tools count as authorized or unauthorized assistance and how students should cite AI or human assistance.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

CMU academic-integrity policy requires instructor authorization for collaboration or assistance on graded work and requires citation of all sources.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

KFUPM

3 matching claims from 1 official source.

Last checked: May 13, 2026 · Review: Agent reviewed · Public JSON
Claim type: Teaching · Review: Agent reviewed

KFUPM's AI+X page describes AI+X as an initiative designed to equip every undergraduate student with essential AI skills and says students begin with foundational AI coursework before their selected academic programs.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

KFUPM's AI+X page says all undergraduate students are required to complete AI-focused coursework and states an institutional goal to graduate 10,000 AI-skilled professionals by 2030.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: AI Tool Treatment · Review: Agent reviewed

KFUPM's AI+X page states that KFUPM launched ChatGPT Edu in partnership with OpenAI and describes the initiative as integrating AI tools into classrooms and research environments.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

KU Leuven

3 matching claims from 5 official sources.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: AI Tool Treatment · Review: Agent reviewed

KU Leuven permits and encourages responsible, critical use of generative AI in teaching and research, framing it as a complement to critical thinking and professional expertise rather than a replacement.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

KU Leuven student guidance says students remain fully responsible for what they submit and must ensure assignments allow teaching staff to assess their acquired competences.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

KU Leuven teaching guidance expects teaching staff to clearly inform students whether GenAI may be used for assignments and expects students to be transparent about GenAI use so assessment can be fair and correct.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Kyoto University

3 matching claims from 3 official sources.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

Kyoto University tells students to follow the generative AI use policy set by the course instructor or research supervisor for coursework, reports, and thesis writing.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

Kyoto University has adopted an education and learning AI initiative intended to support responsible generative AI use by faculty, staff, and students while minimizing learning risks.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

Kyoto University expects instructors to state their generative AI use policy to students and, for courses focused on basic knowledge or skills, to preserve grading fairness through checks such as written or oral examinations.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Ludwig-Maximilians-Universität München

3 matching claims from 3 official sources.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: AI Tool Treatment · Review: Agent reviewed

LMU IfKW student guidance says AI may be used in assessments only with explicit teacher permission; if no explicit permission is given, students must assume AI use is not allowed.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

LMU IfKW guidance treats verbatim or minimally changed AI-generated text without proper attribution as plagiarism, and says significant unattributed AI-generated text in assessed work can receive grade 5 (failed).

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

LMU teaching guidance recommends adapting e-exam questions for ChatGPT-era assessment, including tasks that require critical reflection on ChatGPT limitations rather than simple knowledge or comprehension questions.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Nanyang Technological University, Singapore (NTU Singapore)

3 matching claims from 3 official sources.

Last checked: May 5, 2026 · Review: Agent reviewed · Public JSON
Claim type: Research · Review: Agent reviewed

NTU states that generative AI should not be listed as an author of any paper with NTU affiliation, or as a Principal Investigator, Co-PI, or collaborator in research proposals.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

NTU guidelines state that AI detector tools should be used with caution due to frequent false positives and negatives, ease of bypass, and bias against non-native English writing patterns.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

NTU requires students to disclose the use of AI tools in their submissions and to always refer to their module's AI use policy for specific expectations.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Northwestern University

3 matching claims from 6 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Teaching · Review: Agent reviewed

Northwestern provides instructors with three course-level AI policy options: Open (GAI permitted), Conditional (GAI permitted when explicitly authorized), and Closed (GAI prohibited).

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

Unauthorized use of ChatGPT or other Generative AI tools is considered cheating and/or plagiarism per Northwestern Academic Integrity guidelines.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: AI Tool Treatment · Review: Agent reviewed

Microsoft Copilot, when signed in with a Northwestern Microsoft account, is approved for Level 2 and generally Level 3 data. Publicly available AI tools (ChatGPT, Gemini, MidJourney) may only be used with Level 1 public data.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Seoul National University

3 matching claims from 7 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Teaching · Review: Agent reviewed

SNU requires instructors to communicate AI tool usage policies (permitted/prohibited scope and reporting methods) through syllabi, and students must comply.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Procurement · Review: Agent reviewed

SNU Library warns that uploading subscribed e-resources to ChatGPT, Claude, Gemini or similar AI services, or performing bulk downloads, may exceed publisher license terms and could result in access being blocked for the entire university.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Research · Review: Agent reviewed

SNU Library advises researchers to check target journal editorial policies from the planning stage, as policies vary on whether LLMs can be listed as authors and whether AI-generated text is permitted.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Sorbonne University

3 matching claims from 3 official sources.

Last checked: May 13, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

Sorbonne University's 2024-2025 assessment rules state that assessment documents must be the personal work of the student or assessed group, that AI use is prohibited unless explicitly authorized, and that authorized AI use should mention the source.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

Sorbonne University's 2024-2025 assessment rules treat unauthorized AI-generated work presented as one's own, or authorized AI use without source mention, as plagiarism.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Research · Review: Agent reviewed

Sorbonne University's Faculty of Health research recommendations distinguish translation-only software such as DeepL or Linguee from generative AI tools and say using generative AI tools such as ChatGPT for translation is not recommended.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The Hong Kong University of Science and Technology

3 matching claims from 7 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Teaching · Review: Agent reviewed

HKUST allows faculty members the flexibility to set their own course-level policies for GenAI integration in teaching and learning.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: AI Tool Treatment · Review: Agent reviewed

HKUST provides approved generative AI tools including OpenWebUI, HKUST GenAI Platform, Google Gemini Enterprise, Microsoft Copilot Chat, and Microsoft 365 Copilot.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

HKUST assessment policies require course syllabi to clearly present policies on the use of Generative AI tools and academic integrity.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The London School of Economics and Political Science (LSE)

3 matching claims from 6 official sources.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: AI Tool Treatment · Review: Agent reviewed

LSE states that making generative AI tools available does not endorse unrestricted use, and users should check their specific course, programme, or department policy.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: AI Tool Treatment · Review: Agent reviewed

LSE requires departments or course convenors to classify authorised generative AI use in assessment as no authorised use, limited authorised use, or full authorised use, and to communicate the position to students.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

For 2025/26, LSE requested departments to add assessment safeguards, including observed assessment methods, to help assure degree integrity and prevent unfair competitive advantage from generative AI.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of Toronto

3 matching claims from 7 official sources.

Last checked: May 11, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

University of Toronto considers use of generative AI tools on marked assessments without instructor permission to be use of an unauthorized aid under the Code of Behaviour on Academic Matters.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: AI Tool Treatment · Review: Agent reviewed

University of Toronto teaching guidance says Microsoft Copilot is the recommended generative AI tool to use at U of T and, when signed in with University credentials, conforms to U of T privacy and security standards for use with up to level 3 data.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

University of Toronto recommends that instructors include a statement on their syllabus that informs students about expectations with respect to the use of AI, and provides sample syllabus statements for instructors to use.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

McGill University

2 matching claims from 5 official sources.

Last checked: May 11, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

McGill's Provost-endorsed principles state that instructors remain responsible for comporting themselves according to the highest standards of academic integrity in their use of generative AI tools. Instructors must be explicit in course outlines about the expectations for use of generative AI tools and may set limits on their use in assessment tasks.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

McGill recommends that instructors explain to students in their course outline what the appropriate use or non-use is of generative AI tools in the context of that course. The use or non-use of these tools should align with the learning outcomes associated with the course.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Peking University

2 matching claims from 1 official source.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Other · Review: Agent reviewed

Peking University's AI use guidelines apply to faculty, students, researchers, and administrators who use generative AI or other AI-assisted tools in teaching, research, and management activities.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

Peking University's AI Scientific Integrity Platform synthesizes AI use policies from 18 domestic and international sources, including Chinese government agencies (MOST, NSFC), Chinese universities (Fudan, Nanjing, Sichuan), international bodies (EU Commission, NIH), and universities (Harvard, Yale, Cambridge, UCL, Oxford, MIT).

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Shanghai Jiao Tong University

2 matching claims from 3 official sources.

Last checked: May 11, 2026 · Review: Agent reviewed · Public JSON
Claim type: Teaching · Review: Agent reviewed

Shanghai Jiao Tong University designates instructors as the primary parties responsible for AI+ instructional design; instructors must use AI technology products or services reasonably, compliantly, and effectively while observing relevant education and teaching regulations, and must set specific rules covering scenario application, risk warnings, contingency plans, and related matters.

Evidence records: 2. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

Shanghai Jiao Tong University requires students to understand and follow each course's AI use rules, complying with teaching plans, intellectual-property laws and regulations, and academic-integrity requirements in classroom learning, assignment feedback, and related activities; unauthorized use of online learning-support platforms and similar conduct is classified as a category of academic misconduct.

Evidence records: 2. Original evidence remains canonical on the linked university record and public JSON.

The University of Warwick

2 matching claims from 5 official sources.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

Warwick's student-facing guidance says students may use AI only within requirements set out in assessment briefs and course handbooks, which may restrict or prohibit AI use.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

Warwick's assessment-design guidance says that generative AI use in student submissions needs thoughtful support so that responsible use and a clear demonstration of human achievement are maintained.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of Chicago

2 matching claims from 4 official sources.

Last checked: May 5, 2026 · Review: Agent reviewed · Public JSON
Claim type: AI Tool Treatment · Review: Agent reviewed

At the University of Chicago, ChatGPT 3.5 and ChatGPT 4.0 are approved only for data that is made publicly available by its source, with restrictions limiting use to non-sensitive information.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: AI Tool Treatment · Review: Agent reviewed

The University of Chicago provides a central hub at genai.uchicago.edu for information on generative AI tools, training, resources, and guidance for the university community.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of Michigan-Ann Arbor

2 matching claims from 8 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

U-M requires AI use in teaching and learning to align with principles of honesty, candor, openness, and integrity in scholarship and research, including appropriate disclosure and citation.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

U-M leaves GenAI policy to individual instructors, who may allow, restrict, or forbid AI use in their courses. Course policies should be clearly articulated in syllabi.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Yonsei University

2 matching claims from 1 official source.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: Teaching · Review: Agent reviewed

Yonsei University Research Ethics Center guidance says generative AI use in courses should be based on clear agreement between instructors and learners, with instructors giving minimum guidance when use is permitted.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

Yonsei University Research Ethics Center guidance tells students that generative AI may be permitted or restricted depending on the course and that students should consult the syllabus and course policies.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

City University of Hong Kong (CityUHK)

1 matching claim from 1 official source.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: Teaching · Review: Agent reviewed

CityUHK has an official guideline index for the use of generative AI tools in teaching and learning, effective from Semester A 2025/26, with separate staff and student guideline links.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Delft University of Technology

1 matching claim from 1 official source.

Last checked: May 11, 2026 · Review: Agent reviewed · Public JSON
Claim type: AI Tool Treatment · Review: Agent reviewed

TU Delft teaching guidance says Microsoft Copilot Chat, after login with NetID, is currently the only generative AI tool permitted for use at TU Delft for education.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

New York University (NYU)

1 matching claim from 6 official sources.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: Teaching · Review: Agent reviewed

NYU offers access and support for Google's Gemini and NotebookLM to faculty, staff, and students through institutional accounts.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The University of Edinburgh

1 matching claim from 6 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

At the University of Edinburgh, presenting AI outputs as your own original work, submitting AI-generated text without acknowledgment, and using AI agents within university learning platforms constitute academic misconduct.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Université PSL

1 matching claim from 2 official sources.

Last checked: May 11, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

Université PSL academic regulations state that, when an instructor authorizes the use of AI-based tools such as ChatGPT, that use must be explicitly disclosed like a source citation; failure to disclose AI use is considered plagiarism and sanctioned as such.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Data and advice boundaries

Theme pages expose index slices, not new conclusions.

  • Public pages and public JSON should remain consistent because both are built from the promoted public release dataset.
  • Original-language evidence is canonical. Translations and display summaries are auxiliary.
  • Confidence is separate from reviewState; reviewState describes workflow status.
  • Tracker metadata is openly licensed. Official source documents, page text, PDFs, and other source materials retain their original rights and terms.
  • This tracker is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.
  • Theme matching is based on visible public claim and evidence text; it is not a new review decision.

Browse all records at /universities or inspect the dataset at /api/public/v1/universities.json.
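For programmatic use, theme matching over the public JSON can be reproduced with a simple case-insensitive text filter. The sketch below is illustrative only: the field names (`name`, `claims`, `text`) and the sample records are assumptions for demonstration, not the confirmed schema of `/api/public/v1/universities.json`; inspect the live endpoint for the actual structure.

```python
import json
from urllib.request import urlopen  # used only in the commented-out fetch below


def filter_claims(records, keyword):
    """Return records whose visible claim text mentions `keyword`
    (case-insensitive), mirroring how theme pages match on visible
    claim text rather than inferring new policy conclusions."""
    keyword = keyword.lower()
    return [
        r for r in records
        if any(keyword in c.get("text", "").lower() for c in r.get("claims", []))
    ]


# Fetching the real dataset (host omitted here; schema assumed, not confirmed):
# with urlopen("https://<tracker-host>/api/public/v1/universities.json") as resp:
#     records = json.load(resp)

# Hypothetical sample records standing in for the real payload:
records = [
    {"name": "Example University",
     "claims": [{"text": "Instructors must state their ChatGPT policy in syllabi."}]},
    {"name": "Other University",
     "claims": [{"text": "Library licensing guidance for e-resources."}]},
]

matches = filter_claims(records, "chatgpt")
print([r["name"] for r in matches])  # → ['Example University']
```

Because matching is purely textual, results should be cross-checked against each record's review state, confidence, and canonical original-language evidence before citation.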