Theme

Approved AI tools policy claim records

Published claim records where the visible claim or original evidence concerns named AI tools, licensed services, approved tools, procurement, or security review. This page surfaces existing public claim text and evidence context. It does not add new policy claims or infer rules that are not visible in the linked records.

Theme: Approved AI tools · Public JSON: /api/public/v1/universities.json
36 matching university records
282 matching source-backed claims
291 evidence records
222 official sources on matching records
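The counts above are drawn from the public JSON feed linked in the header. As a minimal sketch of how that feed might be consumed, the snippet below filters claim records by claim type; the schema shown (a `universities` list whose entries carry a `claims` list with `claim_type` and `text` fields) is an assumption for illustration, not documented structure of the real feed.

```python
# Sketch: filtering claim records from a feed shaped like the public JSON at
# /api/public/v1/universities.json. Field names here are assumed, not documented.
import json

# Small inline sample standing in for the real feed response.
SAMPLE_FEED = json.loads("""
{
  "universities": [
    {
      "name": "Example University",
      "claims": [
        {"claim_type": "Procurement", "text": "Only approved AI tools may be used."},
        {"claim_type": "Privacy", "text": "No confidential data in public AI tools."}
      ]
    }
  ]
}
""")

def claims_by_type(feed, claim_type):
    """Collect (university name, claim text) pairs for one claim type."""
    matches = []
    for uni in feed.get("universities", []):
        for claim in uni.get("claims", []):
            if claim.get("claim_type") == claim_type:
                matches.append((uni["name"], claim["text"]))
    return matches

print(claims_by_type(SAMPLE_FEED, "Procurement"))
```

In practice one would fetch the feed over HTTP and inspect the actual field names before filtering; the helper above only illustrates the grouping that the page's claim-type labels imply.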

Matching claim records

Visible claim and source context from public university records.

The University of Tokyo

20 matching claims from 6 official sources.

Last checked: May 9, 2026 · Review: Agent reviewed · Public JSON
Claim type: AI Tool Treatment · Review: Agent reviewed

University of Tokyo will not uniformly prohibit generative AI tools like ChatGPT in education; instead, it actively explores their potential while continuing dialogue on practical knowledge and long-term impact.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: AI Tool Treatment · Review: Agent reviewed

UTokyo does not enforce a blanket prohibition on generative AI tools; it actively explores their potential and provides practical guidance.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

UTokyo states it is unacceptable to present AI-generated text as one's own when submitting class assignments.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of California, Berkeley (UCB)

20 matching claims from 5 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Other · Review: Agent reviewed

UC Berkeley warns that individuals who accept click-through agreements for AI tools (such as OpenAI and ChatGPT terms of use) without delegated signature authority may face personal liability, including responsibility for compliance with terms and conditions.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

UC Berkeley requires researchers to comply with varying license agreement terms before using or training AI tools with materials acquired from library-licensed resources or databases. Violations can result in personal liability and campus-wide loss of access to critical research resources.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

UC Berkeley states that use of generative AI tools should be consistent with UC Berkeley's Principles of Community and the UC Principles of Responsible AI.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Australian National University (ANU)

16 matching claims from 12 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Source Status · Review: Agent reviewed

ANU approved six institutional AI principles via Academic Board in June 2023, covering excellence/integrity, research engagement, clear guidance, AI literacy, access/privacy/security, and collaborative policy development.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Procurement · Review: Agent reviewed

ANU requires that only university-approved AI solutions/software be used to ensure appropriate data governance, information security, and licensing.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

ANU Law School prohibits using generative AI to draft assessment content; all submitted work must be the student's own independent and original work.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The University of New South Wales (UNSW Sydney)

15 matching claims from 7 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Other · Review: Agent reviewed

Under UNSW's 'No Assistance' level, students are not permitted to use any generative AI tools, software, or service to search for or generate information or answers.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

UNSW's first key principle for AI in assessment requires staff to be honest and transparent about the use of any AI tool where it would reasonably be expected that use of the tool would be disclosed.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

UNSW only authorises the use of Turnitin's AI Writing Detection Tool for detecting improper AI use in student work; UNSW IT has not approved other detection tools due to privacy and accuracy concerns.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The University of Melbourne

13 matching claims from 5 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Other · Review: Agent reviewed

At the University of Melbourne, using GenAI tools to produce work submitted for assessment without acknowledgement constitutes academic misconduct under cl. 4.13 of the Student Academic Integrity Policy (MPF1310).

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

A high AI score in Turnitin's writing detection report at the University of Melbourne is not proof that academic misconduct has taken place and does not on its own constitute grounds for making an allegation of academic misconduct.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

University of Melbourne assessment materials and teaching materials constitute University IP and should never be tested on third-party external GenAI platforms such as ChatGPT; any such testing must be done only within the University's secure SparkAI platform.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of Chicago

13 matching claims from 4 official sources.

Last checked: May 5, 2026 · Review: Agent reviewed · Public JSON
Claim type: AI Tool Treatment · Review: Agent reviewed

The University of Chicago maintains a page listing approved, restricted, and unauthorized AI tools, with use conditions and review information for the university community.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

At the University of Chicago, use of confidential data with publicly available generative AI tools is prohibited without prior security and privacy review. This includes personally identifiable employee data, FERPA-covered student data, HIPAA-covered patient data, and research that is not yet publicly available.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

At the University of Chicago, generative AI systems, applications, and software products that process, analyze, or move confidential data require a security review before they are acquired, even if the software is free.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The Chinese University of Hong Kong (CUHK)

12 matching claims from 7 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

At CUHK, improper or unauthorized use of AI tools in learning activities and assessments constitutes academic dishonesty and is subject to penalties including failure grade, suspension, or termination of studies.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

CUHK requires students to declare in each assignment that they have read and understood the University's policy on AI use, complied with course teacher instructions on AI tools, and consent to AI content detection software review.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

CUHK defines four approaches to AI use in courses: (1) prohibit all use, (2) use only with prior permission, (3) use only with explicit acknowledgement, and (4) free use without acknowledgement requirement.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Columbia University

11 matching claims from 8 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: AI Tool Treatment · Review: Agent reviewed

CUIMC provides HIPAA-compliant versions of ChatGPT Education and Microsoft Copilot as approved AI chatbot tools; workforce members must use CUIMC-issued accounts for compliance.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

CUIMC restricts sensitive data (PHI, RHI, PII) use with AI to HIPAA-compliant platforms only (ChatGPT Education, approved Microsoft Copilot, CHAT with compliant models); research use requires IRB approval.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: AI Tool Treatment · Review: Agent reviewed

Columbia Law School permits students to use generative AI for studying, brainstorming, and identifying typographical errors, but not for writing, editing, revising, or translating text.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of Pennsylvania

11 matching claims from 6 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Other · Review: Agent reviewed

Penn requires all community members (educators, staff, researchers, and students) to be transparent about the use of AI and to disclose when a work product was created wholly or partially using an AI tool.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

Penn provides several licensed AI tools to its community, including Copilot Chat (Basic, free), Adobe Express (free), ChatGPT-EDU (purchase required), M365 Copilot Premium (purchase required), Gemini for Google Workspace (purchase required), Google NotebookLM (purchase required), Grammarly Pro (purchase required), Snowflake Data Analytics (purchase required), and Zoom AI Companion (free).

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

Users of AI at Penn are accountable for AI-generated content and should validate its accuracy with trusted first-party sources, being wary of misinformation or hallucinations.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Seoul National University

10 matching claims from 7 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: AI Tool Treatment · Review: Agent reviewed

SNU's AI Guidelines do not uniformly restrict or prohibit AI use; they establish standards to support rational judgment and ethical, creative use based on autonomy and trust.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

SNU requires instructors to communicate AI tool usage policies (permitted/prohibited scope and reporting methods) through syllabi, and students must comply.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Research · Review: Agent reviewed

SNU Library cites COPE's position that AI tools cannot be listed as authors because they cannot take responsibility for research results and lack legal personality.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of Cambridge

9 matching claims from 6 official sources.

Last checked: May 5, 2026 · Review: Agent reviewed · Public JSON
Claim type: Privacy · Review: Agent reviewed

Staff must avoid inputting confidential, sensitive or personal information into GenAI tools unless warranted and only in accordance with guidance. Inputting data into a free or unlicensed GenAI tool could be considered equivalent to putting it into the public domain, signifying a potential personal data breach.

Evidence records: 2. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Procurement · Review: Agent reviewed

The University's standard licensed GenAI tools are Microsoft 365 Copilot, Google Gemini, and Google NotebookLM. Use of other licensed GenAI tools is not prohibited but must be procured in accordance with applicable procurement policy, including completion of risk assessments such as DPIAs and/or ISRAs. The public, free versions of Copilot, Gemini and NotebookLM must not be used for University activities.

Evidence records: 2. Original evidence remains canonical on the linked university record and public JSON.

Claim type: AI Tool Treatment · Review: Agent reviewed

All GenAI outputs must be thoroughly evaluated by a human being before they are used. Use of GenAI must be acknowledged if it makes a significant and unrevised contribution to a substantive or impactful piece of work. Staff are responsible for ensuring any use of GenAI is conducted reasonably, lawfully and in conjunction with relevant University policies.

Evidence records: 2. Original evidence remains canonical on the linked university record and public JSON.

Cornell University

8 matching claims from 6 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Other · Review: Agent reviewed

Cornell's committee report states that any information educators are obligated to keep private under FERPA or HIPAA should not be shared with generative AI tools or uploaded to third-party AI vendors.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

Cornell's committee report states that original research or content owned by Cornell University, students, or employees should not be uploaded to AI tools, as it can become part of the AI tool's training data.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

Cornell's IT guidelines state that users are accountable for their work regardless of the tools used to produce it, and when using generative AI tools must always verify information for errors and biases and exercise caution to avoid copyright infringement.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Harvard University

8 matching claims from 12 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Other · Review: Agent reviewed

University-wide: Level 2 and above confidential data (including non-public research data, finance, HR, student records, medical information) should not be entered into publicly-available generative AI tools. Such data may only be entered into generative AI tools that have been assessed and approved by Harvard's Information Security and Data Privacy office.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Procurement · Review: Agent reviewed

University-wide: All vendor generative AI tools not currently offered by HUIT must be assessed for risk by Harvard's Information Security and Data Privacy office prior to use in Harvard work. Contact HUIT before procuring any generative AI tool.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

University-wide: AI meeting assistants (AI note takers or bots) should not be used in Harvard meetings, with the exception of approved tools with contractual protections including enterprise agreements with appropriate security and privacy protections, or tools as part of limited HUIT-directed pilot programs.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

King's College London

8 matching claims from 6 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: AI Tool Treatment · Review: Agent reviewed

King's College London does not ban the use of generative AI tools by students.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

King's College London does not require students to reference generative AI as an authoritative source in the reference list, but does require explicit acknowledgement of AI tool use in coursework.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: AI Tool Treatment · Review: Agent reviewed

Microsoft Copilot is available to all King's College London students via their KCL Microsoft account and comes with commercial data protection under the university's enterprise license.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Massachusetts Institute of Technology (MIT)

8 matching claims from 4 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Other · Review: Agent reviewed

No generative AI tools, including those licensed by IS&T, are approved for use with High Risk MIT information. Additionally, MIT does not recommend using publicly available GenAI tools not subject to an Institute licensing agreement for MIT research and educational activities, even with Low Risk or Medium Risk information.

Evidence records: 2. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

Use of generative AI tools at MIT must comply with all applicable federal and state laws and orders (including FERPA, HIPAA, Massachusetts Data Protection Standards, export control laws, and the Executive Order on Safe, Secure, and Trustworthy Development and Use of AI), Institute policies (including 10.1 Academic and Research Misconduct, 11.0 Privacy and Disclosure of Personal Information, and 13.0 Information Policies), Information Protection guidelines, and the Institute's Written Information Security Program (WISP), plus any additional policies established by the user's department, lab, center, or institute (DLCI).

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

MIT advises community members to disclose the use of generative AI tools for all academic, educational, and research-related uses, and not to publish research results relying on AI-generated content without disclosing the nature of such use.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Stanford University

8 matching claims from 13 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

Stanford's BCA issued guidance on generative AI use, and the Office of Community Standards recommends that instructors give advance notice to students when using AI detection software.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

For Stanford Graduate School of Business (GSB) MBA and MSx courses, instructors may not ban student use of AI tools for take-home coursework, including assignments and exams. Instructors may choose whether to allow AI for in-class work. For PhD and undergraduate courses, GSB follows the university-wide Generative AI Policy Guidance from the Office of Community Standards.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

Stanford School of Medicine MD and MSPA programs have a formal AI policy: students may use AI for learning, clarification, and grammar/style editing unless contrary to assignment instructions. AI use for closed-book exams or assignments where internet is restricted is prohibited unless explicitly authorized by faculty. Students are responsible for all AI-generated content they submit, must disclose and cite substantial AI contributions, and violations may result in disciplinary action.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The University of Manchester

8 matching claims from 6 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: AI Tool Treatment · Review: Agent reviewed

The University of Manchester does not ban generative AI. The university's position is that when used appropriately, AI tools have the potential to enhance teaching and learning, and can support inclusivity and accessibility.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: AI Tool Treatment · Review: Agent reviewed

The University of Manchester is the world's first university to deploy equitable Microsoft 365 Copilot access and training across its entire community, with 65,000 colleagues and students receiving the full M365 Copilot suite.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

Copilot Chat is available to everyone with a University of Manchester account. The university has a contractual agreement with Microsoft ensuring prompts and uploaded files are private, protected by the same security and encryption as emails and OneDrive. The AI system does not learn from user prompts or data.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of Oxford

8 matching claims from 6 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

Staff setting summative assessment must: declare whether/how students can use AI; review assessment design for alignment with permitted AI use; ensure equality of baseline AI tool provision where authorised; specify declaration forms for student AI use; only identify suspected unauthorised AI use through marking or university-endorsed detection tools (none currently endorsed); and handle misconduct under usual disciplinary regulations.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

All cloud-based generative AI tools must be subject to a security risk assessment before being used with University information. Free and open-source services generally cannot complete a full assessment and should not be used for confidential information.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

ChatGPT Edu and Google Gemini, when licensed via the AI Competency Centre, have been approved for processing of Confidential University data by the Information Security team. University data processed through these licensed platforms will not be used to train AI models. Confidential data must only be used with the University's approved, SSO-protected platforms.

Evidence records: 3. Original evidence remains canonical on the linked university record and public JSON.

University of Michigan-Ann Arbor

7 matching claims from 8 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: AI Tool Treatment · Review: Agent reviewed

U-M ITS offers a generative AI platform available to all active U-M faculty, staff, and students on the Ann Arbor, Flint, and Dearborn campuses and Michigan Medicine, with service offerings described as equitable and accessible.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

U-M requires using approved ITS AI Services for university data. Only data classified as Low may be used with AI services lacking a U-M contract or data agreement.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Security Review · Review: Agent reviewed

U-M requires that AI-generated computer code is always reviewed by a human, with professionally trained peer code reviews for applications handling Restricted or High data.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Monash University

6 matching claims from 9 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Research · Review: Agent reviewed

Thesis examiners are not permitted to use Generative AI technologies (such as ChatGPT) during the thesis examination process to support, prepare, or write their examiners' report.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

Students should not upload personal, sensitive, copyrighted, or licensed material to AI tools, as many AI tools cannot guarantee privacy, strong data security, or the protection of intellectual property.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

Monash enterprise systems such as Copilot carry enterprise data protections, indicated by the green shield in Monash Copilot. Students should access Microsoft Copilot with their Monash email address, which provides Monash's data protection.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

National University of Singapore (NUS)

6 matching claims from 3 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

NUS policy states that verdicts from AI detection tools are not admissible as conclusive evidence in disciplinary processes to charge students with academic dishonesty or to penalize student work.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

NUS policy states that representing AI output as one's own work without acknowledgement is plagiarism; students who submit AI-generated work without acknowledging its use can be sanctioned for academic dishonesty.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

NUS requires prior approval from Head of Department or relevant Deanery before using AI tools to provide instruction, feedback, or marks to students, submitted via an AI Risk Assessment.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

California Institute of Technology (Caltech)

5 matching claims from 2 official sources.

Last checked: May 5, 2026 · Review: Agent reviewed · Public JSON
Claim type: Teaching · Review: Agent reviewed

Caltech admissions permits applicants to use AI tools like Grammarly or Microsoft Editor for grammar and spelling review of completed essays, to generate brainstorming questions or exercises, and to research the college application process.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

In Caltech's Humanities and Social Sciences (HSS) division, students may use generative AI tools only in ways explicitly allowed by the course instructor in the course materials. Any usage not specifically allowed should be assumed to be disallowed.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

Caltech admissions AI guidelines were approved by the Undergraduate Faculty Admissions and Graduate Studies committees for the Fall 2026 application cycle, and may evolve for future cycles.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Johns Hopkins University

5 matching claims from 7 official sources.

Last checked: May 10, 2026 · Review: Needs review · Public JSON
Claim type: Academic Integrity · Review: Needs review

JHU requires all uses of AI tools in any assignment to be disclosed, with FERPA guidelines referenced for data protection.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Needs review

JHU provides official syllabus statement templates, including options that prohibit students from using ChatGPT or other AI tools to generate written content for assignments.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Procurement · Review: Needs review

JHU is working to ensure AI tools procured on behalf of the university meet privacy and security standards.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Nanyang Technological University, Singapore (NTU Singapore)

5 matching claims from 3 official sources.

Last checked: May 5, 2026 · Review: Agent reviewed · Public JSON
Claim type: Research · Review: Agent reviewed

NTU states that generative AI should not be listed as an author of any paper with NTU affiliation, or as a Principal Investigator, Co-PI, or collaborator in research proposals.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Research · Review: Agent reviewed

NTU states that the use of generative AI beyond basic spelling and grammar checks should be acknowledged and cited in research outputs, publications, and presentations.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

NTU states that misrepresenting AI-generated content as one's own work is considered academic misconduct under the 2025 NTU Academic Integrity Handbook.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Northwestern University

5 matching claims from 6 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Privacy · Review: Agent reviewed

Northwestern guidance states that faculty, staff, students, and affiliates should not enter institutional data into any generative AI tool unless the tool has been validated by the University for appropriate use and the data provider has given explicit permission.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

Northwestern classifies data into four levels. Only Level 1 (non-confidential, public) data may be uploaded to publicly available generative AI tools. Data above Level 1 requires tools approved through Northwestern IT procurement and security review.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.
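The four-level rule in the claim above amounts to a simple gate. The sketch below is hypothetical: the function name and parameters are illustrative, not Northwestern's actual tooling, and only the Level 1 vs. above-Level-1 distinction comes from the claim text.

```python
def upload_permitted(data_level: int, tool_is_approved: bool) -> bool:
    """Per the claimed rule: Level 1 (non-confidential, public) data may go
    to publicly available generative AI tools; data above Level 1 may only
    go to tools approved through IT procurement and security review."""
    if data_level == 1:
        return True          # public data: public tools are fine
    return tool_is_approved  # Levels 2-4: approved tools only

# Level 3 data into an unapproved public tool would not be permitted.
print(upload_permitted(3, tool_is_approved=False))
```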

Claim type: AI Tool Treatment · Review: Agent reviewed

Microsoft Copilot is Northwestern's primary approved generative AI tool. All students, faculty, and staff have access to free Copilot Chat, with full Copilot for Microsoft 365 available as a paid subscription.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The University of Edinburgh

5 matching claims from 6 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: AI Tool Treatment · Review: Agent reviewed

The University of Edinburgh does not ban the use of generative AI by students, though its use is restricted for much assessed work.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: AI Tool Treatment · Review: Agent reviewed

ELM (Edinburgh access to Language Models) is the University of Edinburgh's AI innovation platform and a central gateway providing safer access to generative AI through large language models.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

The University of Edinburgh lists presenting AI-generated content as one's own original work, uploading personal data or confidential information to external AI tools, and relying on AI detection tools as unacceptable uses for staff.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of British Columbia

5 matching claims from 10 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Privacy · Review: Agent reviewed

UBC says instructors or teaching assistants cannot require students to use GenAI or any other technology tool that requires sharing personal information unless the tool has undergone a UBC Privacy Impact Assessment review and been approved for use with personal information.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

UBC advises users not to enter personal information into any generative AI tool that has not been through UBC's FIPPA compliance assessment (a Privacy Impact Assessment), as doing so may be a breach of privacy.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

UBC recognizes generative AI as a tool to assist in tasks, not a replacement for human creativity and judgment, and encourages experimentation within ethical and responsible use boundaries.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Imperial College London

4 matching claims from 14 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Privacy · Review: Agent reviewed

Imperial's dAIsy AI platform uses University SSO authentication with auditing. Prompts and metadata are logged for operational monitoring, and AI model providers are configured so that users' prompts and responses are not used to train external AI models. dAIsy is approved for use with unrestricted data within Imperial's secure infrastructure.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

Students should include a statement acknowledging their use of generative AI tools for all assessed work, specifying the tool name and version, publisher, URL, a brief description of how it was used, and confirmation that the work is their own. Further requirements such as prompts used, date of output, the output obtained, and how it was modified may also be required by individual departments.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Research · Review: Agent reviewed

Research at Imperial that involves people, personal data, or sensitive topics may require ethics approval, a Data Protection Impact Assessment (DPIA), and data-governance controls before using any AI tool. Researchers must verify whether their use of AI in research requires special approval, particularly when uploading private or confidential research data.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The Hong Kong University of Science and Technology

4 matching claims from 7 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: AI Tool Treatment · Review: Agent reviewed

HKUST guidelines state that users shall not use generative AI tools for unlawful, harmful, or malicious activities, including fraud, harassment, defamation, or infringement of rights.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: AI Tool Treatment · Review: Agent reviewed

HKUST provides approved generative AI tools including OpenWebUI, HKUST GenAI Platform, Google Gemini Enterprise, Microsoft Copilot Chat, and Microsoft 365 Copilot.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

HKUST advises users not to input confidential, sensitive, or personal data into generative AI tools unless data is desensitized.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The University of Hong Kong

4 matching claims from 3 official sources.

Last checked: May 5, 2026 · Review: Agent reviewed · Public JSON
Claim type: Source Status · Review: Agent reviewed

HKU's Guidelines on the Responsible Use of Generative AI in Research were formally approved by the Senate on September 23, 2025 and are now in effect.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Research · Review: Agent reviewed

HKU states that researchers should clearly disclose generative AI tool usage in research outputs, publications, and presentations, including the type of GenAI used, data sources, and potential limitations.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Research · Review: Agent reviewed

HKU holds researchers responsible for the outputs generated by generative AI and their implications; GenAI should be used as a support tool, not a substitute for critical analysis and human expertise.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

ETH Zurich

3 matching claims from 5 official sources.

Last checked: May 5, 2026 · Review: Agent reviewed · Public JSON
Claim type: AI Tool Treatment · Review: Agent reviewed

ETH Zurich recommends Microsoft Copilot, Google Gemini, and NotebookLM for teaching purposes, as they offer data-protected access via ETH accounts where personal data is not used for training models.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

Students must refrain from disclosing copyrighted, private, or confidential information to commercial GenAI clients unless expressly permitted, and must respect privacy and copyright of content they work with.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

ETH Zurich requires transparency about GenAI use: students must declare which tools they used and for which parts of their work; lecturers must communicate when GenAI use is permitted and make their own GenAI use visible.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Peking University

3 matching claims from 1 official source.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Other · Review: Agent reviewed

Peking University's AI use guidelines apply to faculty, students, researchers, and administrators who use generative AI or other AI-assisted tools in teaching, research, and management activities.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

Peking University distinguishes two levels of AI involvement in research: instrumental assistance (using AI as a tool to handle routine or repetitive tasks) and replacement completion (using AI to independently complete tasks involving core intellectual contribution and creative labor).

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

Peking University's AI Scientific Integrity Platform synthesizes AI use policies from 18 domestic and international sources, including Chinese government agencies (MOST, NSFC), Chinese universities (Fudan, Nanjing, Sichuan), international bodies (EU Commission, NIH), and universities (Harvard, Yale, Cambridge, UCL, Oxford, MIT).

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The University of Queensland

3 matching claims from 5 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

UQ has disabled the Turnitin AI writing indicator functionality for all assessments from Semester 2, 2025, on the grounds that AI detection tools are flawed and unreliable.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: AI Tool Treatment · Review: Agent reviewed

Microsoft Copilot Chat is UQ's enterprise AI tool, available to UQ staff and students, and the UQ Library says it provides a higher level of data security and privacy than other AI tools.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

UQ says students must acknowledge where they used AI in assessment, including direct quotes or paraphrases of AI-generated content and use of AI tools for summarising, brainstorming, planning, editing, or proofreading.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

UCL

3 matching claims from 2 official sources.

Last checked: May 5, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

UCL uses a 3-category assessment framework for GenAI: Category 1 requires own work only; Category 2 permits GenAI with acknowledgement; Category 3 includes essential GenAI use as part of the assessment.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: AI Tool Treatment · Review: Agent reviewed

UCL designates Microsoft Copilot as its approved GenAI tool due to its enhanced data protection, positioning it as a more secure alternative to other GenAI services.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Teaching · Review: Agent reviewed

UCL provides Studiosity, a GenAI-powered service available 24/7 to all current students at all levels of study, to support academic writing and referencing skills.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Tsinghua University

2 matching claims from 2 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Other · Review: Agent reviewed

Tsinghua University's Guiding Principles establish five core principles for AI in education: principal responsibility (AI as auxiliary tool, teachers and students as primary agents), compliance and integrity, data security, prudence and critical thinking, and fairness and inclusiveness.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

Tsinghua University affirms that AI must remain an auxiliary tool and that teachers and students are the primary agents in teaching and learning (principal responsibility principle).

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Institut Polytechnique de Paris

1 matching claim from 3 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Teaching · Review: Needs review

École polytechnique (member school) provides teaching resources for AI integration including detection tools (Turnitin, AI Text Classifier, GPTZero) and curated expert articles on generative AI in higher education.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Data and advice boundaries

Theme pages expose index slices, not new conclusions.

  • Public pages and public JSON should remain consistent because both are built from the promoted public release dataset.
  • Original-language evidence is canonical. Translations and display summaries are auxiliary.
  • Confidence is separate from reviewState; reviewState describes workflow status.
  • Tracker metadata is open licensed. Official source documents, page text, PDFs, and other source materials retain their original rights and terms.
  • This tracker is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.
  • Theme matching is based on visible public claim and evidence text; it is not a new review decision.

Browse all records at /universities or inspect the dataset at /api/public/v1/universities.json.
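For readers working from the dataset rather than this page, the text-only theme matching described above can be sketched in a few lines. This is a hypothetical example: the endpoint path is the one this page names, but the `universities`/`claims`/`text` field names and the keyword list are assumptions for illustration, not the tracker's actual schema or matching logic.

```python
import json

# Keywords standing in for the "Approved AI tools" theme. Matching is
# text-only: no rules are inferred beyond the visible claim wording.
THEME_KEYWORDS = ("approved tool", "licensed", "procurement", "security review")

def matching_claims(dataset: dict) -> list[dict]:
    """Return claims whose visible text mentions a theme keyword."""
    matches = []
    for university in dataset.get("universities", []):
        for claim in university.get("claims", []):
            text = claim.get("text", "").lower()
            if any(keyword in text for keyword in THEME_KEYWORDS):
                matches.append({"university": university.get("name"),
                                "claim": claim.get("text")})
    return matches

# Tiny in-memory sample standing in for the payload a client might fetch
# from /api/public/v1/universities.json.
sample = {
    "universities": [
        {"name": "Example University", "claims": [
            {"text": "Copilot is the approved tool for staff."},
            {"text": "The library opened in 1902."},
        ]},
    ]
}

print(json.dumps(matching_claims(sample), indent=2))
```

Because matching runs over visible claim text only, a client filtering the public JSON this way should land on the same record set as the theme page, per the consistency note above.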