Theme

Which universities mention AI privacy or data-entry rules?

Published claim records where the visible claim or original evidence mentions privacy, personal data, confidential information, sensitive data, data entry, FERPA, GDPR, security, or information-protection rules. This page surfaces existing public claim text and evidence context. It does not add new policy claims or infer rules that are not visible in the linked records.

Theme: Privacy and data entry · Matching records: 58 · Public JSON: /api/public/v1/universities.json

58 matching university records
174 matching source-backed claims
180 evidence records
344 official sources on matching records

Citation-ready summary

Short answer for researchers, journalists, and AI answer engines.

University AI Policy Tracker currently indexes 58 public university records with 174 source-backed claims related to privacy and data entry, supported by 180 evidence records and 344 official source attributions. This page is a public dataset slice generated from promoted claim/evidence records; it does not create new policy conclusions. Original-language evidence remains canonical, and each linked university record exposes review state, confidence, source URLs, snapshot hashes, and public JSON.
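Since each record exposes public JSON, a consumer might filter the dataset slice for privacy-related claims. The sketch below is illustrative only: the field names ("universities", "claims", "claim_type", "text", "name") are assumptions, not the documented schema — consult the actual /api/public/v1/universities.json payload for the real structure.

```python
# Hedged sketch: filtering an assumed public-JSON shape for privacy claims.
# Field names here are illustrative assumptions, not the tracker's schema.

def privacy_claims(dataset: dict) -> list[dict]:
    """Return (university, claim text) pairs whose claim type mentions privacy."""
    matches = []
    for university in dataset.get("universities", []):
        for claim in university.get("claims", []):
            if "privacy" in claim.get("claim_type", "").lower():
                matches.append({
                    "university": university.get("name"),
                    "claim": claim.get("text"),
                })
    return matches

# Tiny inline sample in the assumed shape:
sample = {
    "universities": [
        {
            "name": "Example University",
            "claims": [
                {"claim_type": "Privacy",
                 "text": "Do not enter student data into public AI tools."},
                {"claim_type": "Procurement",
                 "text": "Use licensed tools only."},
            ],
        }
    ]
}

print(privacy_claims(sample))
# → [{'university': 'Example University', 'claim': 'Do not enter student data into public AI tools.'}]
```

The case-insensitive substring match mirrors how a theme page might group claim types; a real consumer should match against the claim-type values the API actually returns.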

Theme pages are search and citation aids over promoted public records. They are not official university statements, legal advice, academic integrity advice, or a new review decision.
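The summary above notes that linked records expose snapshot hashes alongside source URLs. A minimal sketch of how a reader might check saved evidence text against such a hash, assuming SHA-256 over UTF-8 bytes (the tracker does not document its hash algorithm or canonicalization here):

```python
import hashlib

# Hedged sketch: comparing evidence text to a published snapshot hash.
# SHA-256 over UTF-8 bytes is an assumption for illustration.

def snapshot_hash(snapshot_text: str) -> str:
    """Hex digest of the snapshot body (assumed SHA-256 over UTF-8)."""
    return hashlib.sha256(snapshot_text.encode("utf-8")).hexdigest()

published = snapshot_hash("original evidence snapshot body")
assert snapshot_hash("original evidence snapshot body") == published  # unchanged text matches
assert snapshot_hash("edited evidence snapshot body") != published    # any edit changes the digest
```

Whatever the real algorithm, the design intent of a snapshot hash is the same: any post-hoc edit to the evidence body produces a different digest, so cited evidence can be checked against the promoted record.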

Matching claim records

Visible claim and source context from public university records.

University of Chicago

13 matching claims from 4 official sources.

Last checked: May 5, 2026 · Review: Agent reviewed · Public JSON
Claim type: AI Tool Treatment · Review: Agent reviewed

The University of Chicago maintains a page listing approved, restricted, and unauthorized AI tools, with use conditions and review information for the university community.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

At the University of Chicago, use of confidential data with publicly available generative AI tools is prohibited without prior security and privacy review. This includes personally identifiable employee data, FERPA-covered student data, HIPAA-covered patient data, and research that is not yet publicly available.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

At the University of Chicago, generative AI systems, applications, and software products that process, analyze, or move confidential data require a security review before they are acquired, even if the software is free.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Australian National University (ANU)

8 matching claims from 12 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Source Status · Review: Agent reviewed

ANU approved six institutional AI principles via Academic Board in June 2023, covering excellence/integrity, research engagement, clear guidance, AI literacy, access/privacy/security, and collaborative policy development.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

ANU academic staff are not permitted to upload student data or academic work to generative AI platforms.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

ANU prohibits using AI to collect, use, store, or disclose personal information without express consent from the individual(s).

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Columbia University

6 matching claims from 8 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Privacy · Review: Agent reviewed

CUIMC restricts sensitive data (PHI, RHI, PII) use with AI to HIPAA-compliant platforms only (ChatGPT Education, approved Microsoft Copilot, CHAT with compliant models); research use requires IRB approval.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

Columbia Law School requires all generative AI use to comply with university data protection policy; confidential or personal information must not be shared with AI tools unless retention and training use is disabled.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: AI Tool Treatment · Review: Agent reviewed

As of March 2026, Google Gemini, NotebookLM, and Anthropic Claude are not approved for use with sensitive data at CUIMC; they may only be used with non-sensitive, non-confidential data.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of California, Berkeley (UCB)

6 matching claims from 5 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Other · Review: Agent reviewed

UC Berkeley requires users to use UC-licensed AI tools rather than individual consumer accounts to benefit from UC's contractual data protections when working with information more sensitive than Protection Level P1.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

UC Berkeley prohibits entering personal, confidential, proprietary, or otherwise sensitive information classified as Protection Level P2, P3, or P4 into generative AI tools, unless specifically allowed under UC's negotiated contracts with AI providers.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

UC Berkeley prohibits entering FERPA-protected student records, non-public instructional materials, and proprietary or unpublished research into generative AI tools.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of Cambridge

6 matching claims from 6 official sources.

Last checked: May 5, 2026 · Review: Agent reviewed · Public JSON
Claim type: Privacy · Review: Agent reviewed

Staff must avoid inputting confidential, sensitive or personal information into GenAI tools unless doing so is warranted and in accordance with guidance. Inputting data into a free or unlicensed GenAI tool could be considered equivalent to placing it in the public domain, which would constitute a potential personal data breach.

Evidence records: 2. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Procurement · Review: Agent reviewed

The University's standard licensed GenAI tools are Microsoft 365 Copilot, Google Gemini, and Google NotebookLM. Use of other licensed GenAI tools is not prohibited but must be procured in accordance with applicable procurement policy, including completion of risk assessments such as DPIAs and/or ISRAs. The public, free versions of Copilot, Gemini and NotebookLM must not be used for University activities.

Evidence records: 2. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

Data input into the University's licensed versions of Copilot, Gemini and NotebookLM is not used to train those tools. Inputting data into free or unlicensed GenAI tools could result in data being used for training, which may not be a lawful use of personal data.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Harvard University

5 matching claims from 12 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Other · Review: Agent reviewed

University-wide: Level 2 and above confidential data (including non-public research data, finance, HR, student records, medical information) should not be entered into publicly available generative AI tools. Such data may only be entered into generative AI tools that have been assessed and approved by Harvard's Information Security and Data Privacy office.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Procurement · Review: Agent reviewed

University-wide: All vendor generative AI tools not currently offered by HUIT must be assessed for risk by Harvard's Information Security and Data Privacy office prior to use in Harvard work. Contact HUIT before procuring any generative AI tool.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

University-wide: AI meeting assistants (AI note takers or bots) should not be used in Harvard meetings, with the exception of approved tools with contractual protections including enterprise agreements with appropriate security and privacy protections, or tools as part of limited HUIT-directed pilot programs.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Seoul National University

5 matching claims from 7 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Research · Review: Agent reviewed

SNU's AI Guidelines for research require cross-verification of AI outputs for errors/bias, protection of research data and confidential information, and documentation of AI use for research reproducibility.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

SNU's AI Guidelines require transparent disclosure of AI use, fact and source verification, copyright/privacy/information security compliance, bias correction, and awareness of accountability.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

SNU Library states that most e-resource subscription agreements only permit viewing, downloading, and printing; using resources for AI training/analysis or uploading to third-party AI services requires separate authorization.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The University of Tokyo

5 matching claims from 6 official sources.

Last checked: May 9, 2026 · Review: Agent reviewed · Public JSON
Claim type: Privacy · Review: Agent reviewed

UTokyo warns students to never input confidential information, personal information, or unpublished research results into AI tools, as the information might be leaked or used for AI training.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

UTokyo requires instructors who allow AI use to explain associated risks to students: information leakage, data concentration in few companies, copyright concerns, and potential bias.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

UTokyo instructs faculty not to input exam questions directly into generative AI tools, as exams are highly confidential documents.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The University of Warwick

5 matching claims from 5 official sources.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: Security Review · Review: Agent reviewed

Warwick's AI Information Compliance Policy covers everyone with a contractual or implied relationship with the University and all information processed by the University.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

Warwick's AI Information Compliance Policy says certain data, including personal or confidential material, University intellectual property, and some copyrighted or third-party data, must not be put into AI software without prior approval.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Research · Review: Agent reviewed

Warwick's AI in research guidance says its principles apply to all researchers and researchers must consider AI-related research risks including integrity, information security, and accountability risks.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of Pennsylvania

5 matching claims from 6 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Other · Review: Agent reviewed

Penn provides several licensed AI tools to its community, including Copilot Chat (Basic, free), Adobe Express (free), ChatGPT-EDU (purchase required), M365 Copilot Premium (purchase required), Gemini for Google Workspace (purchase required), Google NotebookLM (purchase required), Grammarly Pro (purchase required), Snowflake Data Analytics (purchase required), and Zoom AI Companion (free).

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

Penn users should not input moderate or high-risk Penn data (per the Penn Data Risk Classification) or intellectual property into AI tools without careful consideration of data use policies, a protective contract, and review by Penn's Privacy Office and Office of Information Security.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

Penn's Office of Audit, Compliance & Privacy mandates that users of publicly available (unlicensed) AI tools must not enter any information that could identify a student, including names, ID numbers, email addresses, or detailed descriptions of student work or engagement that could be identifiable to others.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Monash University

4 matching claims from 9 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Research · Review: Agent reviewed

Thesis examiners are not permitted to use Generative AI technologies (such as ChatGPT) during the thesis examination process to support, prepare, or write their examiners' report.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

Students should not upload personal, sensitive, copyrighted, or licensed material to AI tools, as many AI tools cannot guarantee privacy, strong data security, or the protection of intellectual property.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

Monash enterprise systems such as Copilot carry enterprise data protections, indicated by the green shield icon in Monash Copilot. Students should sign in to Microsoft Copilot with their Monash email address to receive Monash's data protections.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Northwestern University

4 matching claims from 6 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Privacy · Review: Agent reviewed

Northwestern guidance states that faculty, staff, students, and affiliates should not enter institutional data into generative AI tools unless the tools have been validated by the University for appropriate use and the data provider has given explicit permission.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

Northwestern classifies data into four levels. Only Level 1 (non-confidential, public) data may be uploaded to publicly available generative AI tools. Data above Level 1 requires tools approved through Northwestern IT procurement and security review.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: AI Tool Treatment · Review: Agent reviewed

Microsoft Copilot is Northwestern's primary approved generative AI tool. All students, faculty, and staff have access to free Copilot Chat, with full Copilot for Microsoft 365 available as a paid subscription.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of Illinois Urbana-Champaign

4 matching claims from 6 official sources.

Last checked: May 13, 2026 · Review: Agent reviewed · Public JSON
Claim type: Privacy · Review: Agent reviewed

Illinois Enterprise GenAI guidance tells users to handle data used with AI according to legal, institutional, and ethical standards, including privacy laws.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

Illinois Enterprise GenAI guidance says personally identifiable information should be anonymized, removed, or obfuscated before processing with AI systems where possible.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Security Review · Review: Agent reviewed

Illinois Enterprise GenAI guidance for service providers includes security controls, audits, MFA, and data privacy compliance for AI systems and sensitive data.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of Michigan-Ann Arbor

4 matching claims from 8 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Privacy · Review: Agent reviewed

U-M requires using approved ITS AI Services for university data. Only data classified as Low may be used with AI services lacking a U-M contract or data agreement.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Security Review · Review: Agent reviewed

U-M requires that AI-generated computer code is always reviewed by a human, with professionally trained peer code reviews for applications handling Restricted or High data.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

U-M ITS AI Services include HIPAA safeguards and may be used with Protected Health Information (PHI).

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of Oxford

4 matching claims from 6 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Other · Review: Agent reviewed

All cloud-based generative AI tools must be subject to a security risk assessment before being used with University information. Free and open-source services generally cannot complete a full assessment and should not be used for confidential information.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

ChatGPT Edu and Google Gemini, when licensed via the AI Competency Centre, have been approved for processing of Confidential University data by the Information Security team. University data processed through these licensed platforms will not be used to train AI models. Confidential data must only be used with the University's approved, SSO-protected platforms.

Evidence records: 3. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

For PGR students, the following uses of generative AI are not permitted in summative assessments: substantive original writing by GenAI (verbatim or closely paraphrased for chapters or parts thereof) which constitutes plagiarism; using AI to produce plots or data visualisations directly from prompts; and entering private or confidential data into third-party AI tools.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of Texas at Austin

4 matching claims from 6 official sources.

Last checked: May 13, 2026 · Review: Agent reviewed · Public JSON
Claim type: Privacy · Review: Agent reviewed

UT Austin acceptable-use guidance says published university information may be used freely with AI tools, while controlled or confidential university information can be used only with university-managed AI tools covered by contracts that protect university data and disable web search functionality.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

UT Austin acceptable-use guidance says unauthorized AI tools are not approved for controlled or confidential university information, including student records subject to FERPA, health information, proprietary information, and other controlled or confidential data.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Security Review · Review: Agent reviewed

UT Austin acceptable-use guidance says the CISO must review AI tools before procurement, development, deployment, or use when the tools are intended to autonomously make, or be a controlling factor in making, consequential decisions.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Yale University

4 matching claims from 8 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Privacy · Review: Agent reviewed

Yale guidance says confidential, legally restricted, moderate-risk, and high-risk Yale data should not be entered into AI tools.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

Yale Poorvu Center guidance says classroom AI use must comply with FERPA and instructors cannot require students to create external accounts for tools Yale does not directly license.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

Yale describes Copilot Chat as not using conversations to train AI models and not sharing data with OpenAI, while limiting high-risk data to Work search.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Brown University

3 matching claims from 6 official sources.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: Privacy · Review: Agent reviewed

Brown OIT guidance says users should not enter Level 2 or 3 Brown data into publicly available or vendor-enabled AI tools unless Brown has a contract for a specific service that protects the data.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Security Review · Review: Agent reviewed

Brown OIT guidance says AI tool use is subject to the same policies as other information technology resources, including acceptable use, copyright, conduct, and contract review policies.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

Brown University Communications guidance for Brown communicators says not to input identifying personal information or proprietary information into AI tools.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Carnegie Mellon University

3 matching claims from 6 official sources.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: Privacy · Review: Agent reviewed

CMU Computing Services guidance says public AI tools should not be used with student data, confidential research, or sensitive administrative tasks.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: AI Tool Treatment · Review: Agent reviewed

CMU Computing Services lists protected AI tools available at CMU and states that when users sign in with Andrew ID and password, each listed tool is FERPA-compliant and will not use data to train AI models.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: AI Tool Treatment · Review: Agent reviewed

CMU Eberly Center guidance identifies a growing list of CMU-vetted generative AI tools that are FERPA compliant for teaching and learning when used as instructed.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Cornell University

3 matching claims from 6 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Other · Review: Agent reviewed

Cornell's committee report states that any information educators are obligated to keep private under FERPA or HIPAA should not be shared with generative AI tools or uploaded to third-party AI vendors.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

Cornell's IT guidelines prohibit entering any confidential, proprietary, federally or state regulated, or otherwise sensitive or restricted Cornell information into public generative AI tools, as such information becomes public and may be stored and used by anyone.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

Cornell's AI course policy icons include a 'PP' (Privacy Protecting) icon indicating that generative AI use is permitted but no copyrighted or proprietary class materials should be uploaded unless otherwise specified.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Duke University

3 matching claims from 6 official sources.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: Security Review · Review: Agent reviewed

Duke AI tool guidance describes ChatGPT as available to Duke University faculty, staff, and students, with sensitive-data use excluding PHI and governed by institutional agreement.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Research · Review: Agent reviewed

Duke research guidance says researchers should document and publish AI decision-making alongside research and should not cite chatbot-summarized information they have not authenticated.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

Duke CTL assignment-design guidance advises that personal information should not be shared when using AI in assignments, to minimize privacy threats to students and instructors.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Imperial College London

3 matching claims from 14 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Privacy · Review: Agent reviewed

Imperial's dAIsy AI platform uses University SSO authentication with auditing. Prompts and metadata are logged for operational monitoring, and AI model providers are configured not to train on user data. Users' prompts and responses are not used to train external AI models. dAIsy is approved for use with unrestricted data within Imperial's secure infrastructure.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Academic Integrity · Review: Agent reviewed

Breaches of Imperial's dAIsy Use Policy may lead to action under Academic Misconduct procedures for students and HR/disciplinary processes for staff, as well as under Information Security and Data Protection policies. Sanctions may include removal of access, grade penalties, or formal disciplinary measures.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Research · Review: Agent reviewed

Research at Imperial that involves people, personal data, or sensitive topics may require ethics approval, a Data Protection Impact Assessment (DPIA), and data-governance controls before using any AI tool. Researchers must verify whether their use of AI in research requires special approval, particularly when uploading private or confidential research data.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

KU Leuven

3 matching claims from 5 official sources.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: Security Review · Review: Agent reviewed

KU Leuven identifies Copilot logged in with a KU Leuven account as its recommended GenAI tool, citing contractual data protection, Enterprise technical security, and Microsoft not using entered data for further training.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

KU Leuven guidance says strictly confidential data should use only Copilot logged in with a KU Leuven account, and confidential or strictly confidential data require a security check if another AI tool is needed.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

KU Leuven safe-use guidance warns users to be careful with unsupported AI tools and not to enter personal data, confidential information, IP-sensitive data, or copyrighted material.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Lund University

3 matching claims from 5 official sources.

Last checked: May 13, 2026 · Review: Agent reviewed · Public JSON
Claim type: Procurement · Review: Agent reviewed

Lund University's Swedish-language policy says generative AI use must comply with privacy and security laws and that procured tools or existing licensing agreements should be used in the first instance.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

Lund University's staff AI page says users may not write or upload sensitive material or sensitive personal data to ChatGPT, and may never upload medical information regardless of confidentiality status.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

Lund University's student guidance says students should primarily use Lund-licensed tools such as Microsoft Copilot Chat and Google Gemini, and must not upload other students' work, sensitive personal data, or copyright-protected material.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Stanford University

3 matching claims from 13 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Privacy · Review: Agent reviewed

Stanford School of Medicine MD and MSPA programs strictly prohibit entering confidential research data, patient data, or protected health information (PHI) into public AI platforms. Use of patient-identifying information or PHI in public AI tools is strictly forbidden. Students must use Stanford-approved AI platforms (e.g., Stanford Healthcare Secure GPT, Stanford AI Playground) when handling sensitive data.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

Stanford University IT (UIT) advises users to avoid inputting Moderate or High Risk Data into third-party AI platforms or tools not covered by a Stanford Business Associates Agreement, whether using a personal or Stanford account. Users should opt out of sharing chat data with third-party AI providers when possible.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

Stanford University Communications (UComm) has issued AI guidelines for marketing and communications staff requiring: human oversight of all AI-generated content (non-delegable personal responsibility), adherence to university policies, prohibition on inputting confidential or legally privileged information into generative AI tools, prohibition on using AI to promote for-profit organizations or engage in political advocacy, and prohibition on using high-risk data in prompts. Stanford AI Playground is recommended as the primary platform. These guidelines apply to all regular staff, interns, casual employees, and consultants in marketing and communications functions.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The Chinese University of Hong Kong (CUHK)

3 matching claims from 7 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: AI Tool Treatment · Review: Agent reviewed

CUHK provides Microsoft Copilot Chat (Basic) free to all students and staff under the Microsoft 365 license, with enterprise data protection when signed in with a CUHK account.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

CUHK's student AI guide establishes ethical principles of accountability, transparency, and acknowledgement for AI tool use. Users are accountable for AI-generated outputs and must fact-check all outputs.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

CUHK Library advises users to be aware of privacy policies of AI platforms, to opt out of data being used for model training where possible, and to avoid inputting confidential information into external AI tools.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The London School of Economics and Political Science (LSE)

3 matching claims from 6 official sources.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: Privacy · Review: Agent reviewed

LSE research guidance tells researchers not to share personal, sensitive, or confidential data with third-party AI tools unless the tools meet LSE privacy and security standards, and it strongly encourages Microsoft Copilot for privacy and security.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

LSE legal and regulatory guidance tells users not to put personal data, confidential or commercially sensitive data, or certain copyrighted content into external AI tools.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: AI Tool Treatment · Review: Agent reviewed

LSE Claude for Education guidance says use is optional, student and staff data will not be used to train Anthropic models, and personal, operational, or confidential data must not be shared through Claude.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The University of New South Wales (UNSW Sydney)

3 matching claims from 7 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Other · Review: Agent reviewed

UNSW only authorises the use of Turnitin's AI Writing Detection Tool for detecting improper AI use in student work; UNSW IT has not approved other detection tools due to privacy and accuracy concerns.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

UNSW advises students not to include any personal or sensitive information in AI prompts, including addresses, names, emails, zID, or intellectual property, and recommends using Microsoft Copilot with a UNSW account for data privacy.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

UNSW activated Microsoft Copilot with Commercial Data Protection for all staff and students with a zID in May 2024, providing a secure platform where sensitive information is stored and accessed only by authorised staff.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of British Columbia

3 matching claims from 10 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

The use of applications to detect AI-generated content is strongly discouraged at UBC due to concerns about effectiveness, accuracy, bias, privacy, and intellectual property. Turnitin AI detection is not enabled.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

UBC says instructors or teaching assistants cannot require students to use GenAI or any other technology tool that requires sharing personal information unless the tool has undergone a UBC Privacy Impact Assessment review and been approved for use with personal information.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

Do not enter personal information into any generative AI tool that has not been through UBC's FIPPA compliance assessment (PIA), as to do so may be a breach of privacy.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

EPFL – École polytechnique fédérale de Lausanne

2 matching claims from 6 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Privacy · Review: Agent reviewed

EPFL advises students not to input confidential, private or personal information into generative AI tools. When using generative AI tools, students are sharing data with private companies and lose control over it.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

EPFL guidance says enterprise licenses such as Microsoft 365 Copilot via EPFL account are currently not a secure solution for processing regulated data because EPFL has not signed a data processing agreement guaranteeing aligned data protection measures.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

ETH Zurich

2 matching claims from 5 official sources.

Last checked: May 5, 2026 · Review: Agent reviewed · Public JSON
Claim type: AI Tool Treatment · Review: Agent reviewed

ETH Zurich recommends Microsoft Copilot, Google Gemini, and NotebookLM for teaching purposes, as they offer data-protected access via ETH accounts where personal data is not used for training models.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

Students must refrain from disclosing copyrighted, private, or confidential information to commercial GenAI clients unless expressly permitted, and must respect privacy and copyright of content they work with.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Johns Hopkins University

2 matching claims from 7 official sources.

Last checked: May 10, 2026 · Review: Needs review · Public JSON
Claim type: Academic Integrity · Review: Needs review

JHU requires all uses of AI tools in any assignment to be disclosed, with FERPA guidelines referenced for data protection.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Procurement · Review: Needs review

JHU is working to ensure AI tools procured on behalf of the university meet privacy and security standards.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Korea University

2 matching claims from 3 official sources.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: Privacy · Review: Agent reviewed

Korea University tells instructors not to enter personal information, academic records, assessment questions, or other sensitive or non-public materials into AI tools, with special caution for external AI services.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

Korea University tells learners not to enter personal information, non-public learning materials, or assessment questions into external AI tools, and to remember that AI inputs may be stored or reused.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Kyoto University

2 matching claims from 3 official sources.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: Security Review · Review: Agent reviewed

Kyoto University states that highly confidential information that must not leave the university should not be entered into generative AI services.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

Kyoto University warns students not to casually enter privacy-related information, confidential information, or copyrighted works into generative AI prompts or uploaded data.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Massachusetts Institute of Technology (MIT)

2 matching claims from 4 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Other · Review: Agent reviewed

Use of generative AI tools at MIT must comply with all applicable federal and state laws and orders (including FERPA, HIPAA, Massachusetts Data Protection Standards, export control laws, and the Executive Order on Safe, Secure, and Trustworthy Development and Use of AI), Institute policies (including 10.1 Academic and Research Misconduct, 11.0 Privacy and Disclosure of Personal Information, and 13.0 Information Policies), Information Protection guidelines, and the Institute's Written Information Security Program (WISP), plus any additional policies established by the user's department, lab, center, or institute (DLCI).

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

MIT departments, labs, centers, and institutes (DLCIs) already using a generative AI tool or service must ensure that the tool complies with all Institute policies and Information Protection guidelines, and must contact ai-guidance@mit.edu for consultation or assessment if needed.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

McGill University

2 matching claims from 5 official sources.

Last checked: May 11, 2026 · Review: Agent reviewed · Public JSON
Claim type: AI Tool Treatment · Review: Agent reviewed

McGill explicitly rejects DeepSeek AI for McGill-managed or research-funded devices, rejects Read.AI and other AI meeting bots for McGill use, and says tools not mentioned in the available AI tools list are automatically considered rejected.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

McGill guidance says users should mitigate potential privacy concerns by removing personally identifying information when using AI tools, be careful with sensitive or restricted material, and avoid using Personal Health Information (PHI) or Payment Card Industry (PCI) data with AI tools.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

National Taiwan University (NTU)

2 matching claims from 4 official sources.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: Privacy · Review: Agent reviewed

NTU Health Policy and Management conduct rules allow AI tools according to teaching and research needs, but require users to clearly explain the content and scope of use, respect privacy, disclose assistance sources, verify AI output, and take responsibility for results.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

NTU Social Work journal publication ethics says reviewers must not input manuscript content or review-related material into generative AI tools to help write review comments, in order to protect manuscript confidentiality and author rights.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Princeton University

2 matching claims from 8 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Privacy · Review: Agent reviewed

Princeton University requires that only University-licensed generative AI tools should be used with University Information classified as Internal or Confidential, and the use of publicly available generative AI tools in conjunction with such Princeton Information is not permitted by the University.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

Princeton University's OIT guidance states that non-public Princeton data should not be used in public generative AI tools, and that University Information classified as Restricted must not be used with any AI tool.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The University of Edinburgh

2 matching claims from 6 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

The University of Edinburgh lists presenting AI-generated content as one's own original work, uploading personal data or confidential information to external AI tools, and relying on AI detection tools as unacceptable uses for staff.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

The University of Edinburgh states that ELM chat histories are private to the individual user and are not accessible to lecturers for checking student work.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The University of Manchester

2 matching claims from 6 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Privacy · Review: Agent reviewed

Copilot Chat is available to everyone with a University of Manchester account. The university has a contractual agreement with Microsoft ensuring prompts and uploaded files are private, protected by the same security and encryption as emails and OneDrive. The AI system does not learn from user prompts or data.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: AI Tool Treatment · Review: Agent reviewed

The University recommends Microsoft Copilot for AI-related work. It is GDPR-compliant and protects University and personal data. Staff should always carefully consider whether adding personal information into an AI tool is necessary.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The University of Melbourne

2 matching claims from 5 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Other · Review: Agent reviewed

The University of Melbourne advises researchers that they should not share confidential information or information about an innovation in a generative AI prompt, as that may mean the IP is no longer owned by the researcher or by the University.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

University of Melbourne students must not upload personal information (full name, date of birth, address, or other confidential/sensitive/private information) to GenAI tools.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The University of Sydney

2 matching claims from 8 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Teaching · Review: Agent reviewed

The University of Sydney has adopted a 'two-lane approach' to assessment: Lane 1 comprises secure, in-person supervised assessments to assure learning, and Lane 2 comprises open assessments that support and scaffold the use of all available and relevant tools including generative AI.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

The University of Sydney's generative AI guardrails state that confidential, personal, proprietary, or otherwise sensitive information should not be entered into AI tools.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Tsinghua University

2 matching claims from 2 official sources.

Last checked: May 6, 2026 · Review: Agent reviewed · Public JSON
Claim type: Other · Review: Agent reviewed

Tsinghua University prohibits teachers and students from using sensitive information, classified data, or unauthorized data to train or operate AI models.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Other · Review: Agent reviewed

Tsinghua University's Guiding Principles establish five core principles for AI in education: principal responsibility (AI as auxiliary tool, teachers and students as primary agents), compliance and integrity, data security, prudence and critical thinking, and fairness and inclusiveness.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Universiti Malaya (UM)

2 matching claims from 3 official sources.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: Privacy · Review: Agent reviewed

Universiti Malaya guidance warns students not to upload confidential academic materials, research data, or university documents to public AI platforms without permission.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

Universiti Malaya's AI policy includes raising awareness of ethical risks, copyright issues, data security (including data privacy), and algorithmic bias in AI use.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of California, San Diego (UCSD)

2 matching claims from 6 official sources.

Last checked: May 13, 2026 · Review: Agent reviewed · Public JSON
Claim type: Security Review · Review: Agent reviewed

UC San Diego Blink says the university evaluates and supports AI services integrated into supported platforms using criteria that reflect University of California data protection and security policies, including Electronic Information Security, Electronic Communication, Export Control, and FERPA standards.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

A 2025 UC San Diego Senate-Administration Workgroup report says researchers should understand UC San Diego data classification policies, including which protected health information, human subjects data, student work, intellectual property, and similar data may not be used with non-secure GenAI tools.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of Toronto

2 matching claims from 7 official sources.

Last checked: May 11, 2026 · Review: Agent reviewed · Public JSON
Claim type: AI Tool Treatment · Review: Agent reviewed

University of Toronto teaching guidance says Microsoft Copilot is the recommended generative AI tool to use at U of T and, when signed in with University credentials, conforms to U of T privacy and security standards for use with up to level 3 data.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Claim type: Privacy · Review: Agent reviewed

University of Toronto Information Security guidelines state that no data should be provided to a generative AI system if any part of that data should not appear in the system's outputs, and that users must verify AI tools have been assessed by the university as suitable for the relevant data classification level before sharing personal information or university data classified as level 2, 3, or 4.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Fudan University

1 matching claim from 2 official sources.

Last checked: May 11, 2026 · Review: Agent reviewed · Public JSON
Claim type: Academic Integrity · Review: Agent reviewed

A public page from Fudan University's Office of Student Affairs cites the "Fudan University Regulations on the Use of AI Tools in Undergraduate Theses (Designs) (Trial)", stating that AI-assisted writing in undergraduate theses requires instructor consent and disclosure of use, that innovative work such as data collection and distilling core arguments must not rely on AI, and that AI use is prohibited for classified or privacy-related content.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Ludwig-Maximilians-Universität München

1 matching claim from 3 official sources.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: Privacy · Review: Agent reviewed

LMU IfKW guidance says material containing personal information must not be entered into AI systems without consent, and only where German or EU data-protection standards are met.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Nanyang Technological University, Singapore (NTU Singapore)

1 matching claim from 3 official sources.

Last checked: May 5, 2026 · Review: Agent reviewed · Public JSON
Claim type: Privacy · Review: Agent reviewed

NTU prohibits uploading confidential, sensitive, or personal data to external generative AI platforms unless specific conditions are met: legal compliance, restricted access, no data retention, and written permission from data owners.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

New York University (NYU)

1 matching claim from 6 official sources.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: Privacy · Review: Agent reviewed

NYU institutional accounts for Gemini and NotebookLM do not train AI models on user data and do not log queries or answers.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Sorbonne University

1 matching claim from 3 official sources.

Last checked: May 13, 2026 · Review: Agent reviewed · Public JSON
Claim type: Privacy · Review: Agent reviewed

Sorbonne University's Faculty of Health research recommendations say not to transmit non-public or unpublished content, personal data, confidential data, or sensitive data to a generative AI tool.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The Hong Kong University of Science and Technology

1 matching claim from 7 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: Privacy · Review: Agent reviewed

HKUST advises users not to input confidential, sensitive, or personal data into generative AI tools unless the data is desensitized.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The University of Auckland

1 matching claim from 7 official sources.

Last checked: May 13, 2026 · Review: Agent reviewed · Public JSON
Claim type: Privacy · Review: Agent reviewed

The University of Auckland's public TeachWell explainer for the AI Usage Standard says users should assess data against the University's data classification before submitting it to an AI tool, and says restricted data should not be used with AI chat services.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The University of Hong Kong

1 matching claim from 3 official sources.

Last checked: May 5, 2026 · Review: Agent reviewed · Public JSON
Claim type: Privacy · Review: Agent reviewed

HKU states that input data used with generative AI should comply with data protection laws and university policies, and that users should avoid sharing sensitive, confidential, or proprietary information with GenAI platforms.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

The University of Queensland

1 matching claim from 5 official sources.

Last checked: May 10, 2026 · Review: Agent reviewed · Public JSON
Claim type: AI Tool Treatment · Review: Agent reviewed

Microsoft Copilot Chat is UQ's enterprise AI tool, available to UQ staff and students, and the UQ Library says it provides a higher level of data security and privacy than other AI tools.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Université PSL

1 matching claim from 2 official sources.

Last checked: May 11, 2026 · Review: Agent reviewed · Public JSON
Claim type: Privacy · Review: Agent reviewed

PSL-affiliated guidance advises that personal data should not be shared with third parties when generative AI is used, and that generated text should be checked so it does not constitute plagiarism or contain personal data.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

University of California, Los Angeles (UCLA)

1 matching claim from 3 official sources.

Last checked: May 11, 2026 · Review: Agent reviewed · Public JSON
Claim type: Privacy · Review: Agent reviewed

UCLA DTS guidance says users may not input FERPA-protected student information, HIPAA-protected health data, employee personnel/performance data, unpublished research/IP/grant proposals, or export-controlled or restricted data into AI tools unless explicitly approved in a secure environment.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Yonsei University

1 matching claim from 1 official source.

Last checked: May 12, 2026 · Review: Agent reviewed · Public JSON
Claim type: Privacy · Review: Agent reviewed

Yonsei University Research Ethics Center guidance warns users not to enter confidential, sensitive, or personally identifiable information into generative AI tools.

Evidence records: 1. Original evidence remains canonical on the linked university record and public JSON.

Data and advice boundaries

Theme pages expose index slices, not new conclusions.

  • Public pages and public JSON should remain consistent because both are built from the promoted public release dataset.
  • Original-language evidence is canonical. Translations and display summaries are auxiliary.
  • Confidence is separate from reviewState; reviewState describes workflow status.
  • Tracker metadata is openly licensed. Official source documents, page text, PDFs, and other source materials retain their original rights and terms.
  • This tracker is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.
  • Theme matching is based on visible public claim and evidence text; it is not a new review decision.

Browse all records at /universities or inspect the dataset at /api/public/v1/universities.json.
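As a sketch of how the keyword matching described on this page could be reproduced over the public dataset: the filter below treats a claim as theme-matching when its visible text mentions one of the theme's terms. The record shape and field names (`university`, `claims`, `text`, `reviewState`) are illustrative assumptions only; the canonical schema is whatever /api/public/v1/universities.json actually returns.

```python
# Keywords from the theme definition: a claim matches when its visible
# claim text mentions any of these terms (case-insensitive).
THEME_KEYWORDS = [
    "privacy", "personal data", "confidential", "sensitive data",
    "data entry", "ferpa", "gdpr", "security",
]

def matches_theme(claim_text: str) -> bool:
    """Return True if the visible claim text mentions a theme keyword."""
    lowered = claim_text.lower()
    return any(keyword in lowered for keyword in THEME_KEYWORDS)

# Hypothetical record, loosely following field names this page exposes
# (review state, claim text); not the published schema.
sample_record = {
    "university": "Example University",
    "claims": [
        {"text": "Students must not enter personal data into external AI tools.",
         "reviewState": "agent_reviewed"},
        {"text": "AI tutors are available for calculus courses.",
         "reviewState": "needs_review"},
    ],
}

matching = [c for c in sample_record["claims"] if matches_theme(c["text"])]
print(len(matching))  # → 1
```

This mirrors the boundary stated above: matching is a text-level search aid over promoted records, not a new review decision, so the filter inspects only visible claim text and never the review or confidence fields.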