New Haven, United States

Yale University

Yale University is listed at QS 2026 rank 21 and has 12 source-backed AI policy claim records drawn from 8 official source attributions. The public record preserves original-language evidence snippets, source URLs, snapshot hashes, confidence, and review state.

Policy status: Reviewed evidence-backed record
Review: Agent reviewed
Evidence-backed claims: 12
Reviewed: 12
Candidate: 0
Official sources: 8

This reference record summarizes visible public data only. Official sources and original-language evidence remain canonical; confidence is separate from review state.

This page is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.

Policy profile

Deterministic source-backed dimensions derived from this record's public claims.

Coverage score: 100/100
Coverage label: broad public coverage
Review: Machine candidate
Analysis confidence: 80%

Policy profile rows are machine-candidate derived metadata. They are not final policy conclusions; inspect the linked claim evidence before reuse.

Analysis page-quality metadata is available at /api/public/v1/analysis/page-quality.json.
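The page-quality endpoint can be consumed programmatically. A minimal Python sketch, assuming the JSON exposes fields matching the Policy profile shown above; the field names here are guesses inferred from this page, not the published schema:

```python
import json

# Hypothetical field names inferred from the Policy profile block above;
# the actual page-quality.json schema may differ.
PROFILE_FIELDS = ("coverage_score", "coverage_label", "review", "confidence")

def parse_page_quality(payload: str) -> dict:
    """Extract the assumed profile fields from a page-quality JSON payload."""
    data = json.loads(payload)
    return {field: data.get(field) for field in PROFILE_FIELDS}

# Example payload mirroring the values displayed on this page.
sample = json.dumps({
    "coverage_score": 100,
    "coverage_label": "broad public coverage",
    "review": "machine_candidate",
    "confidence": 0.80,
})
profile = parse_page_quality(sample)
print(profile["coverage_label"])  # broad public coverage
```

Missing keys come back as `None` rather than raising, so the sketch tolerates schema drift; inspect the live endpoint before relying on any field name.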

AI disclosure

Yale University has 2 source-backed public claims for AI disclosure; deterministic analysis status: unclear.

Status: Unclear
Review: Machine candidate
Confidence: 82%
Evidence: 2
Sources: 2

Research guidance

Yale University has 1 source-backed public claim for research guidance; deterministic analysis status: recommended.

Status: Recommended
Review: Machine candidate
Confidence: 78%
Evidence: 1
Sources: 1

Security and procurement

Yale University has 1 source-backed public claim for security and procurement; deterministic analysis status: recommended.

Status: Recommended
Review: Machine candidate
Confidence: 77%
Evidence: 1
Sources: 1

Coverage score measures breadth of public, source-backed coverage only. It is not a policy quality, strictness, legal adequacy, safety, or compliance score.

Evidence-backed claims

12 reviewed evidence-backed public claims

Academic Integrity

Yale academic integrity guidance treats inserting AI-generated text into an assignment without proper attribution as an academic integrity violation.

Review: Agent reviewed
Confidence: 97%

Normalized value: ai_text_without_attribution_violation

Original evidence

Evidence 1
Inserting AI-generated text into an assignment without proper attribution is a violation of academic integrity, and using AI tools in a manner that was not authorized by your instructor may also be considered a breach of academic integrity.

Privacy

Yale guidance says confidential, legally restricted, moderate-risk, and high-risk Yale data should not be entered into AI tools.

Review: Agent reviewed
Confidence: 96%

Normalized value: restricted_data_not_for_ai_tools

Original evidence

Evidence 1
Do not enter confidential or legally restricted data or any data that Yale's data classification policy identifies as moderate or high-risk into an AI tool.

AI Tool Treatment

Yale lists Clarity Platform as a Yale-provided AI chatbot platform housed within Yale secure infrastructure and available to staff, faculty, and students.

Review: Agent reviewed
Confidence: 96%

Normalized value: clarity_platform_yale_provided

Original evidence

Evidence 1
The Clarity Platform provides access to AI chatbots similar to OpenAI's ChatGPT or Microsoft Copilot Chat but housed within Yale's secure infrastructure... Available to: staff, faculty, and students.

Academic Integrity

Yale expects faculty to give clear instructions on permitted AI use and attribution, and expects students to follow instructor guidelines for coursework.

Review: Agent reviewed
Confidence: 95%

Normalized value: instructor_policy_controls_coursework_use

Original evidence

Evidence 1
Faculty members are expected to provide clear instructions on the permitted use of generative AI tools for academic work and requirements for attribution. Likewise, students are expected to follow their instructors' guidelines about permitted use of AI for coursework.

Teaching

Yale states that instructors have authority within each course to determine whether and how students may use AI on assignments.

Review: Agent reviewed
Confidence: 95%

Normalized value: course_level_instructor_authority

Original evidence

Evidence 1
Within each course, instructors at Yale have full authority to determine whether and how students may use AI when completing assignments.

Privacy

Yale Poorvu Center guidance says classroom AI use must comply with FERPA and instructors cannot require students to create external accounts for tools Yale does not directly license.

Review: Agent reviewed
Confidence: 95%

Normalized value: ferpa_and_no_required_unlicensed_external_accounts

Original evidence

Evidence 1
Your use of AI tools in the classroom must comply with the Family Educational Rights and Privacy Act (FERPA). In particular, you cannot require students to create external accounts for tools Yale does not directly license.

Privacy

Yale describes Copilot Chat as not using conversations to train AI models or sharing data with OpenAI, while limiting high-risk data to the Work search tab.

Review: Agent reviewed
Confidence: 94%

Normalized value: copilot_chat_data_protection_work_search

Original evidence

Evidence 1
It does not use your conversations to train any AI model or share any data with OpenAI, ensuring your information remains private. Functionality is split into a Work and a Web tab. High risk data should only be used in the Work search.

Academic Integrity

The Yale Poorvu Center says it does not endorse AI detection software or enable such features in Canvas.

Review: Agent reviewed
Confidence: 93%

Normalized value: ai_detection_not_endorsed

Original evidence

Evidence 1
Given the ever-evolving capabilities of AI the Poorvu Center doesn't endorse the use of AI detection software or enable such features in Canvas.

AI Tool Treatment

Yale labels its list of no-cost popular AI tools as informational only and not endorsed or provided by Yale, intended for experimentation and collaboration with low-risk, unsecured data only.

Review: Agent reviewed
Confidence: 92%

Normalized value: external_tools_low_risk_personal_use_only

Original evidence

Evidence 1
This list is for informational purposes only as these tools are not endorsed by Yale. They are for personal use only and are not provided by the university. Use when handling low-risk, unsecured data for experimentation and collaboration.

Other

Yale guidance tells users to review and verify AI-generated outputs, especially before publication.

Review: Agent reviewed
Confidence: 92%

Normalized value: review_verify_ai_outputs

Original evidence

Evidence 1
Always review and verify outputs generated by AI tools, especially before publication.

Procurement

Yale guidance directs people considering an AI product to conduct an initial review for institutional security requirements.

Review: Agent reviewed
Confidence: 91%

Normalized value: ai_product_security_review

Original evidence

Evidence 1
If you are considering acquiring an AI product, please conduct an initial review of the tool to ensure that it conforms to institutional security requirements.

Teaching

Yale Poorvu Center guidance says generative AI use is subject to individual course policies and encourages instructors to adapt model policies to their course goals.

Review: Agent reviewed
Confidence: 90%

Normalized value: model_course_ai_policies

Original evidence

Evidence 1
Generative AI use is subject to individual course policies. We encourage all instructors to adapt our model policies for their specific course and learning goals.

Candidate claims

0 machine or needs-review claims

Candidate claims are not final policy conclusions. They preserve source URL, source snapshot hash, evidence, confidence, and review state so the record can be audited before review.
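The audit fields preserved with each claim can be modeled as a simple record type. A Python sketch of that shape; the field names and values here are illustrative, not the actual public schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ClaimRecord:
    """One source-backed claim; illustrative shape, not the published schema."""
    category: str          # e.g. "Privacy", "Academic Integrity"
    normalized_value: str  # machine-readable summary, e.g. "restricted_data_not_for_ai_tools"
    evidence: str          # original-language evidence snippet
    source_url: str        # official source the snippet was taken from
    snapshot_hash: str     # hash of the source snapshot, for auditing and diffing
    confidence: float      # analysis confidence, tracked separately from review state
    review_state: str      # e.g. "agent_reviewed", "machine_candidate", "needs_review"

# Example mirroring the Privacy claim above (URL and hash are placeholders).
claim = ClaimRecord(
    category="Privacy",
    normalized_value="restricted_data_not_for_ai_tools",
    evidence="Do not enter confidential or legally restricted data ... into an AI tool.",
    source_url="https://example.edu/ai-guidance",
    snapshot_hash="<snapshot-hash>",
    confidence=0.96,
    review_state="agent_reviewed",
)
print(claim.review_state)  # agent_reviewed
```

Keeping `confidence` and `review_state` as separate fields matches the note above that confidence is separate from review state; the frozen dataclass also reflects that published records are audited rather than mutated in place.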

Official sources

8 source attributions

Change log

Source-check timeline and diff-style claim/evidence preview.

View the public change record for this university, including source snapshot hashes, claim review states, and a diff-style preview of current source-backed evidence.

Last checked: May 10, 2026
Last changed: May 10, 2026
Open change log

Corrections and missing evidence

Corrections create review tasks and do not directly change this public record.

If an official source is missing, stale, moved, blocked, or incorrectly summarized, submit a source URL, policy change report, or institution correction for review. Corrections must preserve source URLs, source language, original evidence, review state, and audit history.
