Change log

Yale University

Source-check timeline, source snapshot hashes, claim review state, and a diff-style preview of current source-backed claim evidence.

Change summary

Current public record freshness and review state.

Yale University currently has 12 source-backed claim records and 8 official source attributions. Latest tracked change date: May 10, 2026.

This tracker is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.

Claim/evidence diff preview

A diff-style preview built from the current public claim/evidence records. Full old/new source diffs require paired historical snapshots.

Yale University current policy evidence

Inserted lines (prefixed with +) represent current public claim and evidence records in the source-backed dataset.

+20 −0
 # Yale University AI policy record
+academic_integrity: Yale academic integrity guidance treats inserting AI-generated text into an assignment without proper attribution as an academic integrity violation.
+Evidence (en, bbac849b9594): Inserting AI-generated text into an assignment without proper attribution is a violation of academic integrity, and using AI tools in a manner that was not authorized by your instructor may also be considered a breach of academic integrity.
+privacy: Yale guidance says confidential, legally restricted, moderate-risk, and high-risk Yale data should not be entered into AI tools.
+Evidence (en, cf53048e544a): Do not enter confidential or legally restricted data or any data that Yale's data classification policy identifies as moderate or high-risk into an AI tool.
+ai_tool_treatment: Yale lists Clarity Platform as a Yale-provided AI chatbot platform housed within Yale secure infrastructure and available to staff, faculty, and students.
+Evidence (en, 688e40fe0801): The Clarity Platform provides access to AI chatbots similar to OpenAI's ChatGPT or Microsoft Copilot Chat but housed within Yale's secure infrastructure... Available to: staff, faculty, and students.
+academic_integrity: Yale expects faculty to give clear instructions on permitted AI use and attribution, and expects students to follow instructor guidelines for coursework.
+Evidence (en, cf53048e544a): Faculty members are expected to provide clear instructions on the permitted use of generative AI tools for academic work and requirements for attribution. Likewise, students are expected to follow their instructors' guidelines about permitted use of AI for coursework.
+teaching: Yale states that instructors have authority within each course to determine whether and how students may use AI on assignments.
+Evidence (en, 2cd0d4650391): Within each course, instructors at Yale have full authority to determine whether and how students may use AI when completing assignments.
+privacy: Yale Poorvu Center guidance says classroom AI use must comply with FERPA and instructors cannot require students to create external accounts for tools Yale does not directly license.
+Evidence (en, 43854432f3c0): Your use of AI tools in the classroom must comply with the Family Educational Rights and Privacy Act (FERPA). In particular, you cannot require students to create external accounts for tools Yale does not directly license.
+privacy: Yale describes Copilot Chat as not using conversations to train AI models or sharing data with OpenAI, while limiting high-risk data to Work search.
+Evidence (en, 688e40fe0801): It does not use your conversations to train any AI model or share any data with OpenAI, ensuring your information remains private. Functionality is split into a Work and a Web tab. High risk data should only be used in the Work search.
+academic_integrity: The Yale Poorvu Center says it does not endorse AI detection software or enable such features in Canvas.
+Evidence (en, bbac849b9594): Given the ever-evolving capabilities of AI the Poorvu Center doesn't endorse the use of AI detection software or enable such features in Canvas.
+ai_tool_treatment: Yale labels listed no-cost popular AI tools as informational only, not endorsed or provided by Yale, and for low-risk unsecured data experimentation and collaboration.
+Evidence (en, 688e40fe0801): This list is for informational purposes only as these tools are not endorsed by Yale. They are for personal use only and are not provided by the university. Use when handling low-risk, unsecured data for experimentation and collaboration.
+other: Yale guidance tells users to review and verify AI-generated outputs, especially before publication.
+Evidence (en, cf53048e544a): Always review and verify outputs generated by AI tools, especially before publication.

Claim changes

12 claim records

academic_integrity

Yale academic integrity guidance treats inserting AI-generated text into an assignment without proper attribution as an academic integrity violation.

Review: Agent reviewed · Confidence 97% · Evidence 1 · Languages: en

privacy

Yale guidance says confidential, legally restricted, moderate-risk, and high-risk Yale data should not be entered into AI tools.

Review: Agent reviewed · Confidence 96% · Evidence 1 · Languages: en

ai_tool_treatment

Yale lists Clarity Platform as a Yale-provided AI chatbot platform housed within Yale secure infrastructure and available to staff, faculty, and students.

Review: Agent reviewed · Confidence 96% · Evidence 1 · Languages: en

academic_integrity

Yale expects faculty to give clear instructions on permitted AI use and attribution, and expects students to follow instructor guidelines for coursework.

Review: Agent reviewed · Confidence 95% · Evidence 1 · Languages: en

teaching

Yale states that instructors have authority within each course to determine whether and how students may use AI on assignments.

Review: Agent reviewed · Confidence 95% · Evidence 1 · Languages: en

privacy

Yale Poorvu Center guidance says classroom AI use must comply with FERPA and instructors cannot require students to create external accounts for tools Yale does not directly license.

Review: Agent reviewed · Confidence 95% · Evidence 1 · Languages: en

privacy

Yale describes Copilot Chat as not using conversations to train AI models or sharing data with OpenAI, while limiting high-risk data to Work search.

Review: Agent reviewed · Confidence 94% · Evidence 1 · Languages: en

academic_integrity

The Yale Poorvu Center says it does not endorse AI detection software or enable such features in Canvas.

Review: Agent reviewed · Confidence 93% · Evidence 1 · Languages: en

ai_tool_treatment

Yale labels listed no-cost popular AI tools as informational only, not endorsed or provided by Yale, and for low-risk unsecured data experimentation and collaboration.

Review: Agent reviewed · Confidence 92% · Evidence 1 · Languages: en

other

Yale guidance tells users to review and verify AI-generated outputs, especially before publication.

Review: Agent reviewed · Confidence 92% · Evidence 1 · Languages: en

procurement

Yale guidance directs people considering an AI product to conduct an initial review for institutional security requirements.

Review: Agent reviewed · Confidence 91% · Evidence 1 · Languages: en

teaching

Yale Poorvu Center guidance says generative AI use is subject to individual course policies and encourages instructors to adapt model policies to their course goals.

Review: Agent reviewed · Confidence 90% · Evidence 1 · Languages: en

Source snapshots

8 source attributions