academic_integrity
Yale academic integrity guidance treats inserting AI-generated text into an assignment without proper attribution as an academic integrity violation.
Open, evidence-backed AI policy records for public reuse.
Change log
Source-check timeline, source snapshot hashes, claim review state, and a diff-style preview of current source-backed claim evidence.
Current public record freshness and review state.
Yale University currently has 12 source-backed claim records and 8 official source attributions. Latest tracked change date: May 10, 2026.
This tracker is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.
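The snapshot hashes and checked dates mentioned above can be kept in a very small record structure. A minimal sketch, assuming a hypothetical record schema (the `snapshot_record` function and its fields are illustrative, not the tracker's actual format):

```python
import hashlib
from datetime import date

def snapshot_record(source_text: str, checked: date) -> dict:
    """Build a minimal snapshot record: a SHA-256 hash of the fetched
    source page text plus the date it was checked.
    (Hypothetical schema, not the tracker's real storage format.)"""
    digest = hashlib.sha256(source_text.encode("utf-8")).hexdigest()
    return {"sha256": digest, "checked": checked.isoformat()}

record = snapshot_record("example policy page text", date(2026, 5, 10))
```

Re-hashing a freshly fetched page and comparing against the stored digest is enough to detect that a source changed, even before a full diff is computed.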
Diff-style preview built from current public claim/evidence records. Full old/new source diffs require paired historical snapshots.
Inserted lines represent current public claim and evidence records in the source-backed dataset.
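When paired historical snapshots are available, a diff-style preview like the one described above could be built with Python's standard difflib; the names here are illustrative, not the tracker's code:

```python
import difflib

def preview(old_lines, new_lines):
    """Return unified-diff lines comparing two snapshots.
    Lines starting with '+' correspond to inserted claim/evidence
    records; lines starting with '-' to removed ones."""
    return list(difflib.unified_diff(
        old_lines, new_lines,
        fromfile="old_snapshot", tofile="new_snapshot",
        lineterm=""))

old = ["claim: instructors set course AI policy"]
new = ["claim: instructors set course AI policy",
       "claim: Copilot Chat does not train on conversations"]
diff = preview(old, new)
```

Without a historical snapshot there is nothing to diff against, which is why the current preview can only mark every public record as an inserted line.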
12 claim records
Yale academic integrity guidance treats inserting AI-generated text into an assignment without proper attribution as an academic integrity violation.
Yale guidance says confidential, legally restricted, moderate-risk, and high-risk Yale data should not be entered into AI tools.
Yale lists Clarity Platform as a Yale-provided AI chatbot platform housed within Yale's secure infrastructure and available to staff, faculty, and students.
Yale expects faculty to give clear instructions on permitted AI use and attribution, and expects students to follow instructor guidelines for coursework.
Yale states that instructors have authority within each course to determine whether and how students may use AI on assignments.
Yale Poorvu Center guidance says classroom AI use must comply with FERPA and instructors cannot require students to create external accounts for tools Yale does not directly license.
Yale describes Copilot Chat as neither using conversations to train AI models nor sharing data with OpenAI, while limiting high-risk data to Work search.
The Yale Poorvu Center says it does not endorse AI detection software or enable such features in Canvas.
Yale labels the listed no-cost popular AI tools as informational only, not endorsed or provided by Yale, and intended for experimentation and collaboration with low-risk, unsecured data.
Yale guidance tells users to review and verify AI-generated outputs, especially before publication.
Yale guidance directs people considering an AI product to conduct an initial review for institutional security requirements.
Yale Poorvu Center guidance says generative AI use is subject to individual course policies and encourages instructors to adapt model policies to their course goals.
8 source attributions
official_guidance checked May 10, 2026
official_guidance checked May 10, 2026
official_guidance checked May 10, 2026
official_guidance checked May 10, 2026
official_guidance checked May 10, 2026
official_guidance checked May 10, 2026
official_guidance checked May 10, 2026
official_guidance checked May 10, 2026