Open, evidence-backed AI policy records for public reuse.
Change log
Source-check timeline, source snapshot hashes, claim review state, and a diff-style preview of current source-backed claim evidence.
Current public record freshness and review state.
Columbia University currently has 16 source-backed claim records and 8 official source attributions. Latest tracked change date: May 10, 2026.
This tracker is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.
Diff-style preview built from current public claim/evidence records. Full old/new source diffs require paired historical snapshots.
Inserted lines represent current public claim and evidence records in the source-backed dataset.
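As a rough illustration of how the records behind this preview might be organized, the sketch below shows one possible Python representation of a source-backed claim record, its source snapshot hash, and the "inserted line" preview described above. The field names, review states, and choice of SHA-256 are assumptions for illustration only, not the tracker's actual schema.

import hashlib
from dataclasses import dataclass
from datetime import date

# Hypothetical claim record; fields are illustrative, not the tracker's real schema.
@dataclass
class ClaimRecord:
    claim_text: str        # the public, source-backed claim
    source_url: str        # official university page backing the claim
    source_type: str       # e.g. "official_guidance" or "official_policy_page"
    checked: date          # date the source was last checked
    review_state: str      # e.g. "reviewed" or "pending"
    snapshot_sha256: str   # hash of the saved source snapshot

def snapshot_hash(snapshot_bytes: bytes) -> str:
    """Return a SHA-256 digest of a saved source snapshot, so a later
    snapshot of the same URL can be compared for changes."""
    return hashlib.sha256(snapshot_bytes).hexdigest()

def diff_preview(records: list[ClaimRecord]) -> list[str]:
    """Render current claim records as '+' (inserted) lines, mirroring a
    unified diff; a full old/new diff would need a paired earlier snapshot."""
    return [f"+ {r.claim_text}" for r in records]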
16 claim records
Columbia Law School prohibits generative AI use in exams, final papers, and for drafting any part of work submitted for credit, even if fully documented.
CUIMC provides HIPAA-compliant versions of ChatGPT Education and Microsoft Copilot as approved AI chatbot tools; workforce members must use CUIMC-issued accounts for compliance.
CUIMC restricts the use of sensitive data (PHI, RHI, PII) with AI to HIPAA-compliant platforms only (ChatGPT Education, approved Microsoft Copilot, CHAT with compliant models); research use requires IRB approval.
Columbia Law School's default AI prohibition can be overridden by individual instructors who set more permissive policies in writing in their syllabus.
Columbia Law School permits students to use generative AI for studying, brainstorming, and identifying typographical errors, but not for writing, editing, revising, or translating text.
Columbia Law School requires all generative AI use to comply with university data protection policy; confidential or personal information must not be shared with AI tools unless retention and training use is disabled.
As of March 2026, Google Gemini, NotebookLM, and Anthropic Claude are not approved for use with sensitive data at CUIMC; they may only be used with non-sensitive, non-confidential data.
CUIMC requires a formal IT Risk Assessment review before deploying any locally installed AI models (LLM, NLP, ML) to evaluate security, privacy, and compliance risks.
Teachers College provides five example syllabus statements, ranging from "no AI use permitted" to "AI use generally permitted with attribution," allowing instructors to choose their stance.
Teachers College example syllabus statements require citations or disclosures detailing the specific AI tools and models used when AI use is permitted.
All Teachers College example syllabus statements include provisions for students with disabilities who have AI-related accommodations through OASID.
Columbia's Center for Teaching and Learning maintains a central AI Guidelines hub linking to the Provost's policy, academic integrity resources, CUIT resources, and best practices for responsible AI use.
Columbia University has a university-wide Generative AI Policy from the Office of the Provost governing use by staff, faculty, students, and researchers, covering information security, data privacy, copyright, academic integrity, and bias.
Columbia University requires researchers to avoid uploading unpublished research data or confidential information into generative AI tools.
Columbia Business School requires students to disclose to faculty their use of generative AI platforms and the manner of use in coursework.
Columbia University Information Technology (CUIT) publishes university-wide best practices for responsible AI use applicable to faculty, students, researchers, and staff.
8 source attributions
official_guidance checked May 9, 2026
official_guidance checked May 9, 2026
official_policy_page checked May 9, 2026
official_guidance checked May 9, 2026
official_guidance checked May 9, 2026
official_policy_page checked May 9, 2026
official_guidance checked May 9, 2026
official_guidance checked May 9, 2026