Change log

Columbia University

Source-check timeline, source snapshot hashes, claim review state, and a diff-style preview of current source-backed claim evidence.

Change summary

Current public record freshness and review state.

Columbia University currently has 16 source-backed claim records and 8 official source attributions. Latest tracked change: May 10, 2026.

This tracker is not legal advice, academic-integrity advice, or an official university statement unless a linked source is the university's own official page.

Claim/evidence diff preview

Diff-style preview built from current public claim/evidence records. Full old/new source diffs require paired historical snapshots.
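Once paired snapshots exist, rendering such a diff is straightforward. The following is a minimal sketch using Python's difflib; the function name, labels, and snapshot texts are illustrative placeholders, not the tracker's actual implementation.

import difflib

def source_diff(old_text: str, new_text: str, label: str) -> str:
    # Render a unified diff between two historical snapshots of one source.
    lines = difflib.unified_diff(
        old_text.splitlines(keepends=True),
        new_text.splitlines(keepends=True),
        fromfile=f"{label} (old snapshot)",
        tofile=f"{label} (new snapshot)",
    )
    return "".join(lines)

# Placeholder snapshot texts; real inputs would be paired archived pages.
old = "Policy text as captured in the earlier snapshot.\n"
new = "Policy text as captured in the later snapshot.\n"
print(source_diff(old, new, "example-source"))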

Columbia University current policy evidence

Inserted lines represent current public claim and evidence records in the source-backed dataset.

+20 −0
1 # Columbia University AI policy record
2+academic_integrity: Columbia Law School prohibits generative AI use in exams, final papers, and for drafting any part of work submitted for credit, even if fully documented.
3+Evidence (en, 7306eb848579): Use of Generative AI is prohibited in (a) any exam or final paper or (b) for aid in drafting any part of work submitted for credit, even if the use is fully documented.
4+ai_tool_treatment: CUIMC provides HIPAA-compliant versions of ChatGPT Education and Microsoft Copilot as approved AI chatbot tools; workforce members must use CUIMC-issued accounts for compliance.
5+Evidence (en, 5370b6d21d35): Columbia University provides access to HIPAA-compliant versions of OpenAI's ChatGPT and Microsoft Copilot, enabling our workforce to leverage these AI tools responsibly and compliantly.
6+privacy: CUIMC restricts sensitive data (PHI, RHI, PII) use with AI to HIPAA-compliant platforms only (ChatGPT Education, approved Microsoft Copilot, CHAT with compliant models); research use requires IRB approval.
7+Evidence (en, 5370b6d21d35): Sensitive Data: Permitted only on the ChatGPT Education, approved Microsoft CoPilot platforms, and CU CHAT when used with compliant models. Research protocol use requires IRB, and TRAC/ACORD approval.
8+teaching: Columbia Law School's default AI prohibition can be overridden by individual instructors who set more permissive policies in writing in their syllabus.
9+Evidence (en, 7306eb848579): Individual instructors can, and indeed are encouraged to, tailor their own more permissive policies, so long as their policies are stated in writing in the syllabus.
10+ai_tool_treatment: Columbia Law School permits students to use generative AI for studying, brainstorming, and identifying typographical errors, but not for writing, editing, revising, or translating text.
11+Evidence (en, 7306eb848579): Students may use Generative AI to aid in studying, brainstorming, or to identify typographical errors.
12+privacy: Columbia Law School requires all generative AI use to comply with university data protection policy; confidential or personal information must not be shared with AI tools unless retention and training use is disabled.
13+Evidence (en, 7306eb848579): All uses of Generative AI must comply with University policy protecting confidential and personal information. By default, all text you enter into Generative AI tools is retained, used for training, and potentially outputted to other users.
14+ai_tool_treatment: As of March 2026, Google Gemini, NotebookLM, and Anthropic Claude are not approved for use with sensitive data at CUIMC; they may only be used with non-sensitive, non-confidential data.
15+Evidence (en, 5370b6d21d35): At this time (March 2026), the following AI Chat services offered through CUIT are not approved for use with Sensitive data: Google Gemini, NotebookLM, Anthropic Claude.
16+security_review: CUIMC requires a formal IT Risk Assessment review before deploying any locally installed AI models (LLM, NLP, ML) to evaluate security, privacy, and compliance risks.
17+Evidence (en, 5370b6d21d35): For locally installed AI models (i.e. LLM, NLP, ML), a formal IT Risk Assessment review is required before deployment to evaluate potential security, privacy, and compliance risks.
18+teaching: Teachers College provides five example syllabus statements ranging from no AI use permitted to generally permitted with attribution, allowing instructors to choose their stance.
19+Evidence (en, 14733654c797): Example 1: No Generative AI Use Permitted. Students are prohibited from using generative Artificial Intelligence (AI) tools to complete coursework or assignments for this class.
20+teaching: Teachers College example syllabus statements require citations or disclosure detailing specific AI tools and models used when AI use is permitted.
21+Evidence (en, 14733654c797): Example 5: Generative AI tools are generally permitted in this course for research and completion of assignments, provided that all AI-generated content is clearly attributed as such in the student's work.
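The 12-hex-character IDs on the evidence lines above (7306eb848579, 5370b6d21d35, 14733654c797) read like truncated content hashes of the cited snapshots. A minimal sketch, assuming a truncated SHA-256 of the raw snapshot body; the actual hashing scheme is not documented here.

import hashlib

def snapshot_hash(page_bytes: bytes, length: int = 12) -> str:
    # Truncated SHA-256 digest of the raw snapshot body (assumed scheme).
    return hashlib.sha256(page_bytes).hexdigest()[:length]

# Stand-in for a fetched source page; a real run hashes the archived bytes.
page = b"<html>example archived policy page</html>"
print(snapshot_hash(page))  # 12 hex characters, stable across re-fetches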

Claim changes

16 claim records
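Each record below carries the same fields: category, claim text, review state, confidence, evidence count, and languages. A hypothetical Python schema inferred from those fields (not the tracker's actual data model):

from dataclasses import dataclass, field

@dataclass
class ClaimRecord:
    # Hypothetical schema inferred from the fields shown on each record.
    category: str                 # e.g. "academic_integrity", "privacy"
    claim: str                    # one-sentence source-backed claim
    review: str                   # "Agent reviewed" or "Needs review"
    confidence: int               # reviewer confidence, percent
    evidence_count: int           # number of attached evidence quotes
    languages: list[str] = field(default_factory=lambda: ["en"])

record = ClaimRecord(
    category="academic_integrity",
    claim="Columbia Law School prohibits generative AI use in exams.",
    review="Agent reviewed",
    confidence=95,
    evidence_count=1,
)
print(record.category, record.review, f"{record.confidence}%")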

academic_integrity

Columbia Law School prohibits generative AI use in exams, final papers, and for drafting any part of work submitted for credit, even if fully documented.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

ai_tool_treatment

CUIMC provides HIPAA-compliant versions of ChatGPT Education and Microsoft Copilot as approved AI chatbot tools; workforce members must use CUIMC-issued accounts for compliance.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

privacy

CUIMC restricts sensitive data (PHI, RHI, PII) use with AI to HIPAA-compliant platforms only (ChatGPT Education, approved Microsoft Copilot, CHAT with compliant models); research use requires IRB approval.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

teaching

Columbia Law School's default AI prohibition can be overridden by individual instructors who set more permissive policies in writing in their syllabus.

Review: Agent reviewed · Confidence: 90% · Evidence: 1 · Languages: en

ai_tool_treatment

Columbia Law School permits students to use generative AI for studying, brainstorming, and identifying typographical errors, but not for writing, editing, revising, or translating text.

Review: Agent reviewed · Confidence: 90% · Evidence: 1 · Languages: en

privacy

Columbia Law School requires all generative AI use to comply with university data protection policy; confidential or personal information must not be shared with AI tools unless retention and training use is disabled.

Review: Agent reviewed · Confidence: 90% · Evidence: 1 · Languages: en

ai_tool_treatment

As of March 2026, Google Gemini, NotebookLM, and Anthropic Claude are not approved for use with sensitive data at CUIMC; they may only be used with non-sensitive, non-confidential data.

Review: Agent reviewed · Confidence: 90% · Evidence: 1 · Languages: en

security_review

CUIMC requires a formal IT Risk Assessment review before deploying any locally installed AI models (LLM, NLP, ML) to evaluate security, privacy, and compliance risks.

Review: Agent reviewed · Confidence: 90% · Evidence: 1 · Languages: en

teaching

Teachers College provides five example syllabus statements ranging from no AI use permitted to generally permitted with attribution, allowing instructors to choose their stance.

Review: Agent reviewed · Confidence: 90% · Evidence: 1 · Languages: en

teaching

Teachers College example syllabus statements require citations or disclosure detailing specific AI tools and models used when AI use is permitted.

Review: Agent reviewed · Confidence: 90% · Evidence: 1 · Languages: en

teaching

All Teachers College example syllabus statements include provisions for students with disabilities who have AI-related accommodations through OASID.

Review: Agent reviewed · Confidence: 85% · Evidence: 1 · Languages: en

source_status

Columbia's Center for Teaching and Learning maintains a central AI Guidelines hub linking to the Provost's policy, academic integrity resources, CUIT resources, and best practices for responsible AI use.

Review: Needs review · Confidence: 75% · Evidence: 1 · Languages: en

source_status

Columbia University has a university-wide Generative AI Policy from the Office of the Provost governing use by staff, faculty, students, and researchers, covering information security, data privacy, copyright, academic integrity, and bias.

Review: Needs review · Confidence: 70% · Evidence: 1 · Languages: en

research

Columbia University requires researchers to avoid uploading unpublished research data or confidential information into generative AI tools.

Review: Needs review · Confidence: 70% · Evidence: 1 · Languages: en

academic_integrity

Columbia Business School requires students to disclose to faculty their use of generative AI platforms and the manner of use in coursework.

Review: Needs review · Confidence: 70% · Evidence: 1 · Languages: en

source_status

Columbia University Information Technology (CUIT) publishes university-wide best practices for responsible AI use applicable to faculty, students, researchers, and staff.

Review: Needs review · Confidence: 60% · Evidence: 1 · Languages: en

Source snapshots

8 source attributions
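For illustration, an attribution count can be derived by de-duplicating the snapshot IDs attached to evidence records. A minimal sketch using only the three IDs visible in the diff preview above; the full dataset reportedly spans 8 attributions.

# Hypothetical: snapshot IDs as they appear on the evidence lines above.
evidence_ids = [
    "7306eb848579", "5370b6d21d35", "5370b6d21d35",
    "7306eb848579", "14733654c797",
]
print(len(set(evidence_ids)), "distinct source snapshots cited")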