New York City, United States

Columbia University

Columbia University is ranked 38 in the QS 2026 rankings. It has 16 source-backed AI policy claim records drawn from 8 official sources. The public record preserves original-language evidence snippets, source URLs, snapshot hashes, confidence scores, and review state.

Citation-ready overview


Reviewed claims: 11 · Candidate claims: 5 · Official sources: 8

Candidate claims are source-backed records pending review. They are not final policy conclusions and are not legal or academic integrity advice.

Reviewed claims

11 reviewed public claims

Academic Integrity

Columbia Law School prohibits generative AI use in exams, final papers, and for drafting any part of work submitted for credit, even if fully documented.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
Use of Generative AI is prohibited in (a) any exam or final paper or (b) for aid in drafting any part of work submitted for credit, even if the use is fully documented.

AI Tool Treatment

CUIMC provides HIPAA-compliant versions of ChatGPT Education and Microsoft Copilot as approved AI chatbot tools; workforce members must use CUIMC-issued accounts for compliance.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
Columbia University provides access to HIPAA-compliant versions of OpenAI's ChatGPT and Microsoft Copilot, enabling our workforce to leverage these AI tools responsibly and compliantly.

Privacy

CUIMC restricts sensitive data (PHI, RHI, PII) use with AI to HIPAA-compliant platforms only (ChatGPT Education, approved Microsoft Copilot, CHAT with compliant models); research use requires IRB approval.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
Sensitive Data: Permitted only on the ChatGPT Education, approved Microsoft CoPilot platforms, and CU CHAT when used with compliant models. Research protocol use requires IRB, and TRAC/ACORD approval.

Teaching

Columbia Law School's default AI prohibition can be overridden by individual instructors who set more permissive policies in writing in their syllabus.

Review: Agent reviewed · Confidence: 90%

Original evidence

Evidence 1
Individual instructors can, and indeed are encouraged to, tailor their own more permissive policies, so long as their policies are stated in writing in the syllabus.

AI Tool Treatment

Columbia Law School permits students to use generative AI for studying, brainstorming, and identifying typographical errors, but not for writing, editing, revising, or translating text.

Review: Agent reviewed · Confidence: 90%

Original evidence

Evidence 1
Students may use Generative AI to aid in studying, brainstorming, or to identify typographical errors.

Privacy

Columbia Law School requires all generative AI use to comply with university data protection policy; confidential or personal information must not be shared with AI tools unless retention and training use is disabled.

Review: Agent reviewed · Confidence: 90%

Original evidence

Evidence 1
All uses of Generative AI must comply with University policy protecting confidential and personal information. By default, all text you enter into Generative AI tools is retained, used for training, and potentially outputted to other users.

AI Tool Treatment

As of March 2026, Google Gemini, NotebookLM, and Anthropic Claude are not approved for use with sensitive data at CUIMC; they may only be used with non-sensitive, non-confidential data.

Review: Agent reviewed · Confidence: 90%

Original evidence

Evidence 1
At this time (March 2026), the following AI Chat services offered through CUIT are not approved for use with Sensitive data: Google Gemini, NotebookLM, Anthropic Claude.

Security Review

CUIMC requires a formal IT Risk Assessment review before deploying any locally installed AI models (LLM, NLP, ML) to evaluate security, privacy, and compliance risks.

Review: Agent reviewed · Confidence: 90%

Original evidence

Evidence 1
For locally installed AI models (i.e. LLM, NLP, ML), a formal IT Risk Assessment review is required before deployment to evaluate potential security, privacy, and compliance risks.

Teaching

Teachers College provides five example syllabus statements ranging from no AI use permitted to generally permitted with attribution, allowing instructors to choose their stance.

Review: Agent reviewed · Confidence: 90%

Original evidence

Evidence 1
Example 1: No Generative AI Use Permitted. Students are prohibited from using generative Artificial Intelligence (AI) tools to complete coursework or assignments for this class.

Teaching

Where AI use is permitted, Teachers College example syllabus statements require citation or disclosure detailing the specific AI tools and models used.

Review: Agent reviewed · Confidence: 90%

Original evidence

Evidence 1
Example 5: Generative AI tools are generally permitted in this course for research and completion of assignments, provided that all AI-generated content is clearly attributed as such in the student's work.

Teaching

All Teachers College example syllabus statements include provisions for students with disabilities who have AI-related accommodations through OASID.

Review: Agent reviewed · Confidence: 85%

Original evidence

Evidence 1
Students with disabilities are eligible for reasonable accommodations to permit them equal access to Teachers College programs and services (which include classes and coursework). If you are a student registered with OASID with a generative AI accommodation, please speak with me directly about your needs.

Candidate claims

5 claims in machine or needs-review state

Candidate claims are not final policy conclusions. They preserve source URL, source snapshot hash, evidence, confidence, and review state so the record can be audited before review.

Source Status

Columbia's Center for Teaching and Learning maintains a central AI Guidelines hub linking to the Provost's policy, academic integrity resources, CUIT resources, and best practices for responsible AI use.

Review: Needs review · Confidence: 75%

This claim is held for review because the evidence or classification needs another pass.

Original evidence

Evidence 1
Columbia University AI Guidelines hub from CTL. Links to Provost policy, academic integrity resources, CUIT resources, best practices for responsible AI use. Central hub for Columbia AI resources.

Source Status

Columbia University has a university-wide Generative AI Policy from the Office of the Provost governing use by staff, faculty, students, and researchers, covering information security, data privacy, copyright, academic integrity, and bias.

Review: Needs review · Confidence: 70%

This claim is held for review because the evidence or classification needs another pass.

Original evidence

Evidence 1
This Generative AI policy governs the use of Generative AI tools by staff, faculty, students, and researchers. Covers information security, data privacy, copyright, academic integrity, bias.

Research

Columbia University requires researchers to avoid uploading unpublished research data or confidential information into generative AI tools.

Review: Needs review · Confidence: 70%

This claim is held for review because the evidence or classification needs another pass.

Original evidence

Evidence 1
Researchers must avoid uploading, or using as input, any unpublished research data or other Confidential Information into a Generative AI tool.

Academic Integrity

Columbia Business School requires students to disclose to faculty their use of generative AI platforms and the manner of use in coursework.

Review: Needs review · Confidence: 70%

This claim is held for review because the evidence or classification needs another pass.

Original evidence

Evidence 1
As a general rule, students should disclose to faculty if they are using generative AI platforms and in what manner they are using them in coursework.

Source Status

Columbia University Information Technology (CUIT) publishes university-wide best practices for responsible AI use applicable to faculty, students, researchers, and staff.

Review: Needs review · Confidence: 60%

This claim is held for review because the evidence or classification needs another pass.

Original evidence

Evidence 1
CUIT university-wide best practices for responsible AI use. Referenced from ai.ctl.columbia.edu hub. Applies to faculty, students, researchers, and staff.

Official sources

8 source attributions
