Open, evidence-backed AI policy records for public reuse.
Change log
Source-check timeline, source snapshot hashes, claim review state, and a diff-style preview of current source-backed claim evidence.
Current public record freshness and review state.
Cornell University currently has 26 source-backed claim records and 6 official source attributions. Latest tracked change date: May 6, 2026.
This tracker is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.
Diff-style preview built from current public claim/evidence records. Full old/new source diffs require paired historical snapshots.
Inserted lines represent current public claim and evidence records in the source-backed dataset.
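The snapshot-hash and diff-preview mechanism described above could be produced along these lines. This is a minimal sketch, not the tracker's actual implementation: the SHA-256 hashing scheme, the function names, and the "+ " insertion prefix are all assumptions.

```python
import hashlib


def snapshot_hash(snapshot_text: str) -> str:
    """Content hash for a stored source snapshot.

    Assumed scheme: SHA-256 over the UTF-8 bytes of the snapshot,
    so any change to the source page changes the hash.
    """
    return hashlib.sha256(snapshot_text.encode("utf-8")).hexdigest()


def diff_preview(claims: list[str]) -> list[str]:
    """Render current claim records as inserted ('+') diff lines.

    With no paired historical snapshot to diff against, every
    current record is shown as an insertion.
    """
    return ["+ " + claim for claim in claims]


# Hypothetical claim records standing in for the dataset entries below.
claims = [
    "FERPA/HIPAA-protected information should not be shared with GAI tools.",
    "Automatic AI-detection algorithms are discouraged for integrity cases.",
]

digest = snapshot_hash("\n".join(claims))
preview = diff_preview(claims)
```

A full old/new diff would instead feed two stored snapshots to something like `difflib.unified_diff`, which is why paired historical snapshots are required for that view.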
26 claim records
Cornell's committee report states that any information educators are obligated to keep private under FERPA or HIPAA should not be shared with generative AI tools or uploaded to third-party AI vendors.
Cornell's committee report states that original research or content owned by Cornell University, students, or employees should not be uploaded to AI tools, as it can become part of the AI tool's training data.
Cornell's committee report does not recommend the use of generative AI for summative evaluation or grading of student work, stating that evaluation and grading is among the most important tasks entrusted to faculty.
Cornell's committee report recommends three policy approaches for generative AI use: prohibit GAI where it interferes with foundational learning, allow with attribution where it supports higher-level thinking, and encourage use where it enables exploration and creative thinking.
Cornell's committee report discourages the use of automatic detection algorithms for academic integrity violations involving generative AI, stating that they cannot provide decisive evidence and could unfairly flag violations, including through bias against non-native speakers.
Cornell's IT guidelines state that users are accountable for their work regardless of the tools used to produce it, and when using generative AI tools must always verify information for errors and biases and exercise caution to avoid copyright infringement.
Cornell's IT guidelines prohibit entering any confidential, proprietary, federally or state regulated, or otherwise sensitive or restricted Cornell information into public generative AI tools, as such information becomes public and may be stored and used by anyone.
Cornell has established seven core principles for generative AI in education: integrity of the faculty-student relationship, commitment to experimentation and evidence, centrality of faculty judgment, responsiveness to student needs, recognition of both AI goods and harms, respect for institutional and disciplinary heterogeneity, and renewal of Cornell's core mission and values.
Cornell recommends that faculty clearly communicate their generative AI policies in their syllabus, in assignment instructions, and verbally in class to support student learning and reduce academic integrity violations.
Cornell does not recommend using automatic AI detection algorithms for academic integrity violations, citing their unreliability and inability to provide definitive evidence, and the risk of wrongly accusing students.
Cornell's committee report recommends that the Code of Academic Integrity be updated with clear and explicit language on the use of generative AI, indicating that individual faculty have authority to determine when AI use is prohibited, attributed, or encouraged.
Cornell's IT guidelines endorse a flexible framework in which faculty and instructors can choose to prohibit, allow with attribution, or encourage generative AI use in education.
Cornell established the GenAI Education Working Group in spring 2024 with all-college representation including faculty, staff, and students, as the central body for developing new ideas, policies, and practices around generative AI in the classroom.
Cornell developed seven standardized AI course policy icons (ANY-AI, AT, UA, PP, AS, ER, AI-FREE) to help instructors clearly communicate AI use expectations in syllabi and assignments, which can be combined for nuanced policies.
When generative AI use is permitted in a course, Cornell advises instructors to clarify expectations for documentation and attribution, including citing the AI tool creator (e.g., OpenAI for ChatGPT) when directly quoting AI-generated text in both in-text citations and reference lists.
Cornell's Center for Teaching Innovation recommends that faculty discuss course policies and expectations around the use of generative AI tools with their students and clearly communicate when and in what ways use of such tools is permitted or not.
Cornell developed a set of course policy icons through the GenAI Advisory Council to help instructors clearly and consistently communicate AI use expectations to students in syllabi and assignment instructions.
Cornell holds students responsible for verifying the accuracy of AI-generated output and references when AI use is allowed for an assignment.
Cornell describes measures faculty may use to evaluate potential AI-related academic integrity concerns, including requiring students to verify citations and references, requesting verification of references or methods, and informing students that they should expect to verbally explain submitted work.
Cornell's IT guidelines state that use of generative AI in academic research is governed by the Cornell University Task Force Report 'Generative AI in Academic Research: Perspectives and Cultural Norms' (December 2023).
Cornell's IT guidelines state that use of generative AI for administrative purposes must comply with the Cornell Generative AI in Administration Task Force Report (January 2024).
Cornell's 'AI+AI' initiative aims to strengthen and update academic integrity procedures to better reflect the presence of generative AI, including better models for attributing student use of GenAI and development of evidentiary standards for adjudicating GenAI-related violations.
Cornell considers generative AI literacy essential for both students and faculty, defining it as the ability to understand, evaluate, and critically engage with generative AI technologies.
Cornell provides sample syllabus language for an AI-FREE policy that prohibits all generative AI tools, including tools that help reorganize and edit written work, to ensure development of foundational concepts and skills.
Cornell provides sample syllabus language for an AS-UA policy where AI use is generally discouraged but permitted for select assignments with proper attribution, requiring students to cite the AI tool creator.
Cornell's AI course policy icons include a 'PP' (Privacy Protecting) icon indicating that generative AI use is permitted but no copyrighted or proprietary class materials should be uploaded unless otherwise specified.
6 source attributions
official_guidance checked May 6, 2026
official_guidance checked May 6, 2026
official_guidance checked May 6, 2026
official_guidance checked May 6, 2026
official_guidance checked May 6, 2026
official_guidance checked May 6, 2026