Ithaca, United States

Cornell University

Cornell University is ranked 16 in QS 2026. It has 26 source-backed AI policy claim records drawn from 6 official source attributions. The public record preserves original-language evidence snippets, source URLs, snapshot hashes, confidence, and review state.

Citation-ready overview

v1 public contract

Reviewed claims: 26 · Candidate claims: 0 · Official sources: 6

Candidate claims are source-backed records pending review. They are not final policy conclusions and are not legal or academic integrity advice.
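The v1 public contract described above can be pictured as a simple record type. The sketch below is illustrative only, assuming field names that mirror the fields the page says it preserves (evidence snippets, source URL, snapshot hash, confidence, review state); it is not the site's actual schema, and the URL and hash shown are placeholders.

```python
from dataclasses import dataclass

# Hypothetical sketch of a v1 public claim record.
# Field names are assumptions based on the fields the page lists,
# not the published schema.
@dataclass
class ClaimRecord:
    claim: str              # the policy claim summary sentence
    category: str           # e.g. "Other"
    source_url: str         # official source attribution
    snapshot_hash: str      # hash of the source snapshot, for auditing
    evidence: list[str]     # original-language evidence snippets
    confidence: float       # e.g. 0.95
    review_state: str       # e.g. "agent_reviewed" or "needs_review"

record = ClaimRecord(
    claim="FERPA/HIPAA-protected information should not be shared with GAI tools.",
    category="Other",
    source_url="https://example.edu/committee-report",  # placeholder URL
    snapshot_hash="sha256:...",                          # placeholder hash
    evidence=["GAI tools pose potential privacy risks..."],
    confidence=0.95,
    review_state="agent_reviewed",
)
print(record.review_state)
```

A record in this shape carries enough context (source URL plus snapshot hash) that a claim can be audited against the original page even if the source later changes.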

Reviewed claims

26 reviewed public claims

Other

Cornell's committee report states that any information educators are obligated to keep private under FERPA or HIPAA should not be shared with generative AI tools or uploaded to third-party AI vendors.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
GAI tools pose potential privacy risks because data that is shared may be used as training data by the third-party vendor providing the service. Therefore, any information that educators are obligated to keep private, for example, under the Family Educational Rights and Privacy Act (FERPA) or the Health Insurance Portability and Accountability Act (HIPAA), should not be shared with such tools or uploaded to these third party vendors of GAI.

Other

Cornell's committee report states that original research or content owned by Cornell University, students, or employees should not be uploaded to AI tools, as it can become part of the AI tool's training data.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
GAI tools also have implications for intellectual property rights. Original research or content that is owned by Cornell University, our students, or employees should not be uploaded to these tools, since they can become part of the training data used by the GAI tools.

Other

Cornell's committee report does not recommend the use of generative AI for summative evaluation or grading of student work, stating that evaluation and grading is among the most important tasks entrusted to faculty.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
While GAI may have selective utility in assisting in providing feedback for low-stakes formative assessment (for example in practice problems), we currently do NOT recommend it be used in summative evaluation of student work. Evaluation and grading of students is among the most important tasks entrusted to faculty, and the integrity of the grading process is reliant on the primary role of the faculty member.

Other

Cornell's committee report recommends three policy approaches for generative AI use: prohibit GAI where it interferes with foundational learning, allow with attribution where it supports higher-level thinking, and encourage use where it enables exploration and creative thinking.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
We recommend instructors consider three kinds of policies either for individual assignments or generally in their courses. To prohibit the use of GAI where it interferes with the student developing foundational understanding, skills, and knowledge needed for future courses and careers. To allow with attribution where GAI could be a useful resource, but the instructor needs to be aware of its use by the student and the student must learn to take responsibility for accuracy and correct attribution of GAI-generated content. To encourage and actively integrate GAI into the learning process where students can leverage GAI to focus on higher-level learning objectives, explore creative ideas, or...

Other

Cornell's committee report discourages the use of automatic detection algorithms for academic integrity violations using generative AI, stating they cannot decisively provide evidence and could lead to unfairly identifying violations, including bias against non-native speakers.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
We currently discourage the use of automatic detection algorithms for academic integrity violations using GAI, given their unreliability and current inability to provide definitive evidence of violations.

Other

Cornell's IT guidelines state that users are accountable for their work regardless of the tools used to produce it, and when using generative AI tools must always verify information for errors and biases and exercise caution to avoid copyright infringement.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
You are accountable for your work, regardless of the tools you use to produce it. When using generative AI tools, always verify the information for errors and biases and exercise caution to avoid copyright infringement.

Other

Cornell's IT guidelines prohibit entering any confidential, proprietary, federally or state regulated, or otherwise sensitive or restricted Cornell information into public generative AI tools, as such information becomes public and may be stored and used by anyone.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
If you are using public generative AI tools, you cannot enter any Cornell information, or another person's information, that is confidential, proprietary, subject to federal or state regulations, or otherwise considered sensitive or restricted. Any information you provide to public generative AI tools is considered public and may be stored and used by anyone else.

Other

Cornell has established seven core principles for generative AI in education: integrity of the faculty-student relation, commitment to experimentation and evidence, centrality of faculty judgment, responsiveness to student needs, recognition of both AI goods and harms, respect for institutional and disciplinary heterogeneity, and renewal of Cornell's core mission and values.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
Cornell's response to generative AI in teaching and learning is built around seven core principles. We invite instructors to consider these principles as they make decisions and talk with their students and colleagues about generative AI and learning: The integrity of the faculty-student relation. A commitment to experimentation, evidence and learning from experience. The centrality of faculty judgment and expertise in the classroom. Responsiveness to real student needs and uses. Recognition of both AI 'goods' and 'harms'. Respect for institutional and disciplinary heterogeneity. The extension and renewal of Cornell's core mission and values.

Other

Cornell recommends that faculty clearly communicate their generative AI policies in their syllabus, in assignment instructions, and verbally in class to support student learning and reduce academic integrity violations.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
To best support student learning and reduce violations of academic integrity, be sure to clearly communicate your policies regarding the use of generative AI in your syllabus, in assignment instructions, and verbally in class.

Other

Cornell does not recommend using automatic AI detection algorithms for academic integrity violations, citing their unreliability and inability to provide definitive evidence, and the risk of wrongly accusing students.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
We currently do not recommend using current automatic detection algorithms for academic integrity violations using generative AI, given their unreliability and current inability to provide definitive evidence of violations.

Other

Cornell's committee report recommends that the Code of Academic Integrity be updated with clear and explicit language on the use of generative AI, indicating that individual faculty have authority to determine when AI use is prohibited, attributed, or encouraged.

Review: Agent reviewed · Confidence: 93%

Original evidence

Evidence 1
The Code of Academic Integrity should be updated with clear and explicit language on the use of GAI, specifically indicating that individual faculty have authority to determine when its use is prohibited, attributed, or encouraged, and that use of GAI on assignments by students is only allowed when expressly permitted by the faculty member.

Other

Cornell's IT guidelines endorse a flexible framework in which faculty and instructors can choose to prohibit, allow with attribution, or encourage generative AI use in education.

Review: Agent reviewed · Confidence: 93%

Original evidence

Evidence 1
Cornell encourages a flexible framework in which faculty and instructors can choose to prohibit, to allow with attribution, or to encourage generative AI use.

Other

Cornell established the GenAI Education Working Group in spring 2024 with all-college representation including faculty, staff, and students, as the central body for developing new ideas, policies, and practices around generative AI in the classroom.

Review: Agent reviewed · Confidence: 93%

Original evidence

Evidence 1
Established in spring 2024 with all-college representation including faculty, staff and students, the Cornell GenAI Education Working Group, a part of the university-wide AI Advisory Council, is the central place in which new ideas, policies and practices around GenAI in the classroom are being worked out at Cornell.

Other

Cornell developed seven standardized AI course policy icons (ANY-AI, AT, UA, PP, AS, ER, AI-FREE) to help instructors clearly communicate AI use expectations in syllabi and assignments, which can be combined for nuanced policies.

Review: Agent reviewed · Confidence: 93%

Original evidence

Evidence 1
These icons have been developed to help you clearly and consistently communicate your expectations. They can be used on the syllabus to convey an overall course approach. They can also be used for individual assignments, allowing you to distinguish different policies for different assignments with different learning goals. Icons can be combined to more fully reflect your course policy.

Other

When generative AI use is permitted in a course, Cornell advises instructors to clarify expectations for documentation and attribution, including citing the AI tool creator (e.g., OpenAI for ChatGPT) when directly quoting AI-generated text in both in-text citations and reference lists.

Review: Agent reviewed · Confidence: 93%

Original evidence

Evidence 1
When generative AI is permitted, clarify expectations for documentation and attribution, as well as what aspects of the work should be produced by the students themselves. ... students should attribute directly quoted text to the creator of the generative AI tool used (e.g., cite OpenAI when directly quoting ChatGPT). This attribution should be used for both in-text citations and your reference list.

Other

Cornell's Center for Teaching Innovation recommends that faculty discuss course policies and expectations around the use of generative AI tools with their students and clearly communicate when and in what ways use of such tools is permitted or not.

Review: Agent reviewed · Confidence: 92%

Original evidence

Evidence 1
We recommend discussing course policies and expectations around their use, and clearly communicating with your students when and in what ways use of generative AI tools are permitted – or not.

Other

Cornell developed a set of course policy icons through the GenAI Advisory Council to help instructors clearly and consistently communicate AI use expectations to students in syllabi and assignment instructions.

Review: Agent reviewed · Confidence: 92%

Original evidence

Evidence 1
A set of course policy icons has been developed by the GenAI Advisory Council to help Cornell instructors communicate with students about appropriate AI use for different courses and assignments. They are downloadable and can be incorporated into your syllabus or assignment instructions, to help provide clear guidance to students about course expectations.

Other

Cornell holds students responsible for verifying the accuracy of AI-generated output and references when AI use is allowed for an assignment.

Review: Agent reviewed · Confidence: 92%

Original evidence

Evidence 1
Discuss with students how generative AI output can be incorrect or problematic and that they are responsible for verifying the output and references if AI use is allowed for an assignment.

Other

Cornell describes measures faculty may use to evaluate potential AI-related academic integrity concerns, including requiring students to verify citations and references, requesting verification of references or methods, and informing students that they should expect to verbally explain submitted work.

Review: Agent reviewed · Confidence: 90%

Original evidence

Evidence 1
Require that students verify the accuracy of all citations and references they include in their work. Request that students provide verification of references or methods, with a student's response determining whether a formal academic integrity notification is warranted. ... Inform and remind students that they should expect to verbally explain the work they submitted.

Other

Cornell's IT guidelines state that use of generative AI in academic research is governed by the Cornell University Task Force Report 'Generative AI in Academic Research: Perspectives and Cultural Norms' (December 2023).

Review: Agent reviewed · Confidence: 90%

Original evidence

Evidence 1
The Cornell University Task Force Report, Generative AI in Academic Research: Perspectives and Cultural Norms (December 2023), offers perspectives and practical guidelines to the Cornell community on the use of generative AI in the practice and dissemination of academic research.

Other

Cornell's IT guidelines state that use of generative AI for administrative purposes must comply with the Cornell Generative AI in Administration Task Force Report (January 2024).

Review: Agent reviewed · Confidence: 90%

Original evidence

Evidence 1
The use of generative AI for administration purposes must comply with the guidelines of the Cornell Generative AI in Administration Task Force Report (January 2024).

Other

Cornell's 'AI+AI' initiative aims to strengthen and update academic integrity procedures to better reflect the presence of generative AI, including better models for attributing student use of GenAI and development of evidentiary standards for adjudicating GenAI-related violations.

Review: Agent reviewed · Confidence: 90%

Original evidence

Evidence 1
'AI+AI': efforts to strengthen and update Cornell's academic integrity procedures to better reflect the presence of GenAI, including better models for attributing student use of GenAI in assignments or course settings; efforts to streamline, strengthen, and reform the university's academic integrity system; and the development of evidentiary standards and processes appropriate to the adjudication of GenAI-related violations of academic integrity.

Other

Cornell considers generative AI literacy essential for both students and faculty, defining it as the ability to understand, evaluate, and critically engage with generative AI technologies.

Review: Agent reviewed · Confidence: 90%

Original evidence

Evidence 1
The need for AI literacy is essential for students and faculty alike. We can think of ethical generative AI literacies as the ability to understand, evaluate, and critically engage with generative AI technologies.

Other

Cornell provides sample syllabus language for an AI-FREE policy that prohibits all generative AI tools, including tools that help reorganize and edit written work, to ensure development of foundational concepts and skills.

Review: Agent reviewed · Confidence: 90%

Original evidence

Evidence 1
To ensure development and mastery of the foundational concepts and skills in this course, the use of generative artificial intelligence (AI) tools is prohibited. This includes tools that help reorganize and edit your written work because the ability to self-assess, reflect on your writing process, and develop your own voice are essential in your growth as a writer.

Other

Cornell provides sample syllabus language for an AS-UA policy where AI use is generally discouraged but permitted for select assignments with proper attribution, requiring students to cite the AI tool creator.

Review: Agent reviewed · Confidence: 90%

Original evidence

Evidence 1
Mastering the essential, foundational concepts of this course takes effort and practice. Accordingly, the use of generative artificial intelligence (AI) tools is generally discouraged in this course, but will be permitted for select assignments. ... If used in any capacity for an assignment, generative AI requires proper attribution for any and all generated work.

Other

Cornell's AI course policy icons include a 'PP' (Privacy Protecting) icon indicating that generative AI use is permitted but no copyrighted or proprietary class materials should be uploaded unless otherwise specified.

Review: Agent reviewed · Confidence: 88%

Original evidence

Evidence 1
PP (Privacy Protecting—No Proprietary Materials): GenAI use permitted, but no uploading of copyrighted or proprietary class materials unless otherwise specified.

Candidate claims

0 machine or needs-review claims

Candidate claims are not final policy conclusions. They preserve source URL, source snapshot hash, evidence, confidence, and review state so the record can be audited before review.

Official sources

6 source attributions
