Stanford, United States

Stanford University

Stanford University is listed as QS 2026 rank 3. Stanford University has 9 source-backed AI policy claim records from 13 official source attributions. The public record preserves original-language evidence snippets, source URLs, snapshot hashes, confidence, and review state.

Citation-ready overview

v1 public contract


Reviewed claims: 9 · Candidate claims: 0 · Official sources: 13

Candidate claims are source-backed records pending review. They are not final policy conclusions and are not legal or academic integrity advice.
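The per-claim fields the v1 contract preserves (claim text, evidence snippet, source URL, snapshot hash, confidence, review state) can be sketched as a simple record type. This is a hypothetical illustration only; the field names and values below are assumptions, not the published schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ClaimRecord:
    """Hypothetical sketch of one v1 public-contract claim record."""
    topic: str          # e.g. "Academic Integrity", "Teaching", "Privacy"
    claim: str          # the source-backed policy claim text
    evidence: str       # original-language evidence snippet
    source_url: str     # official source URL (placeholder below)
    snapshot_hash: str  # hex digest of the saved source snapshot
    confidence: float   # e.g. 0.95
    review_state: str   # e.g. "agent_reviewed", "needs_review"

# Illustrative record; the URL and hash are stand-in placeholders.
record = ClaimRecord(
    topic="Privacy",
    claim="MD and MSPA programs prohibit entering PHI into public AI platforms.",
    evidence="...no confidential research, patient data, or other PHI...",
    source_url="https://example.edu/ai-policy",
    snapshot_hash="57e3fd0e",
    confidence=0.95,
    review_state="agent_reviewed",
)
```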

Reviewed claims

9 reviewed public claims

Academic Integrity

Stanford's BCA has issued guidance on generative AI use, and the Office of Community Standards (OCS) recommends that instructors give students advance notice when AI detection software may be used to review submitted work.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
To give sufficient space for instructors to explore uses of generative AI tools in their courses, and to set clear guidelines to students about what uses are and are not consistent with the Stanford Honor Code, the BCA has set forth policy guidance regarding generative AI in the context of coursework. Note: Guidance adopted on February 16, 2023. As part of the BCA's guidance on clear communication of a course's generative AI policy, OCS recommends course instructors provide clear advance notice that they may use detection software to review work submitted for use of generative AI.

Teaching

For Stanford Graduate School of Business (GSB) MBA and MSx courses, instructors may not ban student use of AI tools for take-home coursework, including assignments and exams. Instructors may choose whether to allow AI for in-class work. For PhD and undergraduate courses, GSB follows the university-wide Generative AI Policy Guidance from the Office of Community Standards.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
GSB Policy MBA/MSx courses: Instructors may not ban student use of AI tools for take-home coursework, including assignments and exams. Instructors may choose whether to allow student use of AI tools for in-class work, including exams. PhD/Undergraduate courses: Follow the Generative AI Policy Guidance from Stanford's Office of Community Standards.

Teaching

Stanford School of Medicine MD and MSPA programs have a formal AI policy: students may use AI for learning, clarification, and grammar/style editing unless contrary to assignment instructions. AI use for closed-book exams or assignments where internet is restricted is prohibited unless explicitly authorized by faculty. Students are responsible for all AI-generated content they submit, must disclose and cite substantial AI contributions, and violations may result in disciplinary action.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
Learning and Clarification: Students may utilize AI to enhance their understanding of medical concepts, definitions, and for grammar/style editing, provided it does not conflict with specific assignment instructions or be inconsistent with faculty-authorized tasks. The use of AI tools is discouraged for any activities in which students are evaluated on their own knowledge or skills, unless explicitly granted permission by the faculty. Any substantial contributions from AI tools on assignments, presentations, and scholarly abstracts or proposals must be disclosed and properly cited. Students are responsible for any AI-generated content they submit, even if flawed or biased. All MD and MSPA...

Privacy

Stanford School of Medicine MD and MSPA programs strictly prohibit entering confidential research data, patient data, or protected health information (PHI) into public AI platforms. Use of patient-identifying information or PHI in public AI tools is strictly forbidden. Students must use Stanford-approved AI platforms (e.g., Stanford Healthcare Secure GPT, Stanford AI Playground) when handling sensitive data.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
To ensure that patient confidentiality is maintained at all times, no confidential research, patient data, or other protected health information (PHI) is to be entered into public AI platforms (e.g., ChatGPT, Gemini, OpenEvidence, Doximity, etc.). Patient Confidential Data: The use of patient-identifying information or protected health information (PHI) in public AI tools is strictly forbidden.

Teaching

Stanford Law School instructors set their own AI policies; in the absence of a course-specific policy, students may use generative AI to support learning and develop or refine their own ideas, but may not use AI to generate content presented as their own work. Using AI during an exam or to draft/revise submitted work is not permitted unless disclosed in advance and explicitly authorized in writing by the instructor. Unauthorized use may result in an F grade and/or referral to Stanford's Office of Community Standards.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
In the absence of a course-specific AI policy set by the instructor, students may use generative AI tools to support their learning and to aid in the development or refinement of their own ideas, provided they do not use such tools to generate content that is then presented as their own work. The use of generative AI while taking an exam or to draft or revise any portion of submitted work is not permitted unless (1) fully disclosed in advance of its use by the student to the instructor, and (2) explicitly authorized by the instructor in writing prior to the student's use of the tool. Unauthorized use of AI tools may result in a lower grade, including a grade of F, and/or a referral to Sta...

Teaching

Stanford's Program in Writing and Rhetoric (PWR) prohibits students from using generative AI or LLMs to compose drafts or revisions for any major assignment in PWR courses. This covers composing or revising any portion of an essay or script, as well as submitting paraphrases of LLM-generated writing, and students may not rely on generative AI summaries of sources. Violation of PWR's AI policy is treated as an Honor Code violation and results in referral to the Office of Community Standards.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
Students may not use generative AI or LLMs to compose the drafts or revisions for any of the major assignments in their PWR classes. This includes using generative AI to compose or revise portions of their essay or scripts (from individual phrases or sentences to longer passages) or including in their essays/scripts paraphrases of LLM-generated writing or paraphrases of source material generated by LLMs. Violation of PWR's AI policy is considered an Honor Code violation and will result in the involvement of Stanford's Office of Community Standards (OCS).

Academic Integrity

Stanford's Bechtel Center for Advising (BCA) provides guidance on generative AI in the context of the Honor Code, noting that acceptable AI use depends on individual instructor and course policies.

Review: Agent reviewed · Confidence: 90%

Original evidence

Evidence 1
To give sufficient space for instructors to explore uses of generative AI tools in their courses, and to set clear guidelines to students about what uses are and are not consistent with the Stanford Honor Code, the BCA has set forth policy guidance regarding generative AI in the context of coursework. Note: Guidance adopted on February 16, 2023.

Other

Stanford University IT (UIT) advises users to avoid inputting Moderate or High Risk Data into third-party AI platforms or tools not covered by a Stanford Business Associates Agreement, whether using a personal or Stanford account. Users should opt out of sharing chat data with third-party AI providers when possible.

Review: Agent reviewed · Confidence: 90%

Original evidence

Evidence 1
Avoid inputting any sensitive data, such as Moderate or High Risk Data, whether using a personal or Stanford account with a third-party AI platform or tool that is not covered by a Stanford Business Associates Agreement. It's recommended to opt out of sharing data for AI iterative learning wherever possible. If generative AI is to be used to interact with users, obtain their informed consent. If your final product is significantly influenced by an AI platform, consider informing people how you used AI and cite appropriately.

Other

Stanford University Communications (UComm) has issued AI guidelines for marketing and communications staff. The guidelines require human oversight of all AI-generated content (a personal responsibility that may not be delegated) and adherence to university policies, and they prohibit entering confidential or legally privileged information into generative AI tools, using AI to promote for-profit organizations or engage in political advocacy, and using high-risk data in prompts. Stanford AI Playground is recommended as the primary platform. These guidelines apply to all regular staff, interns, casual employees, and consultants in marketing and communications functions.

Review: Agent reviewed · Confidence: 90%

Original evidence

Evidence 1
These guidelines apply to all regular staff, interns, casual employees, and consultants. You are personally responsible for oversight of any content you produce using AI to ensure that content is accurate, in alignment with institutional values, and in compliance with the policies set forth in this document. You are personally responsible for conforming to this requirement and this responsibility may not be delegated to another employee. You may not provide any confidential or legally privileged information of Stanford or a third party to generative AI tools. Do not use high-risk data in your prompts or include such data as attachments to your prompts.

Candidate claims

0 machine or needs-review claims

Candidate claims are not final policy conclusions. They preserve source URL, source snapshot hash, evidence, confidence, and review state so the record can be audited before review.
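The audit step described above hinges on the snapshot hash: anyone can re-fetch or re-open a saved source snapshot, recompute its digest, and compare it to the recorded value. A minimal sketch, assuming the 64-character hashes listed under Official sources are SHA-256 hex digests (the actual algorithm is not stated on this page):

```python
import hashlib

def verify_snapshot(snapshot_bytes: bytes, recorded_hash: str) -> bool:
    """Recompute the SHA-256 digest of a saved source snapshot and
    compare it to the hash recorded in the public claim record."""
    return hashlib.sha256(snapshot_bytes).hexdigest() == recorded_hash.lower()

# Stand-in snapshot; a real snapshot would be the fetched page bytes.
snap = b"<html>example snapshot</html>"
recorded = hashlib.sha256(snap).hexdigest()

assert verify_snapshot(snap, recorded)          # untampered snapshot matches
assert not verify_snapshot(b"tampered", recorded)  # any change breaks the match
```

Because the digest is recorded at claim-creation time, a later mismatch signals that the saved snapshot no longer matches what the claim was extracted from.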

Official sources

13 source attributions

AI Meets Education at Stanford (AIMES)

ctl.stanford.edu

Snapshot hash
57e3fd0e204837f5675b4d7574bc0e80c8a3ea77ed35f25d633d980877f8d2f9

AI Playground Quick Start Guide - University IT

uit.stanford.edu

Snapshot hash
56d7c59c0b23d411ad4aa7ec660eadd239c0a02c6f4bbf186cd75e79054971e6

University IT AI Hub

uit.stanford.edu

Snapshot hash
4a81586fe631363a2723400e8cb208bdae6a10b4967630ca57c30a7c94b76005