academic_integrity
Stanford's Bechtel Center for Advising (BCA) issued guidance on generative AI use, and the Office of Community Standards recommends that instructors give advance notice to students when using AI detection software.
Open, evidence-backed AI policy records for public reuse.
Change log
Source-check timeline, source snapshot hashes, claim review state, and a diff-style preview of current source-backed claim evidence.
Current public record freshness and review state.
Stanford University currently has 9 source-backed claim records and 13 official source attributions. Latest tracked change: May 6, 2026.
This tracker is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.
Diff-style preview built from current public claim/evidence records. Full old/new source diffs require paired historical snapshots.
Inserted lines represent current public claim and evidence records in the source-backed dataset.
9 claim records
Stanford's BCA issued guidance on generative AI use, and the Office of Community Standards recommends that instructors give advance notice to students when using AI detection software.
Stanford's Bechtel Center for Advising (BCA) provides guidance on generative AI in the context of the Honor Code, noting that acceptable AI use depends on individual instructor and course policies.
For Stanford Graduate School of Business (GSB) MBA and MSx courses, instructors may not ban student use of AI tools for take-home coursework, including assignments and exams. Instructors may choose whether to allow AI for in-class work. For PhD and undergraduate courses, GSB follows the university-wide Generative AI Policy Guidance from the Office of Community Standards.
Stanford School of Medicine MD and MSPA programs have a formal AI policy: students may use AI for learning, clarification, and grammar/style editing unless contrary to assignment instructions. AI use on closed-book exams or assignments where internet use is restricted is prohibited unless explicitly authorized by faculty. Students are responsible for all AI-generated content they submit and must disclose and cite substantial AI contributions; violations may result in disciplinary action.
Stanford School of Medicine MD and MSPA programs strictly prohibit entering confidential research data, patient data, patient-identifying information, or protected health information (PHI) into public AI platforms. Students must use Stanford-approved AI platforms (e.g., Stanford Healthcare Secure GPT, Stanford AI Playground) when handling sensitive data.
Stanford Law School instructors set their own AI policies; in the absence of a course-specific policy, students may use generative AI to support learning and develop or refine their own ideas, but may not use AI to generate content presented as their own work. Using AI during an exam or to draft/revise submitted work is not permitted unless disclosed in advance and explicitly authorized in writing by the instructor. Unauthorized use may result in an F grade and/or referral to Stanford's Office of Community Standards.
Stanford's Program in Writing and Rhetoric (PWR) prohibits students from using generative AI or LLMs to compose drafts or revisions for any major assignment in PWR courses. This prohibition covers composing or revising portions of essays or scripts, as well as paraphrasing LLM-generated writing, and students may not rely on generative AI summaries of sources. Violation of PWR's AI policy is treated as an Honor Code violation and results in referral to the Office of Community Standards.
Stanford University IT (UIT) advises users to avoid inputting Moderate or High Risk Data into third-party AI platforms or tools not covered by a Stanford Business Associate Agreement, whether using a personal or Stanford account. Users should opt out of sharing chat data with third-party AI providers when possible.
Stanford University Communications (UComm) has issued AI guidelines for marketing and communications staff requiring human oversight of all AI-generated content (a non-delegable personal responsibility) and adherence to university policies, and prohibiting the input of confidential or legally privileged information into generative AI tools, the use of AI to promote for-profit organizations or engage in political advocacy, and the use of high-risk data in prompts. Stanford AI Playground is recommended as the primary platform. These guidelines apply to all regular staff, interns, casual employees, and consultants in marketing and communications functions.
13 source attributions
official_policy_page checked May 6, 2026
official_guidance checked May 6, 2026
official_policy_page checked May 6, 2026
official_guidance checked May 6, 2026
official_guidance checked May 6, 2026
official_guidance checked May 6, 2026
official_guidance checked May 6, 2026
official_policy_page checked May 6, 2026
official_policy_page checked May 6, 2026
official_policy_page checked May 6, 2026
official_guidance checked May 6, 2026
official_guidance checked May 6, 2026
official_policy_page checked May 6, 2026