Open, evidence-backed AI policy records for public reuse.
Change log
Source-check timeline, source snapshot hashes, claim review state, and a diff-style preview of current source-backed claim evidence.
Current public record freshness and review state.
University of Pennsylvania currently has 19 source-backed claim records and 6 official source attributions. Most recent tracked change: May 6, 2026.
This tracker is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.
Diff-style preview built from current public claim/evidence records. Full old/new source diffs require paired historical snapshots.
Inserted lines represent current public claim and evidence records in the source-backed dataset.
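The snapshot-hash and diff-preview mechanism described above could be sketched roughly as follows. This is an illustrative assumption, not the tracker's actual implementation: the function names and the use of SHA-256 and a unified diff are choices made here for the sketch.

```python
import difflib
import hashlib


def snapshot_hash(content: str) -> str:
    """Hash a fetched source snapshot so a later fetch can detect changes."""
    return hashlib.sha256(content.encode("utf-8")).hexdigest()


def diff_preview(old_snapshot: str, new_snapshot: str) -> list[str]:
    """Unified diff between paired snapshots; inserted lines start with '+'."""
    return list(
        difflib.unified_diff(
            old_snapshot.splitlines(),
            new_snapshot.splitlines(),
            fromfile="old",
            tofile="new",
            lineterm="",
        )
    )


old = "Penn provides licensed AI tools.\n"
new = "Penn provides several licensed AI tools.\n"

# A changed hash flags the source for re-review.
print(snapshot_hash(old) != snapshot_hash(new))  # True
for line in diff_preview(old, new):
    print(line)
```

Without a paired historical snapshot, only the "inserted" side of such a diff can be shown, which matches the preview behavior described above.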
19 claim records
Penn requires all community members (educators, staff, researchers, and students) to be transparent about the use of AI and to disclose when a work product was created wholly or partially using an AI tool.
Penn provides several licensed AI tools to its community, including Copilot Chat (Basic, free), Adobe Express (free), ChatGPT-EDU (purchase required), M365 Copilot Premium (purchase required), Gemini for Google Workspace (purchase required), Google NotebookLM (purchase required), Grammarly Pro (purchase required), Snowflake Data Analytics (purchase required), and Zoom AI Companion (free).
Users of AI at Penn are accountable for AI-generated content and should validate its accuracy against trusted first-party sources, remaining wary of misinformation and hallucinations.
Penn users should not input moderate or high-risk Penn data (per the Penn Data Risk Classification) or intellectual property into AI tools without careful consideration of data use policies, a protective contract, and review by Penn's Privacy Office and Office of Information Security.
It is not permissible under HIPAA or Penn Medicine policy to share patient or research participant information with open or public AI tools and services such as ChatGPT; individual patient data and data sets (even if deidentified) may not be exposed to such tools absent institutional approval.
Penn researchers should obtain IRB approvals prior to exposing research participant data to AI tools and should exercise caution when research involves high-risk data including PII and health information.
Penn's Office of Audit, Compliance & Privacy mandates that users of publicly available (unlicensed) AI tools must not enter any information that could identify a student, including names, ID numbers, email addresses, or detailed descriptions of student work or engagement that could be identifiable to others.
Penn mandates that student work (papers, projects) must not be entered into unlicensed AI tools without the student's permission, even if anonymized, because this work is part of the student's confidential academic record.
Instructors must not require students to enter their own work into unlicensed AI tools or use such tools in assignments; unlicensed tools may be used optionally by students at the instructor's discretion, but Penn-licensed tools should be used for mandatory coursework components.
Individual instructors at Penn determine their own policies related to acceptable student use of generative AI in coursework.
Penn community members should avoid uploading confidential or proprietary information to AI platforms prior to seeking patent or copyright protection, as doing so could jeopardize intellectual property rights.
University business processes using AI should have oversight, review, and verification of AI outputs in place to ensure reliability, consistency, and accuracy.
Penn educators should provide students with clear guidelines on the use of AI within coursework and should disclose to students when course materials have been created with AI or when AI detection software will be used.
In the absence of other guidance, Penn students should treat the use of AI as they would treat assistance from another person — if it is unacceptable to have another person substantially complete a task like writing an essay, it is also unacceptable to have AI complete the task.
Penn researchers should consult with department leadership and their discipline's publishing standards to determine how AI use should be accounted for with regard to authorship in publications.
At Wharton Academy, AI-generated work should be cited like any other reference material, including how and where students used AI-generated information; using AI-generated work without crediting the source is considered plagiarism.
Wharton Academy prohibits students from using AI to complete personal reflection or opinion-based tasks, from using AI to complete group assignments instead of collaborating with peers, and from using AI to cheat on exams or tests.
Wharton Academy prohibits students from directly copying answers from generative AI tools and submitting them as their own, from using AI to paraphrase or rewrite plagiarized content, and from posting AI-generated discussion posts within course community forums.
Wharton Academy permits students to use generative AI for brainstorming, learning efficiency, getting prompts, exploring different perspectives, asking for templates, getting preliminary feedback on written work, and language translation, at the discretion of faculty and instructional teams.
6 source attributions
official_guidance checked May 6, 2026
official_guidance checked May 6, 2026
official_guidance checked May 6, 2026
official_policy_page checked May 6, 2026
official_guidance checked May 6, 2026
official_guidance checked May 6, 2026