Change log

University of Pennsylvania

Source-check timeline, source snapshot hashes, claim review state, and a diff-style preview of current source-backed claim evidence.

Change summary

Current public record freshness and review state.

University of Pennsylvania currently has 19 source-backed claim records and 6 official source attributions. Latest tracked change date: May 6, 2026.

This tracker is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.

Claim/evidence diff preview

Diff-style preview built from current public claim/evidence records. Full old/new source diffs require paired historical snapshots.

University of Pennsylvania current policy evidence

Inserted lines represent current public claim and evidence records in the source-backed dataset.

+20-0
1 1 # University of Pennsylvania AI policy record
2+other: Penn requires all community members (educators, staff, researchers, and students) to be transparent about the use of AI and to disclose when a work product was created wholly or partially using an AI tool.
3+Evidence (en, 37de46a962d8): Be transparent about the use of AI. Disclose when a work product was created wholly or partially using an AI tool and, if appropriate, how AI was used to create the work product.
4+other: Penn provides several licensed AI tools to its community, including Copilot Chat (Basic, free), Adobe Express (free), ChatGPT-EDU (purchase required), M365 Copilot Premium (purchase required), Gemini for Google Workspace (purchase required), Google NotebookLM (purchase required), Grammarly Pro (purchase required), Snowflake Data Analytics (purchase required), and Zoom AI Companion (free).
5+Evidence (en, 1915e317dd89): Penn offers access to a number of AI tools. The guidelines for protecting student privacy while using these tools are informed by the data risk classification and the privacy agreements of the tool being used.
6+other: Users of AI at Penn are accountable for AI-generated content and should validate its accuracy with trusted first-party sources, being wary of misinformation or hallucinations.
7+Evidence (en, 37de46a962d8): The user of AI should endeavor to validate the accuracy of created content with trusted first party sources and monitor the reliability of that content. Users are accountable for their use of content created by AI and should be wary of misinformation or "hallucinations" by AI tools (e.g., citations to publications or source materials that do not exist or references that otherwise distort the truth).
8+other: Penn users should not input moderate or high-risk Penn data (per the Penn Data Risk Classification) or intellectual property into AI tools without careful consideration of data use policies, a protective contract, and review by Penn's Privacy Office and Office of Information Security.
9+Evidence (en, 37de46a962d8): Users of AI should avoid sharing personal or sensitive data with the tool and should not input moderate or high-risk Penn data as defined by the Penn Data Risk Classification, or intellectual property, without: Careful consideration and understanding of the tool's use of Penn data and the service provider's stated rights to the data, including, but not limited to whether the service provider offers the option to opt-out of using customer's data to train the AI; A contract in place to protect Penn data; and Review by Penn's Privacy Office and consultation with the Office of Information Security as coordinated by Procurement when moderate or high-risk data is involved.
10+other: It is not permissible under HIPAA or Penn Medicine policy to share patient or research participant information with open or public AI tools and services such as ChatGPT; individual patient data and data sets (even if deidentified) may not be exposed to such tools absent institutional approval.
11+Evidence (en, 37de46a962d8): It is not permissible under the Health Insurance Portability and Accountability Act (HIPAA) or Penn Medicine policy to share patient or research participant information in connection with open or public AI tools and services, such as ChatGPT. This is because, as currently configured, such open or public tools and services can use and share any data without regard to HIPAA restrictions and other protections. Therefore, individual patient data and patient data sets (even if deidentified) may not be exposed to open or public AI tools or services, absent institutional approval.
12+other: Penn researchers should obtain IRB approvals prior to exposing research participant data to AI tools and should exercise caution when research involves high-risk data including PII and health information.
13+Evidence (en, 37de46a962d8): Researchers should adhere to federal or international requirements on obtaining informed consent, and Institutional Review Board approvals should be obtained prior to exposing research participant data to AI tools. Caution should be adopted when research involves the examination of high-risk data, including Personally Identifiable Information (PII) and research participant health information (both identifiable and non-identifiable) exposed to AI.
14+other: Penn's Office of Audit, Compliance & Privacy mandates that users of publicly available (unlicensed) AI tools must not enter any information that could identify a student, including names, ID numbers, email addresses, or detailed descriptions of student work or engagement that could be identifiable to others.
15+Evidence (en, 1915e317dd89): Do not enter any information that could identify a student. This includes names, ID numbers, or email addresses, as well as detailed descriptions of student work or engagement in class that could be identifiable to others.
16+other: Penn mandates that student work (papers, projects) must not be entered into unlicensed AI tools without the student's permission, even if anonymized, because this work is part of the student's confidential academic record.
17+Evidence (en, 1915e317dd89): Do not enter student work (e.g., papers, projects) without the student's permission, even if it is anonymized. This work is part of the student's confidential academic record.
18+other: Instructors must not require students to enter their own work into unlicensed AI tools or use such tools in assignments; unlicensed tools may be used optionally by students at the instructor's discretion, but Penn-licensed tools should be used for mandatory coursework components.
19+Evidence (en, 1915e317dd89): Do not require students to enter their own work into an unlicensed AI tool or use it in assignments. Unlicensed tools may be used optionally by students at the instructor's discretion but consider using a Penn-licensed tool for mandatory components of coursework to protect student data.
20+other: Individual instructors at Penn determine their own policies related to acceptable student use of generative AI in coursework.
21+Evidence (en, 1915e317dd89): Individual instructors determine their own policies related to acceptable student use of generative AI in coursework.

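Each evidence line above carries a 12-hex-character identifier (e.g. `37de46a962d8`) that appears to be a prefix of a full 64-character snapshot hash listed under Source snapshots. A minimal sketch of how such a prefix could be resolved back to its full snapshot record (the function name and the uniqueness requirement are illustrative assumptions, not part of the tracker's published interface):

```python
# Assumption: each evidence record stores a short (12-hex) prefix of the
# full SHA-256 snapshot hash, and prefixes are unique within the dataset.
snapshot_hashes = [
    "37de46a962d87d00351f27cd01c2b6f1efdd0f012d8073df6f3383a3f2497bb5",
]

def resolve_snapshot(prefix: str, hashes=snapshot_hashes) -> str:
    """Return the unique full snapshot hash matching an evidence prefix."""
    matches = [h for h in hashes if h.startswith(prefix)]
    if len(matches) != 1:
        raise LookupError(f"prefix {prefix!r} matched {len(matches)} snapshots")
    return matches[0]

print(resolve_snapshot("37de46a962d8")[:12])  # → 37de46a962d8
```

This mirrors how abbreviated content hashes are resolved in systems like Git: the prefix is only a valid reference while it remains unambiguous, so a growing snapshot set may eventually require longer prefixes.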
Claim changes

19 claim records

other

Penn requires all community members (educators, staff, researchers, and students) to be transparent about the use of AI and to disclose when a work product was created wholly or partially using an AI tool.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

other

Penn provides several licensed AI tools to its community, including Copilot Chat (Basic, free), Adobe Express (free), ChatGPT-EDU (purchase required), M365 Copilot Premium (purchase required), Gemini for Google Workspace (purchase required), Google NotebookLM (purchase required), Grammarly Pro (purchase required), Snowflake Data Analytics (purchase required), and Zoom AI Companion (free).

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

other

Users of AI at Penn are accountable for AI-generated content and should validate its accuracy with trusted first-party sources, being wary of misinformation or hallucinations.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

other

Penn users should not input moderate or high-risk Penn data (per the Penn Data Risk Classification) or intellectual property into AI tools without careful consideration of data use policies, a protective contract, and review by Penn's Privacy Office and Office of Information Security.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

other

It is not permissible under HIPAA or Penn Medicine policy to share patient or research participant information with open or public AI tools and services such as ChatGPT; individual patient data and data sets (even if deidentified) may not be exposed to such tools absent institutional approval.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

other

Penn researchers should obtain IRB approvals prior to exposing research participant data to AI tools and should exercise caution when research involves high-risk data including PII and health information.

Review: Agent reviewed · Confidence: 90% · Evidence: 1 · Languages: en

other

Penn's Office of Audit, Compliance & Privacy mandates that users of publicly available (unlicensed) AI tools must not enter any information that could identify a student, including names, ID numbers, email addresses, or detailed descriptions of student work or engagement that could be identifiable to others.

Review: Agent reviewed · Confidence: 90% · Evidence: 1 · Languages: en

other

Penn mandates that student work (papers, projects) must not be entered into unlicensed AI tools without the student's permission, even if anonymized, because this work is part of the student's confidential academic record.

Review: Agent reviewed · Confidence: 90% · Evidence: 1 · Languages: en

other

Instructors must not require students to enter their own work into unlicensed AI tools or use such tools in assignments; unlicensed tools may be used optionally by students at the instructor's discretion, but Penn-licensed tools should be used for mandatory coursework components.

Review: Agent reviewed · Confidence: 90% · Evidence: 1 · Languages: en

other

Individual instructors at Penn determine their own policies related to acceptable student use of generative AI in coursework.

Review: Agent reviewed · Confidence: 90% · Evidence: 1 · Languages: en

other

Penn community members should avoid uploading confidential or proprietary information to AI platforms prior to seeking patent or copyright protection, as doing so could jeopardize intellectual property rights.

Review: Agent reviewed · Confidence: 90% · Evidence: 1 · Languages: en

other

University business processes using AI should have oversight, review, and verification of AI outputs in place to ensure reliability, consistency, and accuracy.

Review: Agent reviewed · Confidence: 90% · Evidence: 1 · Languages: en

other

Penn educators should provide students with clear guidelines on the use of AI within coursework and should disclose to students when course materials have been created with AI or when AI detection software will be used.

Review: Agent reviewed · Confidence: 90% · Evidence: 1 · Languages: en

other

In the absence of other guidance, Penn students should treat the use of AI as they would treat assistance from another person — if it is unacceptable to have another person substantially complete a task like writing an essay, it is also unacceptable to have AI complete the task.

Review: Agent reviewed · Confidence: 90% · Evidence: 1 · Languages: en

other

Penn researchers should consult with department leadership and their discipline's publishing standards to determine how AI use should be accounted for with regard to authorship in publications.

Review: Agent reviewed · Confidence: 90% · Evidence: 1 · Languages: en

other

At Wharton Academy, AI-generated work should be cited like any other reference material, including how and where students used AI-generated information; using AI-generated work without crediting the source is considered plagiarism.

Review: Agent reviewed · Confidence: 85% · Evidence: 1 · Languages: en

other

Wharton Academy prohibits students from using AI to complete personal reflection or opinion-based tasks, from using AI to complete group assignments instead of collaborating with peers, and from using AI to cheat on exams or tests.

Review: Agent reviewed · Confidence: 85% · Evidence: 1 · Languages: en

other

Wharton Academy prohibits students from directly copying answers from generative AI tools and submitting them as their own, from using AI to paraphrase or rewrite plagiarized content, and from posting AI-generated discussion posts within course community forums.

Review: Agent reviewed · Confidence: 85% · Evidence: 1 · Languages: en

other

Wharton Academy permits students to use generative AI for brainstorming, learning efficiency, getting prompts, exploring different perspectives, asking for templates, getting preliminary feedback on written work, and language translation, at the discretion of faculty and instructional teams.

Review: Agent reviewed · Confidence: 80% · Evidence: 1 · Languages: en

Source snapshots

6 source attributions

Statement on Guidance for the University of Pennsylvania Community on Use of Generative Artificial Intelligence

official_guidance · checked May 6, 2026

Snapshot hash
37de46a962d87d00351f27cd01c2b6f1efdd0f012d8073df6f3383a3f2497bb5