Philadelphia, United States

University of Pennsylvania

The University of Pennsylvania is ranked 15 in QS 2026. It has 19 source-backed AI policy claim records from 6 official source attributions. The public record preserves original-language evidence snippets, source URLs, snapshot hashes, confidence, and review state.

Citation-ready overview

v1 public contract


Reviewed claims: 19 · Candidate claims: 0 · Official sources: 6

Candidate claims are source-backed records pending review. They are not final policy conclusions and are not legal or academic integrity advice.

Reviewed claims

19 reviewed public claims

Other

Penn requires all community members (educators, staff, researchers, and students) to be transparent about the use of AI and to disclose when a work product was created wholly or partially using an AI tool.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
Be transparent about the use of AI. Disclose when a work product was created wholly or partially using an AI tool and, if appropriate, how AI was used to create the work product.

Other

Penn provides several licensed AI tools to its community, including Copilot Chat (Basic, free), Adobe Express (free), ChatGPT-EDU (purchase required), M365 Copilot Premium (purchase required), Gemini for Google Workspace (purchase required), Google NotebookLM (purchase required), Grammarly Pro (purchase required), Snowflake Data Analytics (purchase required), and Zoom AI Companion (free).

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
Penn offers access to a number of AI tools. The guidelines for protecting student privacy while using these tools are informed by the data risk classification and the privacy agreements of the tool being used.

Other

Users of AI at Penn are accountable for AI-generated content and should validate its accuracy with trusted first-party sources, being wary of misinformation or hallucinations.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
The user of AI should endeavor to validate the accuracy of created content with trusted first party sources and monitor the reliability of that content. Users are accountable for their use of content created by AI and should be wary of misinformation or "hallucinations" by AI tools (e.g., citations to publications or source materials that do not exist or references that otherwise distort the truth).

Other

Penn users should not input moderate or high-risk Penn data (per the Penn Data Risk Classification) or intellectual property into AI tools without careful consideration of data use policies, a protective contract, and review by Penn's Privacy Office and Office of Information Security.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
Users of AI should avoid sharing personal or sensitive data with the tool and should not input moderate or high-risk Penn data as defined by the Penn Data Risk Classification, or intellectual property, without: Careful consideration and understanding of the tool's use of Penn data and the service provider's stated rights to the data, including, but not limited to whether the service provider offers the option to opt-out of using customer's data to train the AI; A contract in place to protect Penn data; and Review by Penn's Privacy Office and consultation with the Office of Information Security as coordinated by Procurement when moderate or high-risk data is involved.

Other

It is not permissible under HIPAA or Penn Medicine policy to share patient or research participant information with open or public AI tools and services such as ChatGPT; individual patient data and data sets (even if deidentified) may not be exposed to such tools absent institutional approval.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
It is not permissible under the Health Information Portability & Accountability Act (HIPAA) or Penn Medicine policy to share patient or research participant information in connection with open or public AI tools and services, such as ChatGPT. This is because, as currently configured, such open or public tools and services can use and share any data without regard to HIPAA restrictions and other protections. Therefore, individual patient data and patient data sets (even if deidentified) may not be exposed to open or public AI tools or services, absent institutional approval.

Other

Penn researchers should obtain IRB approvals prior to exposing research participant data to AI tools and should exercise caution when research involves high-risk data including PII and health information.

Review: Agent reviewed · Confidence: 90%

Original evidence

Evidence 1
Researchers should adhere to federal or international requirements on obtaining informed consent, and Institutional Review Board approvals should be obtained prior to exposing research participant data to AI tools. Caution should be adopted when research involves the examination of high-risk data, including Personally Identifiable Information (PII) and research participant health information (both identifiable and non-identifiable) exposed to AI.

Other

Penn's Office of Audit, Compliance & Privacy mandates that users of publicly available (unlicensed) AI tools must not enter any information that could identify a student, including names, ID numbers, email addresses, or detailed descriptions of student work or engagement that could be identifiable to others.

Review: Agent reviewed · Confidence: 90%

Original evidence

Evidence 1
Do not enter any information that could identify a student. This includes names, ID numbers, or email addresses, as well as detailed descriptions of student work or engagement in class that could be identifiable to others.

Other

Penn mandates that student work (papers, projects) must not be entered into unlicensed AI tools without the student's permission, even if anonymized, because this work is part of the student's confidential academic record.

Review: Agent reviewed · Confidence: 90%

Original evidence

Evidence 1
Do not enter student work (e.g., papers, projects) without the student's permission, even if it is anonymized. This work is part of the student's confidential academic record.

Other

Instructors must not require students to enter their own work into unlicensed AI tools or use such tools in assignments; unlicensed tools may be used optionally by students at the instructor's discretion, but Penn-licensed tools should be used for mandatory coursework components.

Review: Agent reviewed · Confidence: 90%

Original evidence

Evidence 1
Do not require students to enter their own work into an unlicensed AI tool or use it in assignments. Unlicensed tools may be used optionally by students at the instructor's discretion but consider using a Penn-licensed tool for mandatory components of coursework to protect student data.

Other

Individual instructors at Penn determine their own policies related to acceptable student use of generative AI in coursework.

Review: Agent reviewed · Confidence: 90%

Original evidence

Evidence 1
Individual instructors determine their own policies related to acceptable student use of generative AI in coursework.

Other

Penn community members should avoid uploading confidential or proprietary information to AI platforms prior to seeking patent or copyright protection, as doing so could jeopardize intellectual property rights.

Review: Agent reviewed · Confidence: 90%

Original evidence

Evidence 1
Members of the Penn community should adhere to established principles of respect for intellectual property, particularly copyrights when considering the creation of new data sets for training AI models. Avoid uploading confidential and/or proprietary information to AI platforms prior to seeking patent or copyright protection, as doing so could jeopardize IP rights.

Other

University business processes using AI should have oversight, review, and verification of AI outputs in place to ensure reliability, consistency, and accuracy.

Review: Agent reviewed · Confidence: 90%

Original evidence

Evidence 1
While automating tasks using AI may improve operational efficiency for University Business processes, oversight and review of the use of AI and verification of its outputs for these University business processes should be in place to ensure reliability, consistency, and accuracy.

Other

Penn educators should provide students with clear guidelines on the use of AI within coursework and should disclose to students when course materials have been created with AI or when AI detection software will be used.

Review: Agent reviewed · Confidence: 90%

Original evidence

Evidence 1
As expectations may vary between classes and instructors, it is important for instructors to provide students with clear guidelines similar to the guidelines on collaboration, on the use of AI within coursework, and when and how the use of AI within a course should be cited. Disclose to students when course materials have been created with the use of AI and when AI detection software will be used in the course.

Other

In the absence of other guidance, Penn students should treat the use of AI as they would treat assistance from another person — if it is unacceptable to have another person substantially complete a task like writing an essay, it is also unacceptable to have AI complete the task.

Review: Agent reviewed · Confidence: 90%

Original evidence

Evidence 1
In the absence of other guidance, treat the use of AI as you would treat assistance from another person. For example, this means if it is unacceptable to have another person substantially complete a task like writing an essay, it is also unacceptable to have AI to complete the task.

Other

Penn researchers should consult with department leadership and their discipline's publishing standards to determine how AI use should be accounted for with regard to authorship in publications.

Review: Agent reviewed · Confidence: 90%

Original evidence

Evidence 1
Consult with your department leadership and your discipline's publishing standards to determine how the use of AI should be accounted for with regard to authorship in publications.

Other

At Wharton Academy, AI-generated work should be cited like any other reference material, including how and where students used AI-generated information; using AI-generated work without crediting the source is considered plagiarism.

Review: Agent reviewed · Confidence: 85%

Original evidence

Evidence 1
Using somebody else's work without crediting the source – including generative AI — is plagiarism. Guided by the policies of faculty, instructional teams, and staff, AI-generated work should be cited like any other reference material, including how and where students used AI-generated information.

Other

Wharton Academy prohibits students from using AI to complete personal reflection or opinion-based tasks, from using AI to complete group assignments instead of collaborating with peers, and from using AI to cheat on exams or tests.

Review: Agent reviewed · Confidence: 85%

Original evidence

Evidence 1
Don't use AI for personal reflection or opinion-based tasks. We are interested in hearing your opinions, your stories and your thoughts. Don't rely on AI for group assignments. Using AI to complete your portion of a group project instead of collaborating with your peers is considered academic dishonesty. Don't use AI to cheat on exams or tests.

Other

Wharton Academy prohibits students from directly copying answers from generative AI tools and submitting them as their own, from using AI to paraphrase or rewrite plagiarized content, and from posting AI-generated discussion posts within course community forums.

Review: Agent reviewed · Confidence: 85%

Original evidence

Evidence 1
Don't directly copy answers from generative AI tools and submit as your own. Don't use AI to paraphrase or rewrite plagiarized content. Don't post AI-generated discussion posts within the course community forum.

Other

Wharton Academy permits students to use generative AI for brainstorming, learning efficiency, getting prompts, exploring different perspectives, asking for templates, getting preliminary feedback on written work, and language translation, at the discretion of faculty and instructional teams.

Review: Agent reviewed · Confidence: 80%

Original evidence

Evidence 1
At the discretion of faculty, instructional teams and staff, Wharton Academy students may use generative AI tools.

Candidate claims

0 machine or needs-review claims

Candidate claims are not final policy conclusions. They preserve source URL, source snapshot hash, evidence, confidence, and review state so the record can be audited before review.

Official sources

6 source attributions

Statement on Guidance for the University of Pennsylvania Community on Use of Generative Artificial Intelligence

isc.upenn.edu

Snapshot hash
37de46a962d87d00351f27cd01c2b6f1efdd0f012d8073df6f3383a3f2497bb5
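The snapshot hash above is consistent with a SHA-256 hex digest. As a minimal sketch of how such a record could be audited (assuming SHA-256; the `snapshot_hash` helper and placeholder bytes are hypothetical, not part of the site's actual tooling):

```python
import hashlib

def snapshot_hash(snapshot_bytes: bytes) -> str:
    # Hex digest of the archived page bytes, assuming SHA-256 was used.
    return hashlib.sha256(snapshot_bytes).hexdigest()

# Audit check: re-hash a locally saved snapshot and compare it to the
# recorded value. The bytes below are a placeholder, not the real snapshot.
recorded = "37de46a962d87d00351f27cd01c2b6f1efdd0f012d8073df6f3383a3f2497bb5"
snapshot = b"...archived page bytes..."
matches = snapshot_hash(snapshot) == recorded
```

If the digests match, the evidence snippets quoted above can be tied back to the exact page version that was captured.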