# Princeton University AI policy record
privacy: Princeton University requires that only University-licensed generative AI tools be used with University Information classified as Internal or Confidential; the use of publicly available generative AI tools in conjunction with such Princeton Information is not permitted by the University.
Evidence (en, 6ec973c33d6a): University Information classified as Internal or Confidential under the University's Information Security Policy. The use of publicly available generative AI tools in conjunction with such Princeton Information is not permitted by the University.
academic_integrity: Princeton University requires students to disclose the use of generative AI when permitted by the instructor, rather than cite or acknowledge it, since generative AI is an algorithm rather than a source (Rights, Rules, Responsibilities section 2.4.7).
Evidence (en, 40e8a1bd0cfb): As defined in section 2.4.7, generative artificial intelligence (AI) is not a source, since its output is not produced by a person. If generative AI is permitted by the instructor (for brainstorming, outlining, etc.), students must disclose its use rather than cite or acknowledge the use, since it is an algorithm rather than a source. All the tenets of scholarly integrity apply to use of generative AI: students must not pass off any output by generative AI as their own, and so a failure of disclosure, even in a course where generative AI is permitted, is a scholarly integrity violation.
academic_integrity: Princeton University states that inappropriate uses of generative AI on any work submitted to fulfill an academic requirement, including directly copying the output, representing output as the student's own, exceeding instructor parameters, or failing to disclose its use, would constitute violations of academic integrity (Rights, Rules, Responsibilities section 2.4.6).
Evidence (en, 40e8a1bd0cfb): Students are responsible for familiarizing themselves with and adhering to course and departmental policies regarding the use of generative AI. Inappropriate uses of the results of generative AI on any work submitted to fulfill an academic requirement, such as directly copying the output, representing output generated by or derived from generative AI as their own, exceeding the parameters specified by the instructor, or failing to disclose its use, would constitute violations of academic integrity.
teaching: Princeton University states that the decision to allow, limit, or prohibit generative AI in a course or in undergraduate independent work remains with the faculty; faculty members have the discretion to set their own generative AI policy for their courses.
Evidence (en, 31268a5979e6): First and foremost, the decision to allow, limit, or prohibit generative AI in a course or in undergraduate independent work will remain our faculty's. Faculty members have the discretion to set their own generative AI policy for their courses.
teaching: Princeton University requires faculty to set clear expectations for whether, when, and how generative AI can be used, and to state those expectations in the course syllabus.
Evidence (en, 147b153f2061): Faculty must set clear expectations for whether, when, and how generative AI can be used and state those expectations in the course syllabus. Work created with the assistance of AI tools should never be a proxy for original work.
teaching: Princeton University's McGraw Center for Teaching and Learning recommends that faculty not use AI detection software to determine whether student work is AI-generated, stating that detection tools are unreliable and biased.
Evidence (en, 2707a96d56f7): Though companies like Turnitin, ZeroGPT, and OpenAI have all developed AI detection capabilities, we do not recommend you use such software to attempt to determine if student work is AI-generated. Our recommendation against using these tools is based both on Princeton's standards for scholarly integrity and the practical limits of these tools. Detection tools seem unreliable at best and biased at worst.
ai_tool_treatment: Princeton University's Office of Information Technology states that Microsoft Copilot is currently the only generative AI tool made available by OIT, and that when logged in with a Princeton University account, Copilot provides Enterprise Data Protection, under which prompts and responses are not used to train the underlying large language models and chat data is encrypted.
Evidence (en, c7f89bb594fe): Copilot is currently the only generative AI tool made available by the Office of Information Technology (OIT). When you are logged in to Copilot with your Princeton University account, you are using Copilot with Enterprise Data Protection which better protects information.
privacy: Princeton University's OIT guidance states that non-public Princeton data should not be used in public generative AI tools, and that University Information classified as Restricted must not be used with any AI tool.
Evidence (en, e8dab856f2ce): Don't use non-public Princeton data in public Gen AI tools. University Information classified as Restricted must not be used with any AI tool.