# Harvard University AI policy record

other: University-wide: Level 2 and above confidential data (including non-public research data, finance, HR, student records, medical information) should not be entered into publicly-available generative AI tools. Such data may only be entered into generative AI tools that have been assessed and approved by Harvard's Information Security and Data Privacy office.

Evidence (en, 9d196aae4d26): You should not enter data classified as confidential (Level 2 and above, including non-public research data, finance, HR, student records, medical information, etc.) into publicly-available generative AI tools, in accordance with the University's Information Security Policy. Information shared with generative AI tools using default settings is not private and could expose proprietary or sensitive information to unauthorized parties. Level 2 and above confidential data must only be entered into generative AI tools that have been assessed and approved for such use by Harvard's Information Security and Data Privacy office.

teaching: FAS (Faculty of Arts and Sciences) Office of Undergraduate Education policy: All faculty are required to inform students of the policies governing generative AI use in class. Faculty should post their AI policy on their Canvas site.

Evidence (en, fb0bf75a8ed5): All faculty are required to inform students of the policies governing generative AI use in class. ... Once you decide on a policy, make sure you articulate it clearly for your students, so that they know what is expected of them. More specifically, you should post your policy on your Canvas site.

procurement: University-wide: All vendor generative AI tools not currently offered by HUIT must be assessed for risk by Harvard's Information Security and Data Privacy office prior to use in Harvard work. Contact HUIT before procuring any generative AI tool.

Evidence (en, 9d196aae4d26): If you are considering procuring a generative AI tool not currently offered or have questions, please contact HUIT. All vendor generative AI tools must be assessed for risk by Harvard's Information Security and Data Privacy office prior to use in Harvard work.

privacy: University-wide: AI meeting assistants (AI note takers or bots) should not be used in Harvard meetings, with the exception of approved tools with contractual protections including enterprise agreements with appropriate security and privacy protections, or tools as part of limited HUIT-directed pilot programs.

Evidence (en, 4a4ff250f7ca): AI meeting assistants should not be used in Harvard meetings, with the exception of approved tools with contractual protections: Use only AI assistants for which Harvard has an enterprise agreement with the vendor including appropriate security and privacy protections, including: Approved tools as part of limited HUIT-directed pilot programs to evaluate the use of AI assistants within the Harvard environment.

other: University-wide: Users are responsible for any content they publish or share that includes AI-generated material. AI-generated content may be inaccurate, misleading, entirely fabricated (hallucinations), or contain copyrighted material.

Evidence (en, 9d196aae4d26): AI-generated content can be inaccurate, misleading, or entirely fabricated (sometimes called "hallucinations") or may contain copyrighted material. You are responsible for any content that you publish or share that includes AI-generated material.

academic_integrity: HGSE (Harvard Graduate School of Education) school-level policy: Unless otherwise specified by the instructor, using generative AI to create all or part of an assignment (e.g., paper, memo, presentation, short response) and submitting it as one's own work violates the HGSE Academic Integrity Policy. Permissible uses include seeking clarification on concepts, brainstorming ideas, or generating scenarios that help contextualize learning.

Evidence (en, 3e6fca239176): Unless otherwise specified by your instructor, it is a violation of the HGSE Academic Integrity Policy to use generative AI to create all or part of an assignment for a course (e.g., a paper, memo, presentation, or short response) and submit it as your own. Permissible uses of generative AI in HGSE coursework include seeking clarification on concepts, brainstorming ideas, or generating scenarios that help contextualize what you are learning.

academic_integrity: HGSE (Harvard Graduate School of Education) school-level policy: For any permitted use of generative AI tools, students must acknowledge and document that use in their assignment submission by explaining what tool(s) were used, prompts provided, and how the output was integrated into the work. Direct citations must use proper citation format.

Evidence (en, 3e6fca239176): For any permitted use of GenAI tools, you must acknowledge and document that use in your assignment submission by explaining what tool(s) you used, prompts you provided (if applicable), and how you integrated the output into your work. If you cite directly from the tool, use proper citation format to credit the source.

privacy: HGSE (Harvard Graduate School of Education) school-level policy: It is forbidden to make personal recordings of any course meetings, with or without AI tool integrations. Uploading substantial course content is only allowable through the Harvard-approved AI Sandbox.

Evidence (en, 3e6fca239176): It is forbidden to make your own recording of any course meetings, with or without AI tool integrations. If you require or would prefer that course meetings be recorded, discuss this request with your instructor. Uploading any substantial course content (including text, video, readings, discussion-board pages, or audio recordings) is only allowable through the Harvard-approved AI Sandbox.

privacy: FAS (Faculty of Arts and Sciences) Office of Undergraduate Education guidance: Faculty must get documented permission from students before putting original student content into any generative AI tool. No confidential information can be loaded into generative AI systems since there is no expectation of privacy or confidentiality.

Evidence (en, fb0bf75a8ed5): Faculty must get documented permission from students before putting original student content into any generative AI tool, and students should be made aware of the risks of entering their original work into such tools. No confidential information can be loaded into GAI systems, since there is no expectation of privacy or confidentiality.

academic_integrity: HMS (Harvard Medical School) Academic and Research Integrity guidance: AI tools cannot be listed as authors on a paper. Authors should be transparent when AI tools are used and provide information about how AI tools were used.

Evidence (en, 5a0b3ca35d6c): AI Tools cannot be listed as an author on a paper. Authors should be transparent when AI tools are used and provide information about how AI tools were used.