Change log

Harvard University

Source-check timeline, source snapshot hashes, claim review state, and a diff-style preview of current source-backed claim evidence.

Change summary

Current public record freshness and review state.

Harvard University currently has 12 source-backed claim records and 12 official source attributions. Latest tracked change: May 6, 2026.

This tracker is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.

Claim/evidence diff preview

Diff-style preview built from current public claim/evidence records. Full old/new source diffs require paired historical snapshots.

Harvard University current policy evidence

Inserted lines represent current public claim and evidence records in the source-backed dataset.

+20 −0
1 # Harvard University AI policy record
2+other: University-wide: Level 2 and above confidential data (including non-public research data, finance, HR, student records, medical information) should not be entered into publicly-available generative AI tools. Such data may only be entered into generative AI tools that have been assessed and approved by Harvard's Information Security and Data Privacy office.
3+Evidence (en, 9d196aae4d26): You should not enter data classified as confidential (Level 2 and above, including non-public research data, finance, HR, student records, medical information, etc.) into publicly-available generative AI tools, in accordance with the University's Information Security Policy. Information shared with generative AI tools using default settings is not private and could expose proprietary or sensitive information to unauthorized parties. Level 2 and above confidential data must only be entered into generative AI tools that have been assessed and approved for such use by Harvard's Information Security and Data Privacy office.
4+teaching: FAS (Faculty of Arts and Sciences) Office of Undergraduate Education policy: All faculty are required to inform students of the policies governing generative AI use in class. Faculty should post their AI policy on their Canvas site.
5+Evidence (en, fb0bf75a8ed5): All faculty are required to inform students of the policies governing generative AI use in class. ... Once you decide on a policy, make sure you articulate it clearly for your students, so that they know what is expected of them. More specifically, you should post your policy on your Canvas site.
6+procurement: University-wide: All vendor generative AI tools not currently offered by HUIT must be assessed for risk by Harvard's Information Security and Data Privacy office prior to use in Harvard work. Contact HUIT before procuring any generative AI tool.
7+Evidence (en, 9d196aae4d26): If you are considering procuring a generative AI tool not currently offered or have questions, please contact HUIT. All vendor generative AI tools must be assessed for risk by Harvard's Information Security and Data Privacy office prior to use in Harvard work.
8+privacy: University-wide: AI meeting assistants (AI note takers or bots) should not be used in Harvard meetings, with the exception of approved tools with contractual protections including enterprise agreements with appropriate security and privacy protections, or tools as part of limited HUIT-directed pilot programs.
9+Evidence (en, 4a4ff250f7ca): AI meeting assistants should not be used in Harvard meetings, with the exception of approved tools with contractual protections: Use only AI assistants for which Harvard has an enterprise agreement with the vendor including appropriate security and privacy protections, including: Approved tools as part of limited HUIT-directed pilot programs to evaluate the use of AI assistants within the Harvard environment.
10+other: University-wide: Users are responsible for any content they publish or share that includes AI-generated material. AI-generated content may be inaccurate, misleading, entirely fabricated (hallucinations), or contain copyrighted material.
11+Evidence (en, 9d196aae4d26): AI-generated content can be inaccurate, misleading, or entirely fabricated (sometimes called "hallucinations") or may contain copyrighted material. You are responsible for any content that you publish or share that includes AI-generated material.
12+academic_integrity: HGSE (Harvard Graduate School of Education) school-level policy: Unless otherwise specified by the instructor, using generative AI to create all or part of an assignment (e.g., paper, memo, presentation, short response) and submitting it as one's own work violates the HGSE Academic Integrity Policy. Permissible uses include seeking clarification on concepts, brainstorming ideas, or generating scenarios that help contextualize learning.
13+Evidence (en, 3e6fca239176): Unless otherwise specified by your instructor, it is a violation of the HGSE Academic Integrity Policy to use generative AI to create all or part of an assignment for a course (e.g., a paper, memo, presentation, or short response) and submit it as your own. Permissible uses of generative AI in HGSE coursework include seeking clarification on concepts, brainstorming ideas, or generating scenarios that help contextualize what you are learning.
14+academic_integrity: HGSE (Harvard Graduate School of Education) school-level policy: For any permitted use of generative AI tools, students must acknowledge and document that use in their assignment submission by explaining what tool(s) were used, prompts provided, and how the output was integrated into the work. Direct citations must use proper citation format.
15+Evidence (en, 3e6fca239176): For any permitted use of GenAI tools, you must acknowledge and document that use in your assignment submission by explaining what tool(s) you used, prompts you provided (if applicable), and how you integrated the output into your work. If you cite directly from the tool, use proper citation format to credit the source.
16+privacy: HGSE (Harvard Graduate School of Education) school-level policy: It is forbidden to make personal recordings of any course meetings, with or without AI tool integrations. Uploading substantial course content is only allowable through the Harvard-approved AI Sandbox.
17+Evidence (en, 3e6fca239176): It is forbidden to make your own recording of any course meetings, with or without AI tool integrations. If you require or would prefer that course meetings be recorded, discuss this request with your instructor. Uploading any substantial course content — including text, video, readings, discussion-board pages, or audio recordings — is only allowable through the Harvard-approved AI Sandbox.
18+privacy: FAS (Faculty of Arts and Sciences) Office of Undergraduate Education guidance: Faculty must get documented permission from students before putting original student content into any generative AI tool. No confidential information can be loaded into generative AI systems since there is no expectation of privacy or confidentiality.
19+Evidence (en, fb0bf75a8ed5): Faculty must get documented permission from students before putting original student content into any generative AI tool, and students should be made aware of the risks of entering their original work into such tools. No confidential information can be loaded into GAI systems, since there is no expectation of privacy or confidentiality.
20+academic_integrity: HMS (Harvard Medical School) Academic and Research Integrity guidance: AI tools cannot be listed as authors on a paper. Authors should be transparent when AI tools are used and provide information about how AI tools were used.
21+Evidence (en, 5a0b3ca35d6c): AI Tools cannot be listed as an author on a paper. Authors should be transparent when AI tools are used and provide information about how AI tools were used.
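The preview above is insert-only because only the current snapshot is on record. Once paired historical snapshots exist, a true old/new diff could be produced with Python's standard-library `difflib`. This is a sketch; the snapshot text and file labels are illustrative, not the tracker's actual data:

```python
import difflib

# Hypothetical paired snapshots of a policy page (illustrative text only).
old_snapshot = [
    "# Harvard University AI policy record",
    "other: Confidential data should not be entered into public AI tools.",
]
new_snapshot = [
    "# Harvard University AI policy record",
    "other: Level 2 and above confidential data should not be entered",
    "into publicly-available generative AI tools.",
]

# unified_diff yields context lines, '-' removed lines, and '+' added lines,
# mirroring the +N/-N counts shown in the change summary above.
diff = list(difflib.unified_diff(
    old_snapshot, new_snapshot,
    fromfile="snapshot_old", tofile="snapshot_new",
    lineterm="",
))
for line in diff:
    print(line)
```

With only one snapshot per source, every content line appears as an addition, which is why the preview shows +20 −0.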

Claim changes

12 claim records

other

University-wide: Level 2 and above confidential data (including non-public research data, finance, HR, student records, medical information) should not be entered into publicly-available generative AI tools. Such data may only be entered into generative AI tools that have been assessed and approved by Harvard's Information Security and Data Privacy office.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

teaching

FAS (Faculty of Arts and Sciences) Office of Undergraduate Education policy: All faculty are required to inform students of the policies governing generative AI use in class. Faculty should post their AI policy on their Canvas site.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

procurement

University-wide: All vendor generative AI tools not currently offered by HUIT must be assessed for risk by Harvard's Information Security and Data Privacy office prior to use in Harvard work. Contact HUIT before procuring any generative AI tool.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

privacy

University-wide: AI meeting assistants (AI note takers or bots) should not be used in Harvard meetings, with the exception of approved tools with contractual protections including enterprise agreements with appropriate security and privacy protections, or tools as part of limited HUIT-directed pilot programs.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

other

University-wide: Users are responsible for any content they publish or share that includes AI-generated material. AI-generated content may be inaccurate, misleading, entirely fabricated (hallucinations), or contain copyrighted material.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

academic_integrity

HGSE (Harvard Graduate School of Education) school-level policy: Unless otherwise specified by the instructor, using generative AI to create all or part of an assignment (e.g., paper, memo, presentation, short response) and submitting it as one's own work violates the HGSE Academic Integrity Policy. Permissible uses include seeking clarification on concepts, brainstorming ideas, or generating scenarios that help contextualize learning.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

academic_integrity

HGSE (Harvard Graduate School of Education) school-level policy: For any permitted use of generative AI tools, students must acknowledge and document that use in their assignment submission by explaining what tool(s) were used, prompts provided, and how the output was integrated into the work. Direct citations must use proper citation format.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

privacy

HGSE (Harvard Graduate School of Education) school-level policy: It is forbidden to make personal recordings of any course meetings, with or without AI tool integrations. Uploading substantial course content is only allowable through the Harvard-approved AI Sandbox.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

privacy

FAS (Faculty of Arts and Sciences) Office of Undergraduate Education guidance: Faculty must get documented permission from students before putting original student content into any generative AI tool. No confidential information can be loaded into generative AI systems since there is no expectation of privacy or confidentiality.

Review: Agent reviewed · Confidence: 90% · Evidence: 1 · Languages: en

academic_integrity

HMS (Harvard Medical School) Academic and Research Integrity guidance: AI tools cannot be listed as authors on a paper. Authors should be transparent when AI tools are used and provide information about how AI tools were used.

Review: Agent reviewed · Confidence: 90% · Evidence: 1 · Languages: en

other

University-wide: Only Harvard-offered versions of generative AI tools carry stated data classification protections. Publicly-available versions of the same tools should not be used for Harvard work. Approved tools (Harvard AI Sandbox, Google Gemini, Microsoft Copilot Chat, ChatGPT Edu, Adobe Firefly) are approved for Level 3 data and below.

Review: Agent reviewed · Confidence: 90% · Evidence: 1 · Languages: en

teaching

University-wide: Faculty should be clear with students about their policies on permitted uses of generative AI in classes and on academic work. Students are encouraged to ask instructors for clarification about these policies as needed.

Review: Agent reviewed · Confidence: 85% · Evidence: 1 · Languages: en
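Each entry above repeats the same fields: category, scope, claim text, review status, confidence, evidence count, and languages. A minimal record model for these entries might look like the following sketch. The field names are illustrative; the tracker's real schema is not published:

```python
from dataclasses import dataclass, field

@dataclass
class ClaimRecord:
    """One source-backed claim record as displayed in the tracker.

    Field names are hypothetical, inferred from the rendered output.
    """
    category: str             # e.g. "privacy", "academic_integrity", "other"
    scope: str                # e.g. "University-wide", "HGSE school-level policy"
    claim: str                # normalized claim text
    review_status: str        # e.g. "Agent reviewed"
    confidence: int           # percent, e.g. 95
    evidence_count: int       # number of attached evidence quotes
    languages: list = field(default_factory=list)

record = ClaimRecord(
    category="privacy",
    scope="University-wide",
    claim="AI meeting assistants should not be used in Harvard meetings.",
    review_status="Agent reviewed",
    confidence=95,
    evidence_count=1,
    languages=["en"],
)
```

Modeling the records this way makes the review line ("Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en") a straightforward rendering of four fields.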

Source snapshots

12 source attributions

AI | Academic and Research Integrity

official_policy_page checked May 6, 2026

Snapshot hash
5a0b3ca35d6c341bf7e2c09bbbe76560699e6854b9379f341ce805aa07ea9002

Generative AI Tool Comparison

official_guidance checked May 6, 2026

Snapshot hash
9003257c800e0ac7fd5b5556b4fe2b08748cb6c70334cf9affa03215ab56f1f9
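The 64-character snapshot hashes above are consistent with SHA-256 hex digests of the captured page content, and the 12-character evidence identifiers in the diff preview (e.g. 9d196aae4d26) look like truncated prefixes of such digests. Both are assumptions, since the tracker's hashing scheme is not documented. A sketch under those assumptions:

```python
import hashlib

def snapshot_hash(page_text: str) -> str:
    """Full SHA-256 hex digest of captured page text (64 hex characters)."""
    return hashlib.sha256(page_text.encode("utf-8")).hexdigest()

def evidence_id(page_text: str, length: int = 12) -> str:
    """Short evidence identifier: assumed truncated prefix of the digest."""
    return snapshot_hash(page_text)[:length]

h = snapshot_hash("example snapshot content")
print(len(h), evidence_id("example snapshot content"))
```

If this assumption holds, the same evidence ID appearing on several claims (as 9d196aae4d26 does above) simply means those claims were extracted from one snapshot.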