Cambridge, United States

Harvard University

Harvard University is listed at rank 5 in QS 2026 and has 12 source-backed AI policy claim records drawn from 12 official source attributions. The public record preserves original-language evidence snippets, source URLs, snapshot hashes, confidence scores, and review state.

Citation-ready overview

Reviewed claims: 12 · Candidate claims: 0 · Official sources: 12

Candidate claims are source-backed records pending review. They are not final policy conclusions and are not legal or academic integrity advice.

Reviewed claims

12 reviewed public claims

Other

University-wide: Level 2 and above confidential data (including non-public research data, finance, HR, student records, medical information) should not be entered into publicly-available generative AI tools. Such data may only be entered into generative AI tools that have been assessed and approved by Harvard's Information Security and Data Privacy office.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
You should not enter data classified as confidential (Level 2 and above, including non-public research data, finance, HR, student records, medical information, etc.) into publicly-available generative AI tools, in accordance with the University's Information Security Policy. Information shared with generative AI tools using default settings is not private and could expose proprietary or sensitive information to unauthorized parties. Level 2 and above confidential data must only be entered into generative AI tools that have been assessed and approved for such use by Harvard's Information Security and Data Privacy office.

Teaching

FAS (Faculty of Arts and Sciences) Office of Undergraduate Education policy: All faculty are required to inform students of the policies governing generative AI use in class. Faculty should post their AI policy on their Canvas site.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
All faculty are required to inform students of the policies governing generative AI use in class. ... Once you decide on a policy, make sure you articulate it clearly for your students, so that they know what is expected of them. More specifically, you should post your policy on your Canvas site.

Procurement

University-wide: All vendor generative AI tools not currently offered by HUIT must be assessed for risk by Harvard's Information Security and Data Privacy office prior to use in Harvard work. Contact HUIT before procuring any generative AI tool.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
If you are considering procuring a generative AI tool not currently offered or have questions, please contact HUIT. All vendor generative AI tools must be assessed for risk by Harvard's Information Security and Data Privacy office prior to use in Harvard work.

Privacy

University-wide: AI meeting assistants (AI note takers or bots) should not be used in Harvard meetings, with the exception of approved tools with contractual protections including enterprise agreements with appropriate security and privacy protections, or tools as part of limited HUIT-directed pilot programs.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
AI meeting assistants should not be used in Harvard meetings, with the exception of approved tools with contractual protections: Use only AI assistants for which Harvard has an enterprise agreement with the vendor including appropriate security and privacy protections, including: Approved tools as part of limited HUIT-directed pilot programs to evaluate the use of AI assistants within the Harvard environment.

Other

University-wide: Users are responsible for any content they publish or share that includes AI-generated material. AI-generated content may be inaccurate, misleading, entirely fabricated (hallucinations), or contain copyrighted material.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
AI-generated content can be inaccurate, misleading, or entirely fabricated (sometimes called "hallucinations") or may contain copyrighted material. You are responsible for any content that you publish or share that includes AI-generated material.

Academic Integrity

HGSE (Harvard Graduate School of Education) school-level policy: Unless otherwise specified by the instructor, using generative AI to create all or part of an assignment (e.g., paper, memo, presentation, short response) and submitting it as one's own work violates the HGSE Academic Integrity Policy. Permissible uses include seeking clarification on concepts, brainstorming ideas, or generating scenarios that help contextualize learning.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
Unless otherwise specified by your instructor, it is a violation of the HGSE Academic Integrity Policy to use generative AI to create all or part of an assignment for a course (e.g., a paper, memo, presentation, or short response) and submit it as your own. Permissible uses of generative AI in HGSE coursework include seeking clarification on concepts, brainstorming ideas, or generating scenarios that help contextualize what you are learning.

Academic Integrity

HGSE (Harvard Graduate School of Education) school-level policy: For any permitted use of generative AI tools, students must acknowledge and document that use in their assignment submission by explaining what tool(s) were used, prompts provided, and how the output was integrated into the work. Direct citations must use proper citation format.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
For any permitted use of GenAI tools, you must acknowledge and document that use in your assignment submission by explaining what tool(s) you used, prompts you provided (if applicable), and how you integrated the output into your work. If you cite directly from the tool, use proper citation format to credit the source.

Privacy

HGSE (Harvard Graduate School of Education) school-level policy: It is forbidden to make personal recordings of any course meetings, with or without AI tool integrations. Uploading substantial course content is only allowable through the Harvard-approved AI Sandbox.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
It is forbidden to make your own recording of any course meetings, with or without AI tool integrations. If you require or would prefer that course meetings be recorded, discuss this request with your instructor. Uploading any substantial course content — including text, video, readings, discussion-board pages, or audio recordings — is only allowable through the Harvard-approved AI Sandbox.

Privacy

FAS (Faculty of Arts and Sciences) Office of Undergraduate Education guidance: Faculty must get documented permission from students before putting original student content into any generative AI tool. No confidential information can be loaded into generative AI systems since there is no expectation of privacy or confidentiality.

Review: Agent reviewed · Confidence: 90%

Original evidence

Evidence 1
Faculty must get documented permission from students before putting original student content into any generative AI tool, and students should be made aware of the risks of entering their original work into such tools. No confidential information can be loaded into GAI systems, since there is no expectation of privacy or confidentiality.

Academic Integrity

HMS (Harvard Medical School) Academic and Research Integrity guidance: AI tools cannot be listed as authors on a paper. Authors should be transparent when AI tools are used and provide information about how AI tools were used.

Review: Agent reviewed · Confidence: 90%

Original evidence

Evidence 1
AI Tools cannot be listed as an author on a paper. Authors should be transparent when AI tools are used and provide information about how AI tools were used.

Other

University-wide: Only Harvard-offered versions of generative AI tools carry the stated data classification protections; publicly-available versions of the same tools should not be used for Harvard work. The Harvard-offered tools (Harvard AI Sandbox, Google Gemini, Microsoft Copilot Chat, ChatGPT Edu, Adobe Firefly) are approved for Level 3 data and below.

Review: Agent reviewed · Confidence: 90%

Original evidence

Evidence 1
Note: all data classification levels listed below apply only to the Harvard-offered versions of these tools and not to publicly-available versions of these tools (which should not be used for Harvard work). Harvard AI Sandbox - ... Level 3 data and below. Google Gemini ... Level 3 data and below. Microsoft Copilot Chat ... Level 3 data and below. OpenAI ChatGPT Edu ... Level 3 data and below. Adobe Firefly ... Level 3 data and below.

Teaching

University-wide: Faculty should be clear with students about their policies on permitted uses of generative AI in classes and on academic work. Students are encouraged to ask instructors for clarification about these policies as needed.

Review: Agent reviewed · Confidence: 85%

Original evidence

Evidence 1
Faculty should be clear with students they're teaching and advising about their policies on permitted uses, if any, of generative AI in classes and on academic work. Students are also encouraged to ask their instructors for clarification about these policies as needed.

Candidate claims

0 machine or needs-review claims

Candidate claims are not final policy conclusions. They preserve source URL, source snapshot hash, evidence, confidence, and review state so the record can be audited before review.

Official sources

12 source attributions

AI | Academic and Research Integrity

ari.hms.harvard.edu

Snapshot hash
5a0b3ca35d6c341bf7e2c09bbbe76560699e6854b9379f341ce805aa07ea9002
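
The snapshot hash above is a 64-character hex digest, which is consistent with SHA-256. As a minimal sketch of how such a record could be audited (assuming the hash is the SHA-256 digest of the archived snapshot bytes; the registry's actual hashing scheme is not specified here, and `matches_record` is a hypothetical helper):

```python
import hashlib


def snapshot_hash(snapshot_bytes: bytes) -> str:
    """Fingerprint an archived source snapshot (assumed SHA-256)."""
    return hashlib.sha256(snapshot_bytes).hexdigest()


def matches_record(snapshot_bytes: bytes, recorded_hash: str) -> bool:
    """Check a locally archived snapshot against the hash stored in a claim record."""
    return snapshot_hash(snapshot_bytes) == recorded_hash.lower()


# Example: fingerprint a locally saved copy of a source page.
digest = snapshot_hash(b"<html>archived policy page</html>")
print(len(digest))  # a SHA-256 hex digest is always 64 characters
```

Recomputing the digest over the archived bytes and comparing it to the stored value lets a reviewer confirm the evidence snippet was taken from an unmodified snapshot.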