Open, evidence-backed AI policy records for public reuse.
Change log
Source-check timeline, source snapshot hashes, claim review state, and a diff-style preview of current source-backed claim evidence.
Current public record freshness and review state.
Massachusetts Institute of Technology (MIT) currently has 8 source-backed claim records and 4 official source attributions. Latest tracked change date: May 6, 2026.
This tracker is not legal advice, academic integrity advice, or an official university statement unless a linked source is the university's own official page.
Diff-style preview built from current public claim/evidence records. Full old/new source diffs require paired historical snapshots.
Inserted lines below represent the current public claim and evidence records in the source-backed dataset.
8 claim records
IS&T recommends that MIT community members consult with IS&T before purchasing or using generative AI tools, and recommends using tools already licensed by IS&T for the MIT community.
MIT advises community members to disclose the use of generative AI tools for all academic, educational, and research-related uses, and not to publish research results relying on AI-generated content without disclosing the nature of such use.
No generative AI tools, including those licensed by IS&T, are approved for use with High Risk MIT information. Additionally, MIT does not recommend using publicly available GenAI tools not subject to an Institute licensing agreement for MIT research and educational activities, even with Low Risk or Medium Risk information.
Use of generative AI tools at MIT must comply with all applicable federal and state laws and orders (including FERPA, HIPAA, Massachusetts Data Protection Standards, export control laws, and the Executive Order on Safe, Secure, and Trustworthy Development and Use of AI), Institute policies (including 10.1 Academic and Research Misconduct, 11.0 Privacy and Disclosure of Personal Information, and 13.0 Information Policies), Information Protection guidelines, and the Institute's Written Information Security Program (WISP), plus any additional policies established by the user's department, lab, center, or institute (DLCI).
MIT holds users responsible for the accuracy of any information they publish, including AI-generated content. Users must be aware that AI-generated information may be inaccurate, incomplete, misleading, biased, fabricated, or contain third-party intellectual property.
MIT maintains a list of approved generative AI tools licensed by IS&T for use by the MIT community. Only these tools are approved for use with low- and medium-risk information, and any tool not on the list requires contacting ai-guidance@mit.edu for assessment before use or purchase. No generative AI tools are approved for use with High Risk MIT information.
MIT prohibits the use of generative AI for purposes that may require in-depth risk assessments without prior consultation with ai-guidance@mit.edu. Such purposes include recruitment and hiring of employees, evaluating student academic performance, making investment decisions, and complaint and dispute resolution.
MIT departments, labs, centers, and institutes (DLCIs) already using a generative AI tool or service must ensure that the tool complies with all Institute policies and Information Protection guidelines, and must contact ai-guidance@mit.edu for consultation or assessment if needed.
4 source attributions
official_guidance checked May 6, 2026
official_guidance checked May 6, 2026
official_guidance checked May 6, 2026
official_guidance checked May 6, 2026