Change log

University of Chicago

Source-check timeline, source snapshot hashes, claim review state, and a diff-style preview of current source-backed claim evidence.

Change summary

Current public record freshness and review state.

University of Chicago currently has 14 source-backed claim records and 4 official source attributions. Latest tracked change date: May 6, 2026.

This tracker is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.

Claim/evidence diff preview

Diff-style preview built from current public claim/evidence records. Full old/new source diffs require paired historical snapshots.
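When no paired historical snapshot exists, a preview like this reduces to an all-insertions diff. A minimal sketch of how a paired-snapshot diff could be produced — the `snapshot_diff` helper and its labels are illustrative, not part of the tracker:

```python
import difflib

def snapshot_diff(old_text: str, new_text: str, label: str) -> str:
    """Build a unified diff between two source snapshots.

    With an empty prior snapshot this degenerates to an all-insertions
    diff, which matches the "+N -0" style preview shown on this page.
    """
    old_lines = old_text.splitlines(keepends=True)
    new_lines = new_text.splitlines(keepends=True)
    return "".join(
        difflib.unified_diff(
            old_lines,
            new_lines,
            fromfile=f"{label} (old)",
            tofile=f"{label} (new)",
        )
    )
```

Diffing an empty old snapshot against the current one yields only `+` lines, which is why the preview below shows insertions and no deletions.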

University of Chicago current policy evidence

Inserted lines represent current public claim and evidence records in the source-backed dataset.

+20 -0
1 1 # University of Chicago AI policy record
2+ai_tool_treatment: The University of Chicago maintains a page listing approved, restricted, and unauthorized AI tools, with use conditions and review information for the university community.
3+Evidence (en, 40aaab755c6a): Approved and Restricted AI Tools If a proposed tool or use case isn't listed on this table, the next step depends on whether the tool requires payment for use. For paid tools, individuals should submit a request through the Service Now Procurement Intake. For free tools, if a researcher or staff member intends to use a generative AI tool with sensitive data, or in research contexts, please complete the Generative AI Tool Evaluation Form.
4+privacy: At the University of Chicago, use of confidential data with publicly available generative AI tools is prohibited without prior security and privacy review. This includes personally identifiable employee data, FERPA-covered student data, HIPAA-covered patient data, and may include research that is not yet publicly available.
5+Evidence (en, 764b5baf5e21): The use of confidential data with publicly available generative AI tools is prohibited without prior security and privacy review. This includes personally identifiable employee data, FERPA-covered student data, HIPAA-covered patient data, and may include research that is not yet publicly available.
6+privacy: At the University of Chicago, generative AI systems, applications, and software products that process, analyze, or move confidential data require a security review before they are acquired, even if the software is free.
7+Evidence (en, 764b5baf5e21): Generative AI systems, applications, and software products that process, analyze, or move confidential data require a security review before they are acquired, even if the software is free.
8+privacy: At the University of Chicago, use of University data by vendors to train or improve their AI models is not permitted.
9+Evidence (en, 764b5baf5e21): Data Use Restrictions: Use of University data by vendors to train or improve their models is not permitted.
10+privacy: At the University of Chicago, confidential, sensitive, or restricted data should not be used with generative AI tools unless the tool and the use have been reviewed and approved through the appropriate University process.
11+Evidence (en, 2c80278c2e99): Confidential data, sensitive data, or restricted data should not be used with generative AI tools unless the tool and the use have been reviewed and approved through the appropriate University process.
12+ai_tool_treatment: At the University of Chicago, even when an AI tool is listed as approved, it is not risk-free. Approval means the tool can be used under specific conditions, but users are still responsible for evaluating the sensitivity of their data and ensuring confidential, regulated, or contract-restricted information is not shared unless explicitly allowed.
13+Evidence (en, 40aaab755c6a): Please note: even when a tool is listed as approved, it is not risk-free. Approval simply means the tool can be used under specific conditions, but not that all data is safe to enter. Users are still responsible for evaluating the sensitivity of their data, understanding vendor limitations, and ensuring that confidential, regulated, or contract-restricted information is not shared with AI tools unless explicitly allowed.
14+ai_tool_treatment: At the University of Chicago, PhoenixAI is the university's internal generative AI platform; it is approved for general use and can be used for sensitive information with IRB approval.
15+Evidence (en, 40aaab755c6a): PhoenixAI | See PhoenixAI Service Usage Guidelines | General Use | Enterprise-supported | Can be used for sensitive information with IRB approval. | June 30, 2025
16+ai_tool_treatment: At the University of Chicago, ChatGPT 3.5 and ChatGPT 4.0 are approved only for data that is made publicly available by its source, with restrictions limiting use to non-sensitive information.
17+Evidence (en, 40aaab755c6a): ChatGPT 3.5 | Approved for data that is made publicly available by its source. | General Use | Free | Only for non-sensitive information | January 17, 2025 ChatGPT 4.0 | Approved for data that is made publicly available by its source. | General Use | Free | Only for non-sensitive information | January 17, 2025
18+privacy: At the University of Chicago, AI-generated content may be misleading or inaccurate, and it is the responsibility of the tool user to review the accuracy and ownership of any AI-generated content.
19+Evidence (en, 764b5baf5e21): AI-generated content may be misleading or inaccurate. Generative AI technology may create citations to content that does not exist. Responses from generative AI tools may contain content and materials from other authors and may be copyrighted. It is the responsibility of the tool user to review the accuracy and ownership of any AI-generated content.
20+privacy: At the University of Chicago, if a proposed AI tool use exceeds standard risk tolerance but is not prohibited by compliance regulations, a Risk Acceptance Letter may be prepared to document reviewed and accepted risks under specific conditions.
21+Evidence (en, 764b5baf5e21): Getting an exception through a Risk Acceptance Letter (RAL): If the proposed use is not prohibited by compliance regulations but exceeds standard risk tolerance, and the requester still wishes to proceed with the purchase, the request is escalated to senior leadership—the Chief Information Security Officer (CISO) and Chief Technology Officer (CTO), and the Chief Privacy Officer (CPO)—for guidance.

Claim changes

14 claim records

ai_tool_treatment

The University of Chicago maintains a page listing approved, restricted, and unauthorized AI tools, with use conditions and review information for the university community.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

privacy

At the University of Chicago, if a proposed AI tool use exceeds standard risk tolerance but is not prohibited by compliance regulations, a Risk Acceptance Letter may be prepared to document reviewed and accepted risks under specific conditions.

Review: Agent reviewed · Confidence: 90% · Evidence: 1 · Languages: en

privacy

At the University of Chicago, use of confidential data with publicly available generative AI tools is prohibited without prior security and privacy review. This includes personally identifiable employee data, FERPA-covered student data, HIPAA-covered patient data, and may include research that is not yet publicly available.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

privacy

At the University of Chicago, generative AI systems, applications, and software products that process, analyze, or move confidential data require a security review before they are acquired, even if the software is free.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

privacy

At the University of Chicago, use of University data by vendors to train or improve their AI models is not permitted.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

privacy

At the University of Chicago, confidential, sensitive, or restricted data should not be used with generative AI tools unless the tool and the use have been reviewed and approved through the appropriate University process.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

ai_tool_treatment

At the University of Chicago, even when an AI tool is listed as approved, it is not risk-free. Approval means the tool can be used under specific conditions, but users are still responsible for evaluating the sensitivity of their data and ensuring confidential, regulated, or contract-restricted information is not shared unless explicitly allowed.

Review: Agent reviewed · Confidence: 90% · Evidence: 1 · Languages: en

ai_tool_treatment

At the University of Chicago, PhoenixAI is the university's internal generative AI platform; it is approved for general use and can be used for sensitive information with IRB approval.

Review: Agent reviewed · Confidence: 90% · Evidence: 1 · Languages: en

ai_tool_treatment

At the University of Chicago, ChatGPT 3.5 and ChatGPT 4.0 are approved only for data that is made publicly available by its source, with restrictions limiting use to non-sensitive information.

Review: Agent reviewed · Confidence: 90% · Evidence: 1 · Languages: en

privacy

At the University of Chicago, AI-generated content may be misleading or inaccurate, and it is the responsibility of the tool user to review the accuracy and ownership of any AI-generated content.

Review: Agent reviewed · Confidence: 90% · Evidence: 1 · Languages: en

ai_tool_treatment

The University of Chicago provides a central hub at genai.uchicago.edu for information on generative AI tools, training, resources, and guidance for the university community.

Review: Agent reviewed · Confidence: 90% · Evidence: 1 · Languages: en

privacy

At the University of Chicago, AI transcription or assistant tools may not be used to secretly record or join meetings, per the Business Conduct Policy.

Review: Agent reviewed · Confidence: 90% · Evidence: 1 · Languages: en

privacy

At the University of Chicago, AI tools may not be used to generate harassing, discriminatory, or otherwise unlawful content, including the use of AI to create or alter images, audio, and videos, per the Policy on Harassment, Discrimination, and Sexual Misconduct.

Review: Agent reviewed · Confidence: 90% · Evidence: 1 · Languages: en

privacy

At the University of Chicago, entering sensitive data into AI tools without review and approval by security, privacy, and the appropriate data steward may create an unauthorized data disclosure that may violate University policy, federal and state law, sponsor or contract obligations, and data use agreements.

Review: Agent reviewed · Confidence: 90% · Evidence: 1 · Languages: en

Source snapshots

4 source attributions

Generative AI at UChicago

official_guidance · checked May 5, 2026

Snapshot hash
70fc1f89d7a4335ed37f5837642b63ddac7801408fe0a7bbf681139400111668
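The 64-hex snapshot hash above is consistent with a SHA-256 digest, and the 12-hex evidence identifiers in the preview (e.g. 40aaab755c6a) look like truncated digests. Both points are assumptions — the tracker does not publish its hashing or normalization scheme. A minimal sketch under those assumptions:

```python
import hashlib

def snapshot_hash(page_text: str) -> str:
    """Full 64-hex SHA-256 digest of a page snapshot.

    The normalization step (strip surrounding whitespace, encode as
    UTF-8) is an assumption; the tracker's actual canonicalization
    of fetched pages is not documented.
    """
    normalized = page_text.strip().encode("utf-8")
    return hashlib.sha256(normalized).hexdigest()

def evidence_id(page_text: str, length: int = 12) -> str:
    """Short evidence identifier, assumed to be a truncated snapshot hash."""
    return snapshot_hash(page_text)[:length]
```

Under this scheme, any change to the underlying source page yields a new snapshot hash, which is how a "checked" date with an unchanged hash can signal that the source was re-fetched but not modified.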