Chicago, United States

University of Chicago

The University of Chicago is ranked 13th in the QS 2026 rankings and has 14 source-backed AI policy claim records drawn from 4 official source attributions. The public record preserves original-language evidence snippets, source URLs, snapshot hashes, confidence scores, and review state.

Citation-ready overview

v1 public contract


Reviewed claims: 14 | Candidate claims: 0 | Official sources: 4

Candidate claims are source-backed records pending review. They are not final policy conclusions and are not legal or academic integrity advice.

Reviewed claims

14 reviewed public claims

AI Tool Treatment

The University of Chicago maintains a page listing approved, restricted, and unauthorized AI tools, with use conditions and review information for the university community.

Review: Agent reviewed | Confidence: 95%

Original evidence

Evidence 1
Approved and Restricted AI Tools If a proposed tool or use case isn't listed on this table, the next step depends on whether the tool requires payment for use. For paid tools, individuals should submit a request through the Service Now Procurement Intake. For free tools, if a researcher or staff member intends to use a generative AI tool with sensitive data, or in research contexts, please complete the Generative AI Tool Evaluation Form.

Privacy

At the University of Chicago, use of confidential data with publicly available generative AI tools is prohibited without prior security and privacy review. This includes personally identifiable employee data, FERPA-covered student data, HIPAA-covered patient data, and research that is not yet publicly available.

Review: Agent reviewed | Confidence: 95%

Original evidence

Evidence 1
The use of confidential data with publicly available generative AI tools is prohibited without prior security and privacy review. This includes personally identifiable employee data, FERPA-covered student data, HIPAA-covered patient data, and may include research that is not yet publicly available.

Privacy

At the University of Chicago, generative AI systems, applications, and software products that process, analyze, or move confidential data require a security review before they are acquired, even if the software is free.

Review: Agent reviewed | Confidence: 95%

Original evidence

Evidence 1
Generative AI systems, applications, and software products that process, analyze, or move confidential data require a security review before they are acquired, even if the software is free.

Privacy

At the University of Chicago, use of University data by vendors to train or improve their AI models is not permitted.

Review: Agent reviewed | Confidence: 95%

Original evidence

Evidence 1
Data Use Restrictions: Use of University data by vendors to train or improve their models is not permitted.

Privacy

At the University of Chicago, confidential, sensitive, or restricted data should not be used with generative AI tools unless the tool and the use have been reviewed and approved through the appropriate University process.

Review: Agent reviewed | Confidence: 95%

Original evidence

Evidence 1
Confidential data, sensitive data, or restricted data should not be used with generative AI tools unless the tool and the use have been reviewed and approved through the appropriate University process.

AI Tool Treatment

At the University of Chicago, even when an AI tool is listed as approved, it is not risk-free. Approval means the tool can be used under specific conditions, but users are still responsible for evaluating the sensitivity of their data and ensuring confidential, regulated, or contract-restricted information is not shared unless explicitly allowed.

Review: Agent reviewed | Confidence: 90%

Original evidence

Evidence 1
Please note: even when a tool is listed as approved, it is not risk-free. Approval simply means the tool can be used under specific conditions, but not that all data is safe to enter. Users are still responsible for evaluating the sensitivity of their data, understanding vendor limitations, and ensuring that confidential, regulated, or contract-restricted information is not shared with AI tools unless explicitly allowed.

AI Tool Treatment

At the University of Chicago, PhoenixAI is the university's internal generative AI platform; it is approved for general use and can be used for sensitive information with IRB approval.

Review: Agent reviewed | Confidence: 90%

Original evidence

Evidence 1
PhoenixAI | See PhoenixAI Service Usage Guidelines | General Use | Enterprise-supported | Can be used for sensitive information with IRB approval. | June 30, 2025

AI Tool Treatment

At the University of Chicago, ChatGPT 3.5 and ChatGPT 4.0 are approved only for data that is made publicly available by its source, with restrictions limiting use to non-sensitive information.

Review: Agent reviewed | Confidence: 90%

Original evidence

Evidence 1
ChatGPT 3.5 | Approved for data that is made publicly available by its source. | General Use | Free | Only for non-sensitive information | January 17, 2025 ChatGPT 4.0 | Approved for data that is made publicly available by its source. | General Use | Free | Only for non-sensitive information | January 17, 2025

Privacy

At the University of Chicago, AI-generated content may be misleading or inaccurate, and it is the responsibility of the tool user to review the accuracy and ownership of any AI-generated content.

Review: Agent reviewed | Confidence: 90%

Original evidence

Evidence 1
AI-generated content may be misleading or inaccurate. Generative AI technology may create citations to content that does not exist. Responses from generative AI tools may contain content and materials from other authors and may be copyrighted. It is the responsibility of the tool user to review the accuracy and ownership of any AI-generated content.

Privacy

At the University of Chicago, if a proposed AI tool use exceeds standard risk tolerance but is not prohibited by compliance regulations, a Risk Acceptance Letter may be prepared to document reviewed and accepted risks under specific conditions.

Review: Agent reviewed | Confidence: 90%

Original evidence

Evidence 1
Getting an exception through a Risk Acceptance Letter (RAL): If the proposed use is not prohibited by compliance regulations but exceeds standard risk tolerance, and the requester still wishes to proceed with the purchase, the request is escalated to senior leadership—the Chief Information Security Officer (CISO) and Chief Technology Officer (CTO), and the Chief Privacy Officer (CPO)—for guidance.

AI Tool Treatment

The University of Chicago provides a central hub at genai.uchicago.edu for information on generative AI tools, training, resources, and guidance for the university community.

Review: Agent reviewed | Confidence: 90%

Original evidence

Evidence 1
Generative artificial intelligence (AI) is a quickly evolving technology that offers capabilities to enhance teaching and learning, research, and administrative work at the University of Chicago. We want to provide the UChicago community with information on the latest tools integrating generative AI, as well as training, resources, and guidance.

Privacy

At the University of Chicago, AI transcription or assistant tools may not be used to secretly record or join meetings, per the Business Conduct Policy.

Review: Agent reviewed | Confidence: 90%

Original evidence

Evidence 1
Business Conduct Policy: Staff must act with honesty and integrity, safeguard confidential information, and prevent unauthorized disclosures. Prohibits recording conversations without consent. AI transcription/assistant tools may not be used to secretly record or join meetings.

Privacy

At the University of Chicago, AI tools may not be used to generate harassing, discriminatory, or otherwise unlawful content, including the use of AI to create or alter images, audio, and videos, per the Policy on Harassment, Discrimination, and Sexual Misconduct.

Review: Agent reviewed | Confidence: 90%

Original evidence

Evidence 1
Policy on Harassment, Discrimination, and Sexual Misconduct: Prohibits unlawful harassment, discrimination, and sexual misconduct. AI tools may not be used to generate harassing, discriminatory, or otherwise unlawful content. This includes the use of AI to create or alter images, audio, and videos.

Privacy

At the University of Chicago, entering sensitive data into AI tools without review and approval by security, privacy, and the appropriate data steward may create an unauthorized data disclosure; such disclosures may violate University policy, federal and state law, sponsor or contract obligations, and data use agreements.

Review: Agent reviewed | Confidence: 90%

Original evidence

Evidence 1
Entering sensitive data into AI tools without review and approval by security, privacy, and the appropriate data steward may create an unauthorized data disclosure. Such disclosures may violate University policy, federal and state law, sponsor or contract obligations, and data use agreements.

Candidate claims

0 machine or needs-review claims

Candidate claims are not final policy conclusions. They preserve source URL, source snapshot hash, evidence, confidence, and review state so the record can be audited before review.
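The audit fields listed above (source URL, snapshot hash, evidence, confidence, review state) could be modeled, for illustration only, as a small record type. The class and field names below are assumptions for this sketch, not the site's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class ClaimRecord:
    # Illustrative field names; the real public-record schema is not published here.
    claim: str             # normalized claim text
    category: str          # e.g. "privacy" or "ai_tool_treatment"
    source_url: str        # official source backing the claim
    snapshot_hash: str     # hash of the archived source snapshot
    confidence: float      # 0.0-1.0 confidence score
    review_state: str      # e.g. "agent_reviewed" or "needs_review"
    evidence: list[str] = field(default_factory=list)  # original-language snippets

    def is_auditable(self) -> bool:
        # A record can be audited only if it retains a source URL,
        # a snapshot hash, and at least one evidence snippet.
        return bool(self.source_url and self.snapshot_hash and self.evidence)
```

A record missing any of those three audit fields would fail `is_auditable()` and could be held back from review.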

Official sources

4 source attributions

Generative AI at UChicago

genai.uchicago.edu

Snapshot hash
70fc1f89d7a4335ed37f5837642b63ddac7801408fe0a7bbf681139400111668
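The snapshot hash above is 64 hexadecimal characters, consistent with a SHA-256 digest. Assuming that is the algorithm in use (an inference from the digest length, not stated in the source), an archived snapshot could be checked against the published hash like this; the function name is illustrative:

```python
import hashlib

def verify_snapshot(snapshot_bytes: bytes, expected_hex: str) -> bool:
    """Return True if the SHA-256 digest of the snapshot bytes
    matches the published snapshot hash (case-insensitive hex)."""
    digest = hashlib.sha256(snapshot_bytes).hexdigest()
    return digest == expected_hex.lower()
```

Any change to the archived bytes changes the digest, so a matching hash confirms the evidence snippet is being audited against the same snapshot that was originally recorded.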