Change log

University of Colorado Boulder

Source-check timeline, source snapshot hashes, claim review state, and a diff-style preview of current source-backed claim evidence.

Change summary

Current public record freshness and review state.

University of Colorado Boulder currently has 3 source-backed claim records and 3 official source attributions. Latest tracked change date: May 16, 2026.

This tracker is not legal advice, academic-integrity advice, or an official university statement unless a linked source is the university's own official page.

Claim/evidence diff preview

Diff-style preview built from current public claim/evidence records. Full old/new source diffs require paired historical snapshots.
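Once paired historical snapshots exist, a full old/new diff can be produced with standard tooling. A minimal sketch using Python's `difflib` (the function name, label, and sample text are illustrative, not part of the tracker's actual pipeline):

```python
import difflib


def snapshot_diff(old: str, new: str, label: str) -> str:
    """Build a unified diff between two stored snapshots of the same source."""
    return "".join(
        difflib.unified_diff(
            old.splitlines(keepends=True),
            new.splitlines(keepends=True),
            fromfile=f"{label} (previous snapshot)",
            tofile=f"{label} (current snapshot)",
        )
    )


# Illustrative snapshot pair for a hypothetical source page.
old = "AI tools must complete campus review.\n"
new = "AI tools and integrations must complete the ICT Review Process.\n"
print(snapshot_diff(old, new, "cu-boulder-oit"))
```

Lines removed from the old snapshot are prefixed `-` and lines added in the current one `+`, which is the same convention the preview below uses for inserted records.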

University of Colorado Boulder current policy evidence

Inserted lines represent current public claim and evidence records in the source-backed dataset.

+6 −0
1    # University of Colorado Boulder AI policy record
2 +  security_review: CU Boulder OIT states that AI tools and integrations are evaluated through the Information Technology Accessibility and Security Review Process and that tools or services not completing campus review breach campus guidelines.
3 +  Evidence (en, 7df157eaf672): CU Boulder requires that accessibility and security compliance provisions be included in all contracts or user agreements. AI tools and integrations are evaluated by CU Boulder's Information Technology Accessibility and Security Review Process, also known as the ICT Review Process. The ICT Review Process applies to purchases and adoptions of all information technology regardless of the cost or funding source. The use of tools and services that have not completed the campus review is considered a breach of campus guidelines.
4 +  teaching: CU Boulder Center for Teaching & Learning identifies generative AI as raising teaching questions about assessment design, classroom AI-use expectations, AI literacy, and ethical awareness.
5 +  Evidence (en, 39ba7d5c3892): Generative AI has introduced new considerations and challenges for educators. It raises important questions about how to design valid assessments of student learning, how to communicate expectations around appropriate AI use in the classroom, and how to support students in developing AI literacy and ethical awareness.
6 +  academic_integrity: CU Boulder Center for Teaching & Learning provides instructor-facing guidance on how generative AI complicates academic integrity and how instructors can address misconduct while supporting learning, fairness, and trust.
7 +  Evidence (en, 7fb9697d2efc): The proliferation of generative AI technologies has blurred the boundaries between original, assisted and unauthorized work, complicating how academic integrity is maintained in higher education. This guide synthesizes research and institutional practices to help instructors understand how AI may be reshaping student approaches to academic work and integrity, why students engage in misconduct, and how instructors can address misconduct in a way that supports learning, fairness and trust.
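The 12-hex-digit evidence IDs above (e.g. 7df157eaf672) look like truncated content hashes. A minimal sketch of one plausible derivation, assuming a truncated SHA-256 digest of the normalized evidence text; the tracker's actual hashing scheme is not documented here, so treat the function and its normalization step as hypothetical:

```python
import hashlib


def evidence_id(text: str, length: int = 12) -> str:
    """Hypothetical: derive a short evidence ID by hashing the
    whitespace-normalized evidence text and truncating the digest."""
    normalized = " ".join(text.split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()[:length]


# Any snapshot of the same evidence text yields the same short ID.
print(evidence_id("Generative AI has introduced new considerations and challenges for educators."))
```

A content-derived ID like this lets the tracker detect when a source's evidence text changes between snapshots without storing the full text twice.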

Claim changes

3 claim records

security_review

CU Boulder OIT states that AI tools and integrations are evaluated through the Information Technology Accessibility and Security Review Process and that tools or services not completing campus review breach campus guidelines.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

academic_integrity

CU Boulder Center for Teaching & Learning provides instructor-facing guidance on how generative AI complicates academic integrity and how instructors can address misconduct while supporting learning, fairness, and trust.

Review: Agent reviewed · Confidence: 89% · Evidence: 1 · Languages: en

teaching

CU Boulder Center for Teaching & Learning identifies generative AI as raising teaching questions about assessment design, classroom AI-use expectations, AI literacy, and ethical awareness.

Review: Agent reviewed · Confidence: 90% · Evidence: 1 · Languages: en

Source snapshots

3 source attributions