Fort Collins, United States

Colorado State University

Colorado State University is listed at QS 2026 rank 458 and has 8 source-backed AI policy claim records from 6 official source attributions. The public record preserves original-language evidence snippets, source URLs, snapshot hashes, confidence, and review state.

Citation-ready summary

As of this public record, University AI Policy Tracker lists Colorado State University as an agent-reviewed AI policy record last checked on May 16, 2026 and last changed on May 16, 2026. The record contains 8 source-backed claims, including 8 reviewed claims, from 6 official source attributions. Original-language evidence snippets and source URLs remain canonical, with public JSON available at https://eduaipolicy.org/api/public/v1/universities/colorado-state-university.json. The entity-level confidence is 92%. This tracker is not legal advice, not academic integrity advice, and not an official university statement unless the linked source is the university's own official page.

Claim coverage: 8 reviewed · Source language: en · Public JSON: /api/public/v1/universities/colorado-state-university.json
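The public JSON for this record can be retrieved with a short script. This is a minimal sketch: the URL pattern follows the path shown on this page, but the response schema (claim fields, confidence, review state) is assumed from the record's description rather than documented here.

```python
import json
import urllib.request

BASE = "https://eduaipolicy.org"

def record_url(slug: str) -> str:
    # v1 public contract path for a university record
    return f"{BASE}/api/public/v1/universities/{slug}.json"

def fetch_record(slug: str) -> dict:
    # Returns the parsed public JSON record; the exact field names
    # inside it are an assumption, so inspect the payload before reuse.
    with urllib.request.urlopen(record_url(slug)) as resp:
        return json.load(resp)

print(record_url("colorado-state-university"))
# → https://eduaipolicy.org/api/public/v1/universities/colorado-state-university.json
```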

Policy signals in this record

  • Evidence includes Privacy claims.
  • Evidence includes AI tool treatment claims.
  • Evidence includes Source status claims.
  • Evidence includes Academic integrity claims.
  • Evidence includes Security review claims.
  • Evidence includes Teaching claims.
  • Named AI services detected in public claims: ChatGPT, Microsoft Copilot, Claude, Gemini.
  • Disclosure, acknowledgment, citation, or attribution language appears in the public claim text.
Policy status: Reviewed evidence-backed record · Review: Agent reviewed · Evidence-backed claims: 8 · Reviewed: 8 · Candidate: 0 · Official sources: 6

This reference record summarizes visible public data only. Official sources and original-language evidence remain canonical; confidence is separate from review state.

This page is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.

Policy profile

Deterministic source-backed dimensions derived from this record's public claims.

Coverage score: 100/100 · Coverage label: broad public coverage · Review: Machine candidate · Analysis confidence: 75%

Policy profile rows are machine-candidate derived metadata. They are not final policy conclusions; inspect the linked claim evidence before reuse.

Analysis page-quality metadata is available at /api/public/v1/analysis/page-quality.json.

AI disclosure

Colorado State University has 1 source-backed public claim for AI disclosure; deterministic analysis status: recommended.

Status: Recommended · Review: Machine candidate · Confidence: 77% · Evidence: 1 · Sources: 1

Research guidance

Colorado State University has 1 source-backed public claim for research guidance; deterministic analysis status: recommended.

Status: Recommended · Review: Machine candidate · Confidence: 77% · Evidence: 1 · Sources: 1

Security and procurement

Colorado State University has 1 source-backed public claim for security and procurement; deterministic analysis status: restricted.

Status: Restricted · Review: Machine candidate · Confidence: 76% · Evidence: 1 · Sources: 1

Coverage score measures breadth of public, source-backed coverage only. It is not a policy quality, strictness, legal adequacy, safety, or compliance score.

Evidence-backed claims

8 reviewed evidence-backed public claims

Privacy

CSU's AI Tools page says other AI tools such as ChatGPT, Gemini, or Anthropic Claude may be used for non-sensitive public information only.

Review: Agent reviewed · Confidence: 92%

Normalized value: commercial_ai_public_information_only

Original evidence

Evidence 1
You may use other tools for non-sensitive, public information only. Many commercial AI tools have privacy statements that allow them to collect and store your data, and in many cases, they may use your input to train future models.

Privacy

CSU-GPT is described by CSU as operating inside CSU's Microsoft Azure environment, keeping data, prompts, and uploads inside the university's Microsoft Azure tenant.

Review: Agent reviewed · Confidence: 91%

Normalized value: csu_gpt_azure_tenant_data_handling

Original evidence

Evidence 1
Unlike public AI tools, CSU-GPT keeps all data, prompts, and uploads inside the university's Microsoft Azure tenant. That means your data never leaves CSU.

AI Tool Treatment

CSU's AI Tools page lists CSU-GPT, Microsoft Copilot Chat with CSU NetID, and Microsoft Teams Premium as currently approved tools for handling sensitive CSU data.

Review: Agent reviewed · Confidence: 90%

Normalized value: approved_sensitive_ai_tools

Original evidence

Evidence 1
Only three tools are currently approved for handling sensitive CSU data (such as research data, student information, or protected university records): CSU-GPT; Microsoft Copilot Chat (When logged in with your CSU NetID); Microsoft Teams Premium.

Source Status

The CSU System AI Governance Guidelines state that they apply to individuals handling institutional data or using AI tools in administrative operations, research, clinical, and educational activities across the CSU system.

Review: Agent reviewed · Confidence: 90%

Normalized value: system_ai_governance_guidelines_scope

Original evidence

Evidence 1
These guidelines apply to all individuals handling institutional data or using AI tools in administrative operations, research, clinical, and educational activities across the CSU system.

Academic Integrity

CSU's Student Resolution Center defines cheating to include unauthorized sources or assistance and instructor-prohibited behavior, and defines plagiarism as representing another's language, structure, images, ideas, or thoughts as one's own without proper acknowledgment.

Review: Agent reviewed · Confidence: 90%

Normalized value: academic_misconduct_cheating_plagiarism_definitions

Original evidence

Evidence 1
Cheating includes using unauthorized sources of information and providing or receiving unauthorized assistance on any form of academic work or engaging in any behavior specifically prohibited by the instructor in the course syllabus or class presentation. Plagiarism includes the copying of language, structure, images, ideas, or thoughts of another, and representing them as one's own without proper acknowledgment.

Security Review

For CSU System internal, confidential, and restricted data classifications, the AI Governance Guidelines say users should verify data classification, follow applicable policies, and use CSU-approved tools; restricted data also requires additional approval for AI use.

Review: Agent reviewed · Confidence: 89%

Normalized value: classified_data_ai_use_requires_approved_tools

Original evidence

Evidence 1
Level 2 (Internal): Verify data classification before use, adhere to approved CSU data and AI policies, use only CSU-approved tools... Level 4 (Restricted): Verify data classification before use... use only CSU-approved tools with the highest level of contractual data protection guarantees; requires additional approval for any AI use.

Teaching

CSU TILT guidance says AI syllabus statements should remain consistent with university and department policy on assessing student work and provide students clarity about instructor expectations.

Review: Agent reviewed · Confidence: 83%

Normalized value: ai_syllabus_statement_clarity_guidance

Original evidence

Evidence 1
However, it is important that they remain consistent with university and department policy on assessing student work. Finally, the most important part of including a statement is that it provides clarity to your students about your expectations.

Academic Integrity

An official CSU MTI teaching page states that work submitted for credit that was created by AI engines can be addressed under multiple areas of the Academic Misconduct section of the Student Conduct Code.

Review: Agent reviewed · Confidence: 78%

Normalized value: mti_ai_created_work_conduct_code_guidance

Original evidence

Evidence 1
Is work (essays, responses, code, images) created by an artificial intelligence engine still covered by our Student Conduct Code's language? Yes. Definitively. The Student Conduct Code was written to address behavior, not technologies. In addition, work submitted for credit that was created by AI-engines can be addressed using multiple areas of the Academic Misconduct section of the Student Conduct Code.

Candidate claims

0 machine or needs-review claims

Candidate claims are not final policy conclusions. They preserve source URL, source snapshot hash, evidence, confidence, and review state so the record can be audited before review.

Official sources

6 source attributions

AI Tools | AI @ CSU | Colorado State University

ai.colostate.edu

Snapshot hash
5bf38fe76129dc4319d3be18bb66c3a15c80035c3aaf3e49bc4639c79ef60702
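A snapshot hash like the one above can be re-checked against a saved copy of the source page. The sketch below assumes the hash is a SHA-256 digest of the raw snapshot bytes; that is an inference from its 64-hex-character length, not something the record states.

```python
import hashlib

# Recorded hash from the "Official sources" entry above.
RECORDED = "5bf38fe76129dc4319d3be18bb66c3a15c80035c3aaf3e49bc4639c79ef60702"

def snapshot_hash(snapshot_bytes: bytes) -> str:
    # SHA-256 hex digest of the snapshot payload (assumed scheme)
    return hashlib.sha256(snapshot_bytes).hexdigest()

def matches_record(snapshot_bytes: bytes, recorded: str = RECORDED) -> bool:
    # True only when the saved snapshot reproduces the recorded digest
    return snapshot_hash(snapshot_bytes) == recorded
```

If the digest does not match, the source page may have changed since the last check, which is what the change log below is meant to surface.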

Change log

Source-check timeline and diff-style claim/evidence preview.

View the public change record for this university, including source snapshot hashes, claim review states, and a diff-style preview of current source-backed evidence.

Last checked: May 16, 2026 · Last changed: May 16, 2026

Corrections and missing evidence

Corrections create review tasks and do not directly change this public record.

If an official source is missing, stale, moved, blocked, or incorrectly summarized, submit a source URL, policy change report, or institution correction for review. Corrections must preserve source URLs, source language, original evidence, review state, and audit history.
