# University of California, Riverside AI policy record
security_review: UCR's Provost guidelines state that generative AI tools that have not passed a campus security review may be used with public data only; for other data classifications, UCR points to secure tools including Google Gemini and Microsoft Copilot.
Evidence (en, 85dcf3c98da1): Generative AI tools which have not passed a campus security review may be used with public data only. For all other data classifications, UCR provides access to secure tools including Google Gemini and Microsoft Copilot.
teaching: UCR's Provost generative-AI guidelines say instructional uses of generative AI by instructors or students should aim to improve student learning and align with UCR's instructional mission.
Evidence (en, 85dcf3c98da1): Any use of generative AI in an instructional setting, by instructors or students, should aim to improve the learning experience for students and better position students for academic and post-graduation success.
procurement: UCR ITS lists UCR-supported AI tools by role and allowed data level, including Gemini, NotebookLM, Google AI Studio, Microsoft 365 Copilot, Vertex AI, Zoom AI Companion, The Grove, and a ChatGPT EDU offering that the page says is not currently available.
Evidence (en, fe06ed36a4ee): The AI tools comparison chart lists Tool/Platform, Description, Roles Allowed, Getting Access, Training Resources, Cost, and Allowed Data for tools including The Grove, Gemini, NotebookLM, Google AI Pro, Google AI Studio, Microsoft 365 Copilot, ChatGPT EDU, Vertex AI, and Zoom AI Companion.
academic_integrity: UCR's student-facing AI announcement says students should discuss generative-AI expectations with professors, use AI to assist or enhance rather than replace original work, avoid generating entire deliverables, and cite AI-generated content or data when used.
Evidence (en, 18952bd9a1a9): We encourage you to discuss with your professors for specific policies or expectations before engaging in the use of Generative AI resources on academic assignments, papers, tests, etc.
teaching: UCR XCITE advises instructors to discuss when AI may be used in coursework and how it should be cited, and its sample syllabus language treats uncited AI use as a potential academic-integrity issue.
Evidence (en, a1accf6824fe): Key points to discuss in your syllabus/ in class: If and when AI may be used to write a portion of homework or any other assignment; How to properly cite the use of any AI.
source_status: In the evidence reviewed here, UCR's public generative-AI guidance for instructional settings places course-level use decisions with the Instructor of Record rather than setting a single universal rule for student use.
Evidence (en, 85dcf3c98da1): In instructional settings, this means the Instructor of Record has broad latitude to determine whether and how generative AI may be used, provided this use is consistent with applicable policies and rules governing data security and instruction at UCR.
academic_integrity: UCR's general academic-integrity page defines academic misconduct to include using prohibited or inappropriate materials, plagiarizing work without appropriate credit, and collaborating without instructor permission.
Evidence (en, bf79cfe61bc2): Cheating: Fraud, deceit, or dishonesty in an academic assignment, or using or attempting to use materials, or assisting others in using materials that are prohibited or inappropriate in the context of the academic assignment or capstone in question.