Change log

University of California, Berkeley (UCB)

Source-check timeline, source snapshot hashes, claim review state, and a diff-style preview of current source-backed claim evidence.

Change summary

Current public record freshness and review state.

University of California, Berkeley (UCB) currently has 22 source-backed claim records and 5 official source attributions. Latest tracked change date: May 6, 2026.

This tracker is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.

Claim/evidence diff preview

Diff-style preview built from current public claim/evidence records. Full old/new source diffs require paired historical snapshots.

University of California, Berkeley (UCB) current policy evidence

Inserted lines represent current public claim and evidence records in the source-backed dataset.

+20 −0
1 1 # University of California, Berkeley (UCB) AI policy record
2+other: UC Berkeley warns that individuals who accept click-through agreements for AI tools (such as OpenAI and ChatGPT terms of use) without delegated signature authority may face personal liability, including responsibility for compliance with terms and conditions.
3+Evidence (en, 53fb3a36f07d): Certain generative AI tools use click-through agreements. Click-through agreements, including OpenAI and ChatGPT terms of use, are contracts. Individuals who accept click-through agreements without delegated signature authority may face personal consequences, including responsibility for compliance with terms and conditions.
4+other: UC Berkeley requires researchers to comply with varying license agreement terms before using or training AI tools with materials acquired from library-licensed resources or databases. Violations can result in personal liability and campus-wide loss of access to critical research resources.
5+Evidence (en, 53fb3a36f07d): Before using or training AI tools with materials acquired from Library-licensed resources or databases, researchers must comply with varying license agreement terms. Violations can result in personal liability and campus-wide loss of access to critical research resources.
6+other: UC Berkeley states that use of generative AI tools should be consistent with UC Berkeley's Principles of Community and the UC Principles of Responsible AI.
7+Evidence (en, 53fb3a36f07d): In all cases, use should be consistent with UC Berkeley's Principles of Community and the UC Principles of Responsible AI.
8+other: The UC Berkeley Academic Senate recommends that all faculty include a clear statement on their syllabus about course expectations regarding the use of Google Gemini or any other generative AI tool for course-related work. In the absence of such a statement, students may be more likely to use these technologies inappropriately.
9+Evidence (en, c6786c351541): We recommend that all faculty include a clear statement on their syllabus about course expectations regarding the use of Google Gemini or any other GenAI tool for course-related work. In the absence of such a statement, students may be more likely to use these technologies inappropriately or fail to utilize them effectively as a learning tool.
10+other: The UC Berkeley Academic Senate states that generative AI detection tools are increasingly less accurate and that there are no validated generative AI detection tools available.
11+Evidence (en, c6786c351541): GenAI detection tools are increasingly less accurate; there are no validated GenAI detection tools.
12+other: The UC Berkeley Academic Senate provides three sample syllabus statement frameworks for faculty: 'Full AI' (GenAI required), 'Some AI' (limited permitted use with restrictions), and 'No AI' (all GenAI use prohibited). Faculty should modify these to fit their course requirements.
13+Evidence (en, c6786c351541): We provide three sample statements. Instructors should modify them to fit their course requirements. The three statements include the two extremes, with the most and least GenAI use. We also include a third option that is approximately in the middle between the two.
14+other: The UC Berkeley Academic Senate recommends that for assignments where GenAI is not permitted, instructors should adopt enforcement mechanisms such as in-person proctored exams, an additional oral exam component, or a written statement of academic integrity, since no validated GenAI detection tools exist.
15+Evidence (en, c6786c351541): GenAI detection tools are increasingly less accurate; there are no validated GenAI detection tools. Therefore, assignments or learning activities where GenAI is not permitted should consider adopting one or more of the following solutions: Written Statement of Academic Integrity; In-person proctored exams/activities; An additional interview component (or oral exam) to an assignment where students are graded on an explanation of their work.
16+other: The UC Berkeley Academic Senate's 'Some AI' syllabus framework requires students to include an acknowledgement of their use of any generative AI system in submitted work, along with the prompts used and how the output was utilized.
17+Evidence (en, c6786c351541): When assignments in the course permit or incorporate the use of GenAI tools, the assignment will ask you to include an acknowledgement of your use of any type of GenAI in your submitted work and share the prompts and outputs utilized at the time of submission. The suggested format is as follows: I acknowledge the use of [insert AI system(s) and link] to [specific use of GenAI]. The prompts used include [list of prompts]. The output from these prompts was used to [explain the use].
18+other: At UC Berkeley, publicly-available information classified as Protection Level P1 may be freely used in generative AI tools.
19+Evidence (en, 53fb3a36f07d): Publicly-available information (Protection Level P1) can be used in generative AI tools.
20+other: The UC Berkeley Academic Senate advises that for assignments where instructors encourage or require GenAI tools, instructors must ensure students have access to the necessary computing resources. If non-campus-sanctioned resources are required, the instructor is responsible for providing access.
21+Evidence (en, c6786c351541): For any assignments where the instructor encourages or requires the use of GenAI tools, instructors should ensure that students have access to the necessary computing resources to run those GenAI tools. If non-campus-sanctioned resources are required, it is the instructor's responsibility to provide access to those resources.

Claim changes

22 claim records

other

UC Berkeley warns that individuals who accept click-through agreements for AI tools (such as OpenAI and ChatGPT terms of use) without delegated signature authority may face personal liability, including responsibility for compliance with terms and conditions.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

other

UC Berkeley requires researchers to comply with varying license agreement terms before using or training AI tools with materials acquired from library-licensed resources or databases. Violations can result in personal liability and campus-wide loss of access to critical research resources.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

other

UC Berkeley states that use of generative AI tools should be consistent with UC Berkeley's Principles of Community and the UC Principles of Responsible AI.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

other

The UC Berkeley Academic Senate recommends that all faculty include a clear statement on their syllabus about course expectations regarding the use of Google Gemini or any other generative AI tool for course-related work. In the absence of such a statement, students may be more likely to use these technologies inappropriately.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

other

The UC Berkeley Academic Senate states that generative AI detection tools are increasingly less accurate and that there are no validated generative AI detection tools available.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

other

The UC Berkeley Academic Senate provides three sample syllabus statement frameworks for faculty: 'Full AI' (GenAI required), 'Some AI' (limited permitted use with restrictions), and 'No AI' (all GenAI use prohibited). Faculty should modify these to fit their course requirements.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

other

The UC Berkeley Academic Senate recommends that for assignments where GenAI is not permitted, instructors should adopt enforcement mechanisms such as in-person proctored exams, an additional oral exam component, or a written statement of academic integrity, since no validated GenAI detection tools exist.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

other

The UC Berkeley Academic Senate's 'Some AI' syllabus framework requires students to include an acknowledgement of their use of any generative AI system in submitted work, along with the prompts used and how the output was utilized.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

other

At UC Berkeley, publicly-available information classified as Protection Level P1 may be freely used in generative AI tools.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

other

The UC Berkeley Academic Senate advises that for assignments where instructors encourage or require GenAI tools, instructors must ensure students have access to the necessary computing resources. If non-campus-sanctioned resources are required, the instructor is responsible for providing access.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

other

UC Berkeley requires users to use UC-licensed AI tools rather than individual consumer accounts to benefit from UC's contractual data protections when working with information more sensitive than Protection Level P1.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

other

At UC Berkeley, AI tools procured by individual units must adhere to the approved Protection Level limitations advised by that unit, and units should clearly advise staff and users of the appropriate use and Protection Level limitations of their AI tools.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

other

UC Berkeley prohibits the use of generative AI tools to complete academic work in a manner not allowed by the instructor.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

other

UC Berkeley prohibits entering personal, confidential, proprietary, or otherwise sensitive information classified as Protection Level P2, P3, or P4 into generative AI tools, unless specifically allowed under UC's negotiated contracts with AI providers.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

other

UC Berkeley prohibits entering FERPA-protected student records, non-public instructional materials, and proprietary or unpublished research into generative AI tools.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

other

UC Berkeley requires that any new use of generative AI in studies or work must receive approval from the instructor or responsible unit head, and users should complete the AI Essentials Training and consult the CERC-AIR committee.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

other

UC Berkeley warns that AI use involving highly-consequential automated decision-making requires extreme caution and should not be employed without prior consultation with appropriate campus entities including the responsible unit head. Examples include legal analysis, recruitment/personnel decisions, replacing represented employees, facial recognition security tools, and grading or assessment of student work.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

other

UC Berkeley's Office of Ethics, Risk and Compliance provides centralized resources and guidance on the ethical and appropriate use of artificial intelligence, specifically generative AI, with a focus on privacy and compliance with existing laws and policies.

Review: Agent reviewed · Confidence: 90% · Evidence: 1 · Languages: en

other

UC Berkeley states that units offering AI tools separately from campus or systemwide agreements should clearly advise staff and users of the appropriate use and Protection Level limitations of those tools.

Review: Agent reviewed · Confidence: 90% · Evidence: 1 · Languages: en

other

UC Berkeley offers an 'AI Essentials' training for employees, an approximately 30-minute course covering foundational AI concepts, UC policies regarding AI tool usage, and opportunities for application in higher education.

Review: Agent reviewed · Confidence: 90% · Evidence: 1 · Languages: en

other

The University of California system has established Responsible AI Principles comprising eight principles: Appropriateness; Transparency; Accuracy, Reliability and Safety; Fairness and Non-Discrimination; Privacy and Security; Human Values; Shared Benefit and Prosperity; and Accountability.

Review: Agent reviewed · Confidence: 85% · Evidence: 1 · Languages: en

other

UC Berkeley has AI risk assessment pre-screening questions that employees can use to gauge the level of risk involved for an AI use case where AI is integrated into a product, service, or feature at the university. Depending on the risk level determined, the CERC-AIR subcommittee may be engaged for a broader risk assessment.

Review: Agent reviewed · Confidence: 85% · Evidence: 1 · Languages: en

Source snapshots

5 source attributions

Guidance on the use of AI | Berkeley AI Hub

official_guidance · checked May 6, 2026

Snapshot hash
2bf26ddd14aad49a54c68a5cc18bca60e80b2299496a508d8d24a1846dbeaa37
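The 64-character hex snapshot hash above is consistent with a SHA-256 digest of the captured source page, and the 12-character evidence IDs (e.g. 53fb3a36f07d) are consistent with truncated digests. A minimal sketch of how such identifiers could be derived, assuming SHA-256 over the raw snapshot bytes (the actual hashing scheme used by this tracker is not documented here):

```python
import hashlib

def snapshot_hash(snapshot_bytes: bytes) -> str:
    # Full 64-character hex digest, as shown in the "Snapshot hash" field.
    return hashlib.sha256(snapshot_bytes).hexdigest()

def evidence_id(snapshot_bytes: bytes, length: int = 12) -> str:
    # Short identifier, as seen in "Evidence (en, 53fb3a36f07d)" labels.
    # Truncation length is an assumption based on the IDs shown above.
    return snapshot_hash(snapshot_bytes)[:length]

# Placeholder bytes standing in for a fetched source page.
page = b"<html>...</html>"
print(snapshot_hash(page))
print(evidence_id(page))
```

Hashing the stored bytes lets a reader verify that a quoted evidence passage matches the snapshot it was extracted from, without republishing the full page.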