Berkeley, United States

University of California, Berkeley (UCB)

University of California, Berkeley (UCB) is listed at QS 2026 rank 17. It has 22 source-backed AI policy claim records drawn from 5 official source attributions. The public record preserves original-language evidence snippets, source URLs, snapshot hashes, confidence scores, and review state.

Citation-ready overview

v1 public contract


Reviewed claims: 22 · Candidate claims: 0 · Official sources: 5

Candidate claims are source-backed records pending review. They are not final policy conclusions and are not legal or academic integrity advice.
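For readers who want to work with the record programmatically, below is a minimal sketch of what one claim record could look like, given the fields named above (evidence snippets, source URL, snapshot hash, confidence, review state). The field and class names are illustrative assumptions, not the published schema of this public record.

    # Illustrative sketch only: names and types are assumptions, not the actual schema.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class EvidenceSnippet:
        text: str             # original-language evidence snippet
        source_url: str       # URL of the official source
        snapshot_sha256: str  # hash of the archived source snapshot

    @dataclass
    class ClaimRecord:
        claim_text: str                     # the paraphrased policy claim
        evidence: List[EvidenceSnippet] = field(default_factory=list)
        confidence: float = 0.0             # e.g. 0.95 for "Confidence: 95%"
        review_state: str = "needs-review"  # e.g. "agent-reviewed", "machine", "needs-review"

    # Example mirroring one reviewed claim from this page:
    record = ClaimRecord(
        claim_text=("UC Berkeley prohibits the use of generative AI tools to complete "
                    "academic work in a manner not allowed by the instructor."),
        evidence=[EvidenceSnippet(
            text="Completion of academic work in a manner not allowed by the instructor.",
            source_url="https://example.edu/ai-guidance",  # placeholder, not a real source URL
            snapshot_sha256="<snapshot hash>",
        )],
        confidence=0.95,
        review_state="agent-reviewed",
    )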

Reviewed claims

22 reviewed public claims

Other

UC Berkeley warns that individuals who accept click-through agreements for AI tools (such as OpenAI and ChatGPT terms of use) without delegated signature authority may face personal liability, including responsibility for compliance with terms and conditions.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
Certain generative AI tools use click-through agreements. Click-through agreements, including OpenAI and ChatGPT terms of use, are contracts. Individuals who accept click-through agreements without delegated signature authority may face personal consequences, including responsibility for compliance with terms and conditions.

Other

UC Berkeley requires researchers to comply with varying license agreement terms before using or training AI tools with materials acquired from library-licensed resources or databases. Violations can result in personal liability and campus-wide loss of access to critical research resources.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
Before using or training AI tools with materials acquired from Library-licensed resources or databases, researchers must comply with varying license agreement terms. Violations can result in personal liability and campus-wide loss of access to critical research resources.

Other

UC Berkeley states that use of generative AI tools should be consistent with UC Berkeley's Principles of Community and the UC Principles of Responsible AI.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
In all cases, use should be consistent with UC Berkeley's Principles of Community and the UC Principles of Responsible AI.

Other

The UC Berkeley Academic Senate recommends that all faculty include a clear statement on their syllabus about course expectations regarding the use of Google Gemini or any other generative AI tool for course-related work. In the absence of such a statement, students may be more likely to use these technologies inappropriately.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
We recommend that all faculty include a clear statement on their syllabus about course expectations regarding the use of Google Gemini or any other GenAI tool for course-related work. In the absence of such a statement, students may be more likely to use these technologies inappropriately or fail to utilize them effectively as a learning tool.

Other

The UC Berkeley Academic Senate states that generative AI detection tools are increasingly less accurate and that there are no validated generative AI detection tools available.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
GenAI detection tools are increasingly less accurate; there are no validated GenAI detection tools.

Other

The UC Berkeley Academic Senate provides three sample syllabus statement frameworks for faculty: 'Full AI' (GenAI required), 'Some AI' (limited permitted use with restrictions), and 'No AI' (all GenAI use prohibited). Faculty should modify these to fit their course requirements.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
We provide three sample statements. Instructors should modify them to fit their course requirements. The three statements include the two extremes, with the most and least GenAI use. We also include a third option that is approximately in the middle between the two.

Other

The UC Berkeley Academic Senate recommends that for assignments where GenAI is not permitted, instructors should adopt enforcement mechanisms such as in-person proctored exams, an additional oral exam component, or a written statement of academic integrity, since no validated GenAI detection tools exist.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
GenAI detection tools are increasingly less accurate; there are no validated GenAI detection tools. Therefore, assignments or learning activities where GenAI is not permitted should consider adopting one or more of the following solutions: Written Statement of Academic Integrity; In-person proctored exams/activities; An additional interview component (or oral exam) to an assignment where students are graded on an explanation of their work.

Other

The UC Berkeley Academic Senate's 'Some AI' syllabus framework requires students to include an acknowledgement of their use of any generative AI system in submitted work, along with the prompts used and how the output was utilized.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
When assignments in the course permit or incorporate the use of GenAI tools, the assignment will ask you to include an acknowledgement of your use of any type of GenAI in your submitted work and share the prompts and outputs utilized at the time of submission. The suggested format is as follows: I acknowledge the use of [insert AI system(s) and link] to [specific use of GenAI]. The prompts used include [list of prompts]. The output from these prompts was used to [explain the use].

Other

At UC Berkeley, publicly-available information classified as Protection Level P1 may be freely used in generative AI tools.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
Publicly-available information (Protection Level P1) can be used in generative AI tools.

Other

The UC Berkeley Academic Senate advises that for assignments where instructors encourage or require GenAI tools, instructors must ensure students have access to the necessary computing resources. If non-campus-sanctioned resources are required, the instructor is responsible for providing access.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
For any assignments where the instructor encourages or requires the use of GenAI tools, instructors should ensure that students have access to the necessary computing resources to run those GenAI tools. If non-campus-sanctioned resources are required, it is the instructor's responsibility to provide access to those resources.

Other

UC Berkeley requires users to use UC-licensed AI tools rather than individual consumer accounts to benefit from UC's contractual data protections when working with information more sensitive than Protection Level P1.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
UC has license agreements for certain AI tools, which provide protection for use with more sensitive information. It is important to be sure you are using licensed tools, rather than individual consumer accounts, to benefit from UC's contractual protections.

Other

At UC Berkeley, AI tools procured by individual units must adhere to the approved Protection Level limitations advised by that unit, and units should clearly advise staff and users of the appropriate use and Protection Level limitations of their AI tools.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
Use of AI tools procured by units separately from the campus or systemwide agreements mentioned above must also adhere to the approved Protection Level limitations advised by the unit to ensure compliance with the agreement and appropriate protections relative to the safety features of the tool. Units offering such tools should clearly advise staff and users as to the appropriate use and Protection Level limitations of the AI (and all) tools that they offer.

Other

UC Berkeley prohibits the use of generative AI tools to complete academic work in a manner not allowed by the instructor.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
Completion of academic work in a manner not allowed by the instructor.

Other

UC Berkeley prohibits entering personal, confidential, proprietary, or otherwise sensitive information classified as Protection Level P2, P3, or P4 into generative AI tools, unless specifically allowed under UC's negotiated contracts with AI providers.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
Unless specifically stated in the 'Allowable Use' section above, no personal, confidential, proprietary, or otherwise sensitive information may be entered into or generated as output from models or prompts. Such information includes: Student records subject to FERPA, Non-public instructional materials, Proprietary or unpublished research, Any other information classified as Protection Level P2, P3, or P4 (unless specifically allowed under UC's contracts).

Other

UC Berkeley prohibits entering FERPA-protected student records, non-public instructional materials, and proprietary or unpublished research into generative AI tools.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
Such information includes: Student records subject to FERPA; Non-public instructional materials; Proprietary or unpublished research.

Other

UC Berkeley requires that any new use of generative AI in studies or work must receive approval from the instructor or responsible unit head, and users should complete the AI Essentials Training and consult the CERC-AIR committee.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
If you are considering a new use of Generative AI in your studies or work, it is your responsibility to consider the ethics and risks involved and obtain approval from your instructor/responsible unit head. Be sure to take the AI Essentials Training and consult CERC-AIR, a committee that assesses and offers guidance for mitigating AI risks.

Other

UC Berkeley warns that AI use involving highly-consequential automated decision-making requires extreme caution and should not be employed without prior consultation with appropriate campus entities including the responsible unit head. Examples include legal analysis, recruitment/personnel decisions, replacing represented employees, facial recognition security tools, and grading or assessment of student work.

Review: Agent reviewed · Confidence: 95%

Original evidence

Evidence 1
Use of AI that involves highly-consequential automated decision-making requires extreme caution, and should not be employed without prior consultation with appropriate campus entities, including the responsible Unit head, as such use could put the University and individuals as significant risk. Examples include, but are not limited to: Legal analysis or advice; Recruitment, personnel, or disciplinary decision-making; Seeking to replace work currently done by represented employees; Security tools using facial recognition; Grading or assessment of student work.

Other

UC Berkeley's Office of Ethics, Risk and Compliance provides centralized resources and guidance on the ethical and appropriate use of artificial intelligence, specifically generative AI, with a focus on privacy and compliance with existing laws and policies.

Review: Agent reviewed · Confidence: 90%

Original evidence

Evidence 1
The Office of Ethics, Risk and Compliance is providing resources and guidance about how to use artificial intelligence (AI), specifically generative AI, in an ethical and appropriate manner. This page will help guide you to available tools with a focus on privacy and compliance with existing laws and policies.

Other

UC Berkeley states that units offering AI tools separately from campus or systemwide agreements should clearly advise staff and users of the appropriate use and Protection Level limitations of those tools.

Review: Agent reviewed · Confidence: 90%

Original evidence

Evidence 1
Units offering such tools should clearly advise staff and users as to the appropriate use and Protection Level limitations of the AI (and all) tools that they offer. Units are encouraged to use and refer people to the campus Data Classification Standard and Guidelines for assistance with determining the Protection Level of data being contemplated for use in unit-provided AI tools.

Other

UC Berkeley offers an 'AI Essentials' training for employees: an approximately 30-minute course covering foundational AI concepts, UC policies regarding AI tool usage, and opportunities for application in higher education.

Review: Agent reviewed · Confidence: 90%

Original evidence

Evidence 1
UC Berkeley AI Essentials — ~30-minute employee training covering foundational AI concepts, UC policies regarding the usage of AI tools, and opportunities for application in higher ed. Authentication required.

Other

The University of California system has established Responsible AI Principles comprising eight principles: Appropriateness; Transparency; Accuracy, Reliability and Safety; Fairness and Non-Discrimination; Privacy and Security; Human Values; Shared Benefit and Prosperity; and Accountability.

Review: Agent reviewed · Confidence: 85%

Original evidence

Evidence 1
UC has several AI principles: Appropriateness; Transparency; Accuracy, Reliability and Safety; Fairness and Non-Discrimination; Privacy and Security; Human Values; Shared Benefit and Prosperity; and Accountability.

Other

UC Berkeley has AI risk assessment pre-screening questions that employees can use to gauge the level of risk involved for an AI use case where AI is integrated into a product, service, or feature at the university. Depending on the risk level determined, the CERC-AIR subcommittee may be engaged for a broader risk assessment.

Review: Agent reviewed · Confidence: 85%

Original evidence

Evidence 1
UC Berkeley also has AI risk assessment pre-screening questions that can be used by employees to gauge the level of risk involved for an AI use case (whereby AI is integrated into a product, service or feature at the university). Depending on the level of risk determined, a subcommittee may be engaged and the broader risk assessment conducted.

Candidate claims

0 machine or needs-review claims

Candidate claims are not final policy conclusions. They preserve source URL, source snapshot hash, evidence, confidence, and review state so the record can be audited before review.
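As a hedged illustration of how such an audit might work, the sketch below recomputes a SHA-256 digest of a saved source snapshot and compares it with the recorded hash. The use of SHA-256 and file-based snapshot storage are assumptions for the example, not a statement about how this record actually stores or hashes snapshots.

    # Audit sketch (assumptions: snapshots stored as local files, hashes are SHA-256).
    import hashlib
    from pathlib import Path

    def snapshot_matches(snapshot_path: str, recorded_hash: str) -> bool:
        """Return True if the snapshot file still hashes to the recorded value."""
        digest = hashlib.sha256(Path(snapshot_path).read_bytes()).hexdigest()
        return digest == recorded_hash.lower()

    # Usage (hypothetical paths/values):
    # snapshot_matches("snapshots/ucb-ai-guidance.html", "3f9a...")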

Official sources

5 source attributions
