# University of California, Berkeley (UCB) AI policy record
other: UC Berkeley warns that individuals who accept click-through agreements for AI tools (such as OpenAI and ChatGPT terms of use) without delegated signature authority may face personal liability, including responsibility for compliance with terms and conditions.
Evidence (en, 53fb3a36f07d): Certain generative AI tools use click-through agreements. Click-through agreements, including OpenAI and ChatGPT terms of use, are contracts. Individuals who accept click-through agreements without delegated signature authority may face personal consequences, including responsibility for compliance with terms and conditions.
other: UC Berkeley requires researchers to comply with varying license agreement terms before using or training AI tools with materials acquired from library-licensed resources or databases. Violations can result in personal liability and campus-wide loss of access to critical research resources.
Evidence (en, 53fb3a36f07d): Before using or training AI tools with materials acquired from Library-licensed resources or databases, researchers must comply with varying license agreement terms. Violations can result in personal liability and campus-wide loss of access to critical research resources.
other: UC Berkeley states that use of generative AI tools should be consistent with UC Berkeley's Principles of Community and the UC Principles of Responsible AI.
Evidence (en, 53fb3a36f07d): In all cases, use should be consistent with UC Berkeley's Principles of Community and the UC Principles of Responsible AI.
other: The UC Berkeley Academic Senate recommends that all faculty include a clear statement on their syllabus about course expectations regarding the use of Google Gemini or any other generative AI tool for course-related work. In the absence of such a statement, students may be more likely to use these technologies inappropriately or fail to utilize them effectively as a learning tool.
Evidence (en, c6786c351541): We recommend that all faculty include a clear statement on their syllabus about course expectations regarding the use of Google Gemini or any other GenAI tool for course-related work. In the absence of such a statement, students may be more likely to use these technologies inappropriately or fail to utilize them effectively as a learning tool.
other: The UC Berkeley Academic Senate states that generative AI detection tools are increasingly less accurate and that no validated generative AI detection tools exist.
Evidence (en, c6786c351541): GenAI detection tools are increasingly less accurate; there are no validated GenAI detection tools.
other: The UC Berkeley Academic Senate provides three sample syllabus statement frameworks for faculty: 'Full AI' (GenAI required), 'Some AI' (limited permitted use with restrictions), and 'No AI' (all GenAI use prohibited). Faculty should modify these to fit their course requirements.
Evidence (en, c6786c351541): We provide three sample statements. Instructors should modify them to fit their course requirements. The three statements include the two extremes, with the most and least GenAI use. We also include a third option that is approximately in the middle between the two.
other: The UC Berkeley Academic Senate recommends that for assignments where GenAI is not permitted, instructors should adopt enforcement mechanisms such as in-person proctored exams, an additional oral exam component, or a written statement of academic integrity, since no validated GenAI detection tools exist.
Evidence (en, c6786c351541): GenAI detection tools are increasingly less accurate; there are no validated GenAI detection tools. Therefore, assignments or learning activities where GenAI is not permitted should consider adopting one or more of the following solutions: Written Statement of Academic Integrity; In-person proctored exams/activities; An additional interview component (or oral exam) to an assignment where students are graded on an explanation of their work.
other: The UC Berkeley Academic Senate's 'Some AI' syllabus framework requires students to include an acknowledgement of their use of any generative AI system in submitted work, along with the prompts used and how the output was utilized.
Evidence (en, c6786c351541): When assignments in the course permit or incorporate the use of GenAI tools, the assignment will ask you to include an acknowledgement of your use of any type of GenAI in your submitted work and share the prompts and outputs utilized at the time of submission. The suggested format is as follows: I acknowledge the use of [insert AI system(s) and link] to [specific use of GenAI]. The prompts used include [list of prompts]. The output from these prompts was used to [explain the use].
other: At UC Berkeley, publicly-available information classified as Protection Level P1 may be freely used in generative AI tools.
Evidence (en, 53fb3a36f07d): Publicly-available information (Protection Level P1) can be used in generative AI tools.
other: The UC Berkeley Academic Senate advises that for assignments where instructors encourage or require GenAI tools, instructors should ensure students have access to the necessary computing resources. If non-campus-sanctioned resources are required, the instructor is responsible for providing access.
Evidence (en, c6786c351541): For any assignments where the instructor encourages or requires the use of GenAI tools, instructors should ensure that students have access to the necessary computing resources to run those GenAI tools. If non-campus-sanctioned resources are required, it is the instructor's responsibility to provide access to those resources.