# University of North Carolina at Chapel Hill AI policy record
security_review: UNC-Chapel Hill's administrative generative AI guidance says sensitive information should not be entered into generative AI tools unless the Information Security Office has completed a risk assessment and the Data Governance Oversight Group has approved the tool for sensitive information.
Evidence (en-US, 64b070af83d4): Do not enter sensitive information (as defined by the UNC-Chapel Hill Information Classification Standard) into generative AI tools unless the University’s Information Security Office (ISO) has conducted a risk assessment of the generative AI tool and the University’s Data Governance Oversight Group (DGOG) has approved the tool to handle sensitive information.
teaching: UNC-Chapel Hill's faculty grading and assessment guidance says students must be able to request a full instructor-led review if they disagree with an AI-generated grade or have concerns about automated feedback.
Evidence (en-US, 9c6c27e19c1c): Students must be able to request a full instructor-led review if they disagree with an AI-generated grade or have concerns about automated feedback, without additional scrutiny, justification, or penalty.
privacy: UNC-Chapel Hill's research guidance treats entering private or confidential information, research data, grant proposals, or analytical results into public generative AI tools as a public disclosure of that information.
Evidence (en-US, 1d0574d903f2): Uploading information (e.g., research data, grant proposals, unpublished manuscripts, or analytical results) to a public AI tool is equivalent to releasing it publicly; thus, before any information from you or another individual is uploaded to a public AI tool, appropriate steps must be taken to ensure that the disclosure of that information is consistent with all rules and laws related to the handling of private information.
teaching: UNC-Chapel Hill's faculty grading and assessment guidance says AI systems used for grading or feedback must be institutionally approved and compliant with data security and privacy standards; faculty using GenAI for grading retain full responsibility for evaluative decisions and feedback.
Evidence (en-US, 9c6c27e19c1c): Faculty must ensure that any AI system used for grading or feedback is institutionally approved and compliant with data security and privacy standards. Uploading student work to consumer-grade AI platforms not contracted by the university is inconsistent with student privacy laws and university policy.
research: UNC-Chapel Hill's research generative AI guidance applies to members of the research community involved in research under the auspices of the University, including faculty, staff, students, guest researchers, collaborators, and consultants.
Evidence (en-US, 1d0574d903f2): This guidance applies to all members of the research community, including faculty, staff (SHRA and EHRA non-faculty), students (undergraduate, graduate and professional), guest researchers (e.g., unpaid volunteers, interns, and visiting scholars), collaborators, and consultants involved in research occurring under the auspices of the University.
teaching: UNC-Chapel Hill faculty guidance encourages instructors to state course and assignment AI expectations in the syllabus and tells Carolina students to follow the specific AI guidelines in that syllabus.
Evidence (en-US, 2e373948ed87): Conveying your stance on students’ use of AI in your course is important; it clarifies your expectations and ensures that any use of AI supports rather than frustrates your learning objectives.