Open, evidence-backed AI policy records for public reuse.
Change log
Source-check timeline, source snapshot hashes, claim review state, and a diff-style preview of current source-backed claim evidence.
Current public record freshness and review state.
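The source snapshot hashes mentioned in the change log can be produced by hashing the fetched text of each official source page, so that any edit to the page yields a new hash. A minimal sketch (the function name and normalization step are illustrative, not the tracker's actual implementation):

```python
import hashlib

def snapshot_hash(source_text: str) -> str:
    """Return a SHA-256 digest of a fetched source page,
    used to detect when an official page has changed."""
    # Normalize line endings so a bare CRLF/LF difference
    # does not count as a content change.
    normalized = "\n".join(source_text.splitlines())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Identical content hashes the same; any wording change differs.
a = snapshot_hash("AI use is generally allowed.\r\n")
b = snapshot_hash("AI use is generally allowed.\n")
c = snapshot_hash("AI use is prohibited.\n")
assert a == b and a != c
```

Pairing such hashes with check dates is what lets the tracker assert "checked May 14, 2026" without storing full page copies.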
Humboldt-Universität zu Berlin currently has 8 source-backed claim records and 4 official source attributions. Latest tracked change: May 14, 2026.
This tracker is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.
Diff-style preview built from current public claim/evidence records. Full old/new source diffs require paired historical snapshots.
Inserted lines represent current public claim and evidence records in the source-backed dataset.
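Assuming the claim records are stored as plain text lines, a diff-style preview of this kind can be sketched with Python's `difflib`: diffing the current records against an empty historical snapshot makes every record appear as an inserted line, which matches the "inserted lines" framing above. The file labels and record strings here are hypothetical:

```python
import difflib

def diff_preview(old_snapshot: list[str], current_records: list[str]) -> list[str]:
    """Build a unified-diff preview. Without a paired historical
    snapshot, old_snapshot is empty and every record shows as '+'."""
    return list(difflib.unified_diff(
        old_snapshot, current_records,
        fromfile="historical_snapshot", tofile="current_records",
        lineterm="",
    ))

records = [
    "Sole use of AI to grade exams is prohibited.",
    "Exam AI recommendations are explicitly non-binding.",
]
for line in diff_preview([], records):
    print(line)
# Each record line is prefixed with "+", i.e. rendered as inserted.
```

A full old/new diff would simply pass a real earlier snapshot as `old_snapshot` instead of the empty list.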
8 claim records
HU Berlin’s recommendations allow examiners to use AI to create exam materials, but say examiners remain responsible for suitability and that sole use of AI to grade exams or study work is prohibited.
For HU employees, the central AI guide says that confidential data and non-public research data must not be entered into publicly accessible generative-AI tools, and that personal data must be removed or anonymized unless HU-internal CMS AI offerings configured for data protection are used.
HU Berlin’s exam and study-work AI recommendations are explicitly non-binding; they say AI use is generally allowed, while faculties and examination boards may make subject-specific binding decisions that restrict or prohibit AI use for particular assessments.
HU Berlin says it promotes AI use in research, teaching, and administration, and frames its central generative-AI guide as a living guide that is to be expanded into an AI policy.
HU Berlin’s recommendations advise documenting student AI use so that examiners can take the tool use into account during assessment, while leaving the concrete documentation requirements to the relevant subject and examination context.
HU Berlin advises that generative-AI tools should be checked for data-protection and IT-security risks before use, and that a preliminary review should be considered when procuring AI tools, especially where personal data may be processed.
HU Berlin’s public AI hub describes university-provided data-protection-compliant AI tools based on large language models for research, teaching, and administration, plus HPC@HU for research and a JupyterHub for digital teaching and learning.
HU Berlin’s good-scientific-practice AI page is an official source index for AI in teaching and research, linking the central AI tools, AI guide, and the AI-in-assessments recommendations.
4 source attributions
official_guidance checked May 14, 2026
official_guidance checked May 14, 2026
official_guidance checked May 14, 2026
official_policy_page checked May 14, 2026