source_status
Open, evidence-backed AI policy records for public reuse.
Change log
Source-check timeline, source snapshot hashes, claim review state, and a diff-style preview of current source-backed claim evidence.
Current public record freshness and review state.
Australian National University (ANU) currently has 29 source-backed claim records and 12 official source attributions. Latest tracked change date: May 10, 2026.
This tracker is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.
Diff-style preview built from current public claim/evidence records. Full old/new source diffs require paired historical snapshots.
Inserted lines represent current public claim and evidence records in the source-backed dataset.
29 claim records
ANU approved six institutional AI principles via Academic Board in June 2023, covering excellence/integrity, research engagement, clear guidance, AI literacy, access/privacy/security, and collaborative policy development.
Submitting AI-generated content as one's own work constitutes a breach of ANU's academic integrity rules.
ANU academic staff are not permitted to upload student data or academic work to generative AI platforms.
ANU prohibits using AI to collect, use, store, or disclose personal information without express consent from the individual(s).
ANU requires that only university-approved AI solutions/software be used to ensure appropriate data governance, information security, and licensing.
Students retain IP ownership of their assignments at ANU; staff may not upload student work to AI platforms without express student consent.
At ANU, using AI-generated content when not permitted and claiming authorship without acknowledgment constitutes a breach of academic integrity.
ANU prohibits uploading student work to AI platforms without consent, including for feedback or marking purposes, citing privacy and data security reasons.
ANU Law School prohibits using generative AI to draft assessment content; all submitted work must be the student's own independent and original work.
ANU Law School requires students to explicitly declare AI tool usage in the first footnote of submitted work, including tool names, purpose, and extent of use.
ANU endorses Copilot Enterprise as the primary AI tool for staff and students, accessed via ANU accounts; non-endorsed tools carry security risks the university cannot guarantee.
All AI technical solutions used for ANU business or on ANU-managed devices must be approved by the university; unapproved freeware is considered a network security risk.
ANU permits course conveners to explicitly limit or encourage generative AI use; students must check class summaries and assessment outlines for AI requirements.
ANU treats generative AI as a permissible learning tool that can be cited as an information source, but emphasizes that it is not a replacement for student thinking and originality.
ANU provides Copilot Enterprise and Adobe Firefly as enterprise-licensed AI tools with data protection for staff and students using ANU accounts.
ANU handles suspected generative AI misuse through the same academic misconduct procedure as other integrity breaches, including giving students the opportunity to respond.
ANU Law School permits limited AI use for improving expression in student drafts (grammar, clarity, structure) and brainstorming ideas, provided all information is independently verified.
ANU Law School warns that academic integrity findings related to AI misuse may have long-term consequences for law students, as misconduct must be disclosed when applying for admission to legal practice.
ANU does not ban generative AI, but the College of Asia & the Pacific distinguishes between appropriate and inappropriate uses based on whether AI replaces or supports student skill development.
ANU CAP guidelines identify using AI-produced text as one's own, using AI to generate assignment structures, and using AI to rephrase others' work to avoid plagiarism detection as inappropriate uses that constitute academic integrity issues.
ANU allows individual colleges and disciplines to set their own policies on whether AI is permitted for specific assessment tasks, rather than imposing a university-wide blanket rule.
ANU guidance acknowledges that traditional assessments like generic essays and multiple-choice tests are more vulnerable to AI misuse, and recommends redesigning tasks with authentic, specific contexts.
The ANU Library LibGuide references the ARC policy requiring disclosure of generative AI use in grant applications, accuracy verification, and originality compliance.
ANU was developing governance document changes to require students to acknowledge any use of artificial intelligence in their work (as of early 2023).
ANU CAP guidelines state there is a very strong presumption against any use of generative AI or translation programs in language courses, and advise non-language students to check with convenors before using AI for translation.
The ANU Library LibGuide catalogs major publisher policies on AI: ACM prohibits AI authorship but permits disclosed use; Nature, Science, Elsevier, IEEE, and others require disclosure of AI use in manuscripts.
ANU maintains a 'Generative AI and Assessment' resource collection providing step-by-step advice on designing assessments in the age of AI, covering assessment planning, evaluation, and approach determination.
ANU published a PDF FAQ document 'Chat GPT and other generative AI tools: What ANU academics need to know' covering ChatGPT introduction, assessment impact, and institutional response (content also referenced in ANU's blog post).
ANU maintains an 'AI Essentials' resource collection for using supported AI tools while discussing best practice with students.
12 source attributions
official_guidance checked May 9, 2026
official_guidance checked May 9, 2026
official_guidance checked May 9, 2026
official_policy_page checked May 9, 2026
official_pdf checked May 9, 2026
official_guidance checked May 9, 2026
official_guidance checked May 9, 2026
official_guidance checked May 9, 2026
official_guidance checked May 9, 2026
official_guidance checked May 9, 2026
official_guidance checked May 9, 2026
official_guidance checked May 9, 2026