Open, evidence-backed AI policy records for public reuse.
Change log
Source-check timeline, source snapshot hashes, claim review state, and a diff-style preview of current source-backed claim evidence.
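The "source snapshot hashes" mentioned above could be computed along these lines. This is a minimal sketch assuming a SHA-256 digest over line-ending-normalised UTF-8 text; the tracker's actual hashing scheme is not documented here and is an assumption.

```python
import hashlib

def snapshot_hash(source_text: str) -> str:
    """Return a hex SHA-256 digest of a fetched source snapshot.

    Illustrative only: which bytes are hashed and how the text is
    normalised are assumptions, not the tracker's documented scheme.
    """
    # Normalise line endings so trivially different fetches hash alike.
    normalised = source_text.replace("\r\n", "\n")
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

print(snapshot_hash("RMIT Responsible AI procedure, v1\n"))
```

Storing such a digest alongside each checked date lets the tracker detect when a source page has changed without keeping the full page text.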
Current public record freshness and review state.
RMIT University currently has 9 source-backed claim records and 5 official source attributions. Latest tracked change date: May 14, 2026.
This tracker is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.
Diff-style preview built from current public claim/evidence records. Full old/new source diffs require paired historical snapshots.
Inserted lines represent current public claim and evidence records in the source-backed dataset.
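The inserted-lines preview described above can be sketched with Python's `difflib`. The record texts and file labels below are illustrative assumptions; the point is that with no paired historical snapshot, the "old" side is empty, so every current record surfaces as an inserted (`+`) line.

```python
import difflib

# Hypothetical current claim records (two of the nine, abbreviated).
current_records = [
    "RMIT has a Responsible Artificial Intelligence procedure.",
    "RMIT describes Val as its generative AI tool for eligible students.",
]

# No paired historical snapshot is available, so diff against an
# empty "old" side: every current record appears as a "+" line.
old_snapshot: list[str] = []

preview = list(difflib.unified_diff(
    old_snapshot, current_records,
    fromfile="historical_snapshot", tofile="current_records",
    lineterm="",
))
for line in preview:
    print(line)
```

Once a paired historical snapshot exists, the same call with the old snapshot's lines as the first argument yields a true old/new source diff.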
9 claim records
RMIT has a Responsible Artificial Intelligence procedure that establishes ethical principles and a risk-based framework for safe and responsible AI adoption across RMIT Group functions and activities.
RMIT academic integrity guidance says students must not use AI to complete or contribute to an assessment task unless specifically allowed, must not pass off unreferenced AI-produced ideas as their own, and must not submit AI-produced content they cannot understand or explain.
Before RMIT initiates its AI Governance Framework for an AI initiative, existing governance assessments include Privacy Impact Assessment, Security Risk Assessment, and Third-Party Risk Assessment.
RMIT assessment guidance tells students to review and fact-check AI outputs and acknowledge AI tools when those tools contribute ideas, text, images, or other content to assessment work.
RMIT says whether students may use AI in assessments depends on their course and Course Coordinator instructions, and students should check with the Course Coordinator if unsure.
RMIT describes Val as its generative AI tool for eligible students, powered by OpenAI models, and says course guides determine whether Val or other AI tools are appropriate for learning and assessment.
RMIT's Val guidance says data shared with Val is kept private, secure, and confidential, is not shared with OpenAI or other external organisations outside RMIT, and should not include personal, sensitive, or health information.
RMIT student AI learning guidance says RMIT has approved tools with AI capability and that students should be cautious when using public AI tools outside that approved set.
RMIT tells students to verify AI outputs and treat public AI tools as privacy, security, and reputation risks because inputs could be shared.
5 source attributions
official_guidance checked May 14, 2026
official_guidance checked May 14, 2026
official_policy_page checked May 14, 2026
official_guidance checked May 14, 2026
official_guidance checked May 14, 2026