Change log

RMIT University

Source-check timeline, source snapshot hashes, claim review state, and a diff-style preview of current source-backed claim evidence.

Change summary

Current public record freshness and review state.

RMIT University currently has 9 source-backed claim records and 5 official source attributions. Latest tracked change date: May 14, 2026.

This tracker is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.

Claim/evidence diff preview

Diff-style preview built from current public claim/evidence records. Full old/new source diffs require paired historical snapshots.

RMIT University current policy evidence

Inserted lines represent current public claim and evidence records in the source-backed dataset.

+18 −0
1+# RMIT University AI policy record
2+source_status: RMIT has a Responsible Artificial Intelligence procedure that establishes ethical principles and a risk-based framework for safe and responsible AI adoption across RMIT Group functions and activities.
3+Evidence (en, b0cf955cfe22): This procedure establishes the ethical principles and risk-based framework which underpin the safe and responsible adoption of AI to support the functions and activities of the RMIT Group.
4+academic_integrity: RMIT academic integrity guidance says students cannot use AI to complete or contribute to an assessment task unless specifically allowed, pass off unreferenced AI-produced ideas as their own, or submit AI-produced content they cannot understand or explain.
5+Evidence (en, ce8d72d68bd8): You cannot use AI to: Complete or contribute to an assessment task when it has not been specifically allowed. Check your course guide or ask your educator if you're not sure; Produce ideas that you don't reference and try to pass off as your own; Produce content that you are unable to understand or explain in your own words.
6+security_review: Before RMIT initiates its AI Governance Framework for an AI initiative, existing governance assessments include Privacy Impact Assessment, Security Risk Assessment, and Third-Party Risk Assessment.
7+Evidence (en, b0cf955cfe22): Existing risks assessments, indicated below, must be completed before the AI Governance Framework is initiated. These assessments will also determine if the initiative is introducing an AI component(s): Privacy Impact Assessment(PIA); Security Risk Assessment (SRA); Third-Party Risk Assessment (TPRA).
8+academic_integrity: RMIT assessment guidance tells students to review and fact-check AI outputs and acknowledge AI tools when those tools contribute ideas, text, images, or other content to assessment work.
9+Evidence (en, a50403d10d5b): You should always review and fact check AI outputs and acknowledge the tool just as you would any content, ideas or media that you use in your assessments. Just like when we reference books, articles or websites in our assessments, if an AI tool has contributed ideas, text, images or other content to your work, you need to acknowledge it.
10+academic_integrity: RMIT says whether students may use AI in assessments depends on their course and Course Coordinator instructions, and students should check with the Course Coordinator if unsure.
11+Evidence (en, a50403d10d5b): Can I use AI for my assessments? This depends on your course and instructions from your Course Coordinator. If you're unsure, always check with your Course Coordinator.
12+ai_tool_treatment: RMIT describes Val as its generative AI tool for eligible students, powered by OpenAI models, and says course guides determine whether Val or other AI tools are appropriate for learning and assessment.
13+Evidence (en, 11f2aaaf8a77): Val is RMIT's generative artificial intelligence tool - powered by OpenAI's GPT-4.1, GPT-4o and o3-mini models. Val is private, secure and free for RMIT students to use. Your course guides will provide guidance on how you can use Val or other AI tools in your learning, including whether or not it's appropriate to use AI tools in your assessments.
14+privacy: RMIT's Val guidance says data shared with Val is kept private, secure and confidential, is not shared with OpenAI or other external organisations outside RMIT, and should not include personal, sensitive, or health information.
15+Evidence (en, 11f2aaaf8a77): Any data you share with Val is also kept private, secure and confidential - it's not shared with OpenAI, or any other external organisations outside RMIT. Do not enter prompts that contain: Information that you would not usually share with other people; Personal, sensitive and health information about yourself or anyone else.
16+ai_tool_treatment: RMIT student AI learning guidance says RMIT has approved tools with AI capability and that students should be cautious when using public AI tools outside that approved set.
17+Evidence (en, b75885613b82): RMIT has a list of approved tools with AI capability and students should be cautious with using any public tools outside of that.
18+privacy: RMIT tells students to verify AI outputs and treat public AI tools as privacy, security, and reputation risks because inputs could be shared.
19+Evidence (en, b75885613b82): Always verify the information you receive from AI and make sure you can understand it and explain it in your own words. Treat public AI tools like a megaphone - anything you input could be shared and be a risk to your privacy, security and reputation.
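The preview above pairs each evidence line with a short hexadecimal snapshot identifier (e.g. `b0cf955cfe22`). The tracker's exact scheme is not documented here; a minimal sketch, assuming the identifier is a truncated SHA-256 of the normalized snapshot text and that full diffs come from pairing two historical snapshots, could look like:

```python
import difflib
import hashlib

def snapshot_hash(text: str, length: int = 12) -> str:
    """Truncated SHA-256 of whitespace-normalized snapshot text.

    Hypothetical scheme: the real tracker's hash input and length
    are assumptions, not documented behavior.
    """
    normalized = " ".join(text.split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()[:length]

def diff_preview(old: str, new: str) -> list[str]:
    """Unified-diff lines between two paired historical snapshots."""
    return list(difflib.unified_diff(
        old.splitlines(),
        new.splitlines(),
        fromfile=snapshot_hash(old),
        tofile=snapshot_hash(new),
        lineterm="",
    ))
```

With only one snapshot available, every line appears as an insertion (`+`), which matches the "+18 −0" shape of the preview above.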

Claim changes

9 claim records

source_status

RMIT has a Responsible Artificial Intelligence procedure that establishes ethical principles and a risk-based framework for safe and responsible AI adoption across RMIT Group functions and activities.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

academic_integrity

RMIT academic integrity guidance says students cannot use AI to complete or contribute to an assessment task unless specifically allowed, pass off unreferenced AI-produced ideas as their own, or submit AI-produced content they cannot understand or explain.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

security_review

Before RMIT initiates its AI Governance Framework for an AI initiative, existing governance assessments include Privacy Impact Assessment, Security Risk Assessment, and Third-Party Risk Assessment.

Review: Agent reviewed · Confidence: 94% · Evidence: 1 · Languages: en

academic_integrity

RMIT assessment guidance tells students to review and fact-check AI outputs and acknowledge AI tools when those tools contribute ideas, text, images, or other content to assessment work.

Review: Agent reviewed · Confidence: 94% · Evidence: 1 · Languages: en

academic_integrity

RMIT says whether students may use AI in assessments depends on their course and Course Coordinator instructions, and students should check with the Course Coordinator if unsure.

Review: Agent reviewed · Confidence: 94% · Evidence: 1 · Languages: en

ai_tool_treatment

RMIT describes Val as its generative AI tool for eligible students, powered by OpenAI models, and says course guides determine whether Val or other AI tools are appropriate for learning and assessment.

Review: Agent reviewed · Confidence: 94% · Evidence: 1 · Languages: en

privacy

RMIT's Val guidance says data shared with Val is kept private, secure and confidential, is not shared with OpenAI or other external organisations outside RMIT, and should not include personal, sensitive, or health information.

Review: Agent reviewed · Confidence: 94% · Evidence: 1 · Languages: en

ai_tool_treatment

RMIT student AI learning guidance says RMIT has approved tools with AI capability and that students should be cautious when using public AI tools outside that approved set.

Review: Agent reviewed · Confidence: 92% · Evidence: 1 · Languages: en

privacy

RMIT tells students to verify AI outputs and treat public AI tools as privacy, security, and reputation risks because inputs could be shared.

Review: Agent reviewed · Confidence: 91% · Evidence: 1 · Languages: en

Source snapshots

5 source attributions