Open, evidence-backed AI policy records for public reuse.
Melbourne, Australia
RMIT University is listed as QS 2026 rank 125. RMIT University has 9 source-backed AI policy claim records from 5 official source attributions. The public record preserves original-language evidence snippets, source URLs, snapshot hashes, confidence, and review state.
v1 public contract
As of this public record, University AI Policy Tracker lists RMIT University as an agent-reviewed AI policy record last checked on May 14, 2026 and last changed on May 14, 2026. The record contains 9 source-backed claims, including 9 reviewed claims, from 5 official source attributions. Original-language evidence snippets and source URLs remain canonical, with public JSON available at https://eduaipolicy.org/api/public/v1/universities/rmit-university.json. The entity-level confidence is 95%. This tracker is not legal advice, not academic integrity advice, and not an official university statement unless the linked source is the university's own official page.
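The public JSON endpoint above can be consumed with a few lines of stdlib Python. A minimal sketch: the URL is taken from this record, but the payload's key names (`claims`, `review_state`, `confidence`) are assumptions based on the fields this page says the record preserves, so inspect the live response before relying on them.

```python
import json
import urllib.request

# URL taken from this public record.
RECORD_URL = "https://eduaipolicy.org/api/public/v1/universities/rmit-university.json"

def fetch_record(url: str = RECORD_URL) -> dict:
    """Download the public record JSON. Requires network access."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

def summarize(record: dict) -> dict:
    """Pull out the headline fields described on this page.
    The key names here are assumptions; check the live payload."""
    claims = record.get("claims", [])
    return {
        "claim_count": len(claims),
        "reviewed_count": sum(
            1 for c in claims if c.get("review_state") == "reviewed"
        ),
        "confidence": record.get("confidence"),
    }
```

For this record, a payload matching the counts above would summarize to 9 claims, 9 reviewed, confidence 0.95.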
This reference record summarizes visible public data only. Official sources and original-language evidence remain canonical; confidence is separate from review state.
Deterministic source-backed dimensions derived from this record's public claims.
Policy profile rows are machine-derived candidate metadata. They are not final policy conclusions; inspect the linked claim evidence before reuse.
Analysis page-quality metadata is available at /api/public/v1/analysis/page-quality.json.
RMIT University has 5 source-backed public claims for policy presence; deterministic analysis status: unclear.
RMIT University has 1 source-backed public claim for AI disclosure; deterministic analysis status: recommended.
RMIT University has 5 source-backed public claims for coursework; deterministic analysis status: restricted.
RMIT University has 5 source-backed public claims for exams; deterministic analysis status: restricted.
RMIT University has 3 source-backed public claims for privacy and data entry; deterministic analysis status: restricted.
RMIT University has 3 source-backed public claims for academic integrity; deterministic analysis status: restricted.
RMIT University has 3 source-backed public claims for approved tools; deterministic analysis status: restricted.
RMIT University has 4 source-backed public claims for named AI services; deterministic analysis status: restricted.
No source-backed public claim about teaching guidance is present in this profile.
The current public tracker record does not contain claim evidence about instructor, classroom, assessment-design, or syllabus guidance.
No source-backed public claim about research AI use is present in this profile.
The current public tracker record does not contain claim evidence about research use, publication ethics, research data, grants, or human-subjects compliance.
RMIT University has 1 source-backed public claim for security and procurement; deterministic analysis status: restricted.
Coverage score measures breadth of public, source-backed coverage only. It is not a policy quality, strictness, legal adequacy, safety, or compliance score.
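The coverage score described above can be read as simple breadth counting. A sketch under a stated assumption: coverage here is the fraction of tracked dimensions with at least one source-backed claim. The tracker's actual formula is not published on this page, and the dimension slugs below are paraphrased from the profile rows above.

```python
from collections import Counter

# Dimension names paraphrased from the profile rows on this page (assumption).
DIMENSIONS = [
    "policy_presence", "ai_disclosure", "coursework", "exams",
    "privacy_and_data_entry", "academic_integrity", "approved_tools",
    "named_ai_services", "teaching_guidance", "research_ai_use",
    "security_and_procurement",
]

def coverage(claims: list[dict]) -> float:
    """Breadth of source-backed coverage: fraction of tracked dimensions
    with at least one claim. One plausible reading of the coverage score
    described above, not the tracker's actual formula."""
    counts = Counter(c.get("dimension") for c in claims)
    covered = sum(1 for d in DIMENSIONS if counts[d] > 0)
    return covered / len(DIMENSIONS)
```

On this record, 9 of the 11 tracked dimensions have at least one claim (teaching guidance and research AI use have none), so this reading yields 9/11.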
9 reviewed evidence-backed public claims
Source Status
Normalized value: responsible_ai_procedure_risk_based_framework
Original evidence
Evidence 1: This procedure establishes the ethical principles and risk-based framework which underpin the safe and responsible adoption of AI to support the functions and activities of the RMIT Group.
Academic Integrity
Normalized value: ai_use_not_allowed_without_permission_reference_understand_explain
Original evidence
Evidence 1: You cannot use AI to: Complete or contribute to an assessment task when it has not been specifically allowed. Check your course guide or ask your educator if you're not sure; Produce ideas that you don't reference and try to pass off as your own; Produce content that you are unable to understand or explain in your own words.
Security Review
Normalized value: pia_sra_tpra_before_ai_governance_framework
Original evidence
Evidence 1: Existing risks assessments, indicated below, must be completed before the AI Governance Framework is initiated. These assessments will also determine if the initiative is introducing an AI component(s): Privacy Impact Assessment (PIA); Security Risk Assessment (SRA); Third-Party Risk Assessment (TPRA).
Academic Integrity
Normalized value: fact_check_and_acknowledge_ai_assessment_contributions
Original evidence
Evidence 1: You should always review and fact check AI outputs and acknowledge the tool just as you would any content, ideas or media that you use in your assessments. Just like when we reference books, articles or websites in our assessments, if an AI tool has contributed ideas, text, images or other content to your work, you need to acknowledge it.
Academic Integrity
Normalized value: assessment_ai_use_depends_course_course_coordinator
Original evidence
Evidence 1: Can I use AI for my assessments? This depends on your course and instructions from your Course Coordinator. If you're unsure, always check with your Course Coordinator.
AI Tool Treatment
Normalized value: val_supported_genai_tool_course_guides_assessment_use
Original evidence
Evidence 1: Val is RMIT's generative artificial intelligence tool - powered by OpenAI's GPT-4.1, GPT-4o and o3-mini models. Val is private, secure and free for RMIT students to use. Your course guides will provide guidance on how you can use Val or other AI tools in your learning, including whether or not it's appropriate to use AI tools in your assessments.
Privacy
Normalized value: val_data_private_not_shared_external_no_sensitive_prompts
Original evidence
Evidence 1: Any data you share with Val is also kept private, secure and confidential - it's not shared with OpenAI, or any other external organisations outside RMIT. Do not enter prompts that contain: Information that you would not usually share with other people; Personal, sensitive and health information about yourself or anyone else.
AI Tool Treatment
Normalized value: approved_ai_tools_public_tools_caution
Original evidence
Evidence 1: RMIT has a list of approved tools with AI capability and students should be cautious with using any public tools outside of that.
Privacy
Normalized value: verify_ai_outputs_public_ai_privacy_security_risk
Original evidence
Evidence 1: Always verify the information you receive from AI and make sure you can understand it and explain it in your own words. Treat public AI tools like a megaphone - anything you input could be shared and be a risk to your privacy, security and reputation.
0 machine or needs-review claims
Candidate claims are not final policy conclusions. They preserve source URL, source snapshot hash, evidence, confidence, and review state so the record can be audited before review.
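Because each claim record carries a source snapshot hash, an auditor can re-derive the hash from the archived snapshot text and compare it to the stored value. A minimal sketch; SHA-256 over UTF-8 text is an assumption, since this page does not name the tracker's hash algorithm.

```python
import hashlib

def snapshot_hash(snapshot_text: str) -> str:
    """Hex digest of a source snapshot. SHA-256 over UTF-8 is an
    assumption; the tracker's actual algorithm is not stated here."""
    return hashlib.sha256(snapshot_text.encode("utf-8")).hexdigest()

def audit_claim(claim: dict, snapshot_text: str) -> bool:
    """True when the stored hash still matches the snapshot text,
    i.e. the evidence has not drifted since it was recorded."""
    return claim.get("snapshot_hash") == snapshot_hash(snapshot_text)
```

A mismatch signals that the archived snapshot and the recorded hash have diverged and the claim needs re-review rather than reuse.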
5 source attributions
rmit.edu.au
rmit.edu.au
policies.rmit.edu.au
rmit.edu.au
rmit.edu.au
Source-check timeline and diff-style claim/evidence preview.
View the public change record for this university, including source snapshot hashes, claim review states, and a diff-style preview of current source-backed evidence.
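A diff-style evidence preview like the one described can be produced with stdlib difflib. The unified-diff format below is illustrative, not the tracker's actual rendering.

```python
import difflib

def evidence_diff(old: str, new: str) -> str:
    """Unified diff of two evidence snippets, similar in spirit to the
    diff-style preview described above (output format is illustrative)."""
    return "\n".join(difflib.unified_diff(
        old.splitlines(), new.splitlines(),
        fromfile="previous snapshot", tofile="current snapshot",
        lineterm="",
    ))
```

Feeding it the prior and current snapshot of a claim's evidence marks removed lines with `-` and added lines with `+`, which is enough to spot a policy change at a glance.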
Corrections create review tasks and do not directly change this public record.
If an official source is missing, stale, moved, blocked, or incorrectly summarized, submit a source URL, policy change report, or institution correction for review. Corrections must preserve source URLs, source language, original evidence, review state, and audit history.