Melbourne, Australia

RMIT University

Short answer

v1 public contract

RMIT University is listed as QS 2026 rank 125. RMIT University has 9 source-backed AI policy claim records from 5 official source attributions. The public record preserves original-language evidence snippets, source URLs, snapshot hashes, confidence, and review state.

Citation-ready summary

As of this public record, University AI Policy Tracker lists RMIT University as an agent-reviewed AI policy record last checked on May 14, 2026 and last changed on May 14, 2026. The record contains 9 source-backed claims, including 9 reviewed claims, from 5 official source attributions. Original-language evidence snippets and source URLs remain canonical, with public JSON available at https://eduaipolicy.org/api/public/v1/universities/rmit-university.json. The entity-level confidence is 95%. This tracker is not legal advice, not academic integrity advice, and not an official university statement unless the linked source is the university's own official page.

Claim coverage: 9 reviewed · Source language: en · Public JSON: /api/public/v1/universities/rmit-university.json
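The public JSON endpoint above can be consumed programmatically. The sketch below is a minimal example of summarizing the record; the field names (`claims`, `review`, `source_url`) and the review-state value `agent_reviewed` are assumptions about the schema, not confirmed by the tracker's documentation.

```python
# Sketch: consuming the tracker's public v1 JSON record for RMIT University.
# NOTE: the schema fields used here ("claims", "review", "source_url") are
# hypothetical placeholders; inspect the actual JSON before relying on them.
import json
from urllib.request import urlopen

RECORD_URL = "https://eduaipolicy.org/api/public/v1/universities/rmit-university.json"

def summarize_claims(record: dict) -> dict:
    """Count reviewed vs. candidate claims and collect distinct source URLs."""
    claims = record.get("claims", [])
    reviewed = [c for c in claims if c.get("review") == "agent_reviewed"]
    sources = {c.get("source_url") for c in claims if c.get("source_url")}
    return {
        "total": len(claims),
        "reviewed": len(reviewed),
        "candidate": len(claims) - len(reviewed),
        "official_sources": len(sources),
    }

if __name__ == "__main__":
    with urlopen(RECORD_URL) as resp:  # network fetch; response shape unverified
        record = json.load(resp)
    print(summarize_claims(record))
```

For this record, a correct schema mapping should reproduce the page's headline numbers: 9 claims, 9 reviewed, 0 candidate, 5 official sources.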

Policy signals in this record

  • Evidence includes Source Status claims.
  • Evidence includes Academic Integrity claims.
  • Evidence includes Security Review claims.
  • Evidence includes AI Tool Treatment claims.
  • Evidence includes Privacy claims.
  • No specific AI service name is highlighted by the current public claim text.
  • Disclosure, acknowledgment, citation, or attribution language appears in the public claim text.
  • Teaching, assessment, coursework, or syllabus-related language appears in the public claim text.
Policy status: Reviewed evidence-backed record · Review: Agent reviewed · Evidence-backed claims: 9 · Reviewed: 9 · Candidate: 0 · Official sources: 5

This reference record summarizes visible public data only. Official sources and original-language evidence remain canonical; confidence is separate from review state.

This page is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.

Policy profile

Deterministic source-backed dimensions derived from this record's public claims.

Coverage score: 90/100 · Coverage label: broad public coverage · Review: Machine candidate · Analysis confidence: 80%

Policy profile rows are machine-candidate derived metadata. They are not final policy conclusions; inspect the linked claim evidence before reuse.

Analysis page-quality metadata is available at /api/public/v1/analysis/page-quality.json.

Teaching guidance

No source-backed public claim about teaching guidance is present in this profile.

The current public tracker record does not contain claim evidence about instructor, classroom, assessment-design, or syllabus guidance.

Not Mentioned · Machine candidate · Confidence: 0% · Evidence: 0 · Sources: 0

Research guidance

No source-backed public claim about research AI use is present in this profile.

The current public tracker record does not contain claim evidence about research use, publication ethics, research data, grants, or human-subjects compliance.

Not Mentioned · Machine candidate · Confidence: 0% · Evidence: 0 · Sources: 0

Security and procurement

RMIT University has 1 source-backed public claim for security and procurement; deterministic analysis status: restricted.

Restricted · Machine candidate · Confidence: 80% · Evidence: 1 · Sources: 1

Coverage score measures breadth of public, source-backed coverage only. It is not a policy quality, strictness, legal adequacy, safety, or compliance score.

Evidence-backed claims

9 reviewed evidence-backed public claims

Source Status

RMIT has a Responsible Artificial Intelligence procedure that establishes ethical principles and a risk-based framework for safe and responsible AI adoption across RMIT Group functions and activities.

Review: Agent reviewed · Confidence: 95%

Normalized value: responsible_ai_procedure_risk_based_framework

Original evidence

Evidence 1
This procedure establishes the ethical principles and risk-based framework which underpin the safe and responsible adoption of AI to support the functions and activities of the RMIT Group.

Academic Integrity

RMIT academic integrity guidance says students cannot use AI to complete or contribute to an assessment task unless specifically allowed, pass off unreferenced AI-produced ideas as their own, or submit AI-produced content they cannot understand or explain.

Review: Agent reviewed · Confidence: 95%

Normalized value: ai_use_not_allowed_without_permission_reference_understand_explain

Original evidence

Evidence 1
You cannot use AI to: Complete or contribute to an assessment task when it has not been specifically allowed. Check your course guide or ask your educator if you're not sure; Produce ideas that you don't reference and try to pass off as your own; Produce content that you are unable to understand or explain in your own words.

Security Review

Before RMIT initiates its AI Governance Framework for an AI initiative, existing governance assessments include Privacy Impact Assessment, Security Risk Assessment, and Third-Party Risk Assessment.

Review: Agent reviewed · Confidence: 94%

Normalized value: pia_sra_tpra_before_ai_governance_framework

Original evidence

Evidence 1
Existing risks assessments, indicated below, must be completed before the AI Governance Framework is initiated. These assessments will also determine if the initiative is introducing an AI component(s): Privacy Impact Assessment(PIA); Security Risk Assessment (SRA); Third-Party Risk Assessment (TPRA).

Academic Integrity

RMIT assessment guidance tells students to review and fact-check AI outputs and acknowledge AI tools when those tools contribute ideas, text, images, or other content to assessment work.

Review: Agent reviewed · Confidence: 94%

Normalized value: fact_check_and_acknowledge_ai_assessment_contributions

Original evidence

Evidence 1
You should always review and fact check AI outputs and acknowledge the tool just as you would any content, ideas or media that you use in your assessments. Just like when we reference books, articles or websites in our assessments, if an AI tool has contributed ideas, text, images or other content to your work, you need to acknowledge it.

Academic Integrity

RMIT says whether students may use AI in assessments depends on their course and Course Coordinator instructions, and students should check with the Course Coordinator if unsure.

Review: Agent reviewed · Confidence: 94%

Normalized value: assessment_ai_use_depends_course_course_coordinator

Original evidence

Evidence 1
Can I use AI for my assessments? This depends on your course and instructions from your Course Coordinator. If you're unsure, always check with your Course Coordinator.

AI Tool Treatment

RMIT describes Val as its generative AI tool for eligible students, powered by OpenAI models, and says course guides determine whether Val or other AI tools are appropriate for learning and assessment.

Review: Agent reviewed · Confidence: 94%

Normalized value: val_supported_genai_tool_course_guides_assessment_use

Original evidence

Evidence 1
Val is RMIT's generative artificial intelligence tool - powered by OpenAI's GPT-4.1, GPT-4o and o3-mini models. Val is private, secure and free for RMIT students to use. Your course guides will provide guidance on how you can use Val or other AI tools in your learning, including whether or not it's appropriate to use AI tools in your assessments.

Privacy

RMIT's Val guidance says data shared with Val is kept private, secure and confidential, is not shared with OpenAI or other external organisations outside RMIT, and should not include personal, sensitive, or health information.

Review: Agent reviewed · Confidence: 94%

Normalized value: val_data_private_not_shared_external_no_sensitive_prompts

Original evidence

Evidence 1
Any data you share with Val is also kept private, secure and confidential - it's not shared with OpenAI, or any other external organisations outside RMIT. Do not enter prompts that contain: Information that you would not usually share with other people; Personal, sensitive and health information about yourself or anyone else.

AI Tool Treatment

RMIT student AI learning guidance says RMIT has approved tools with AI capability and that students should be cautious when using public AI tools outside that approved set.

Review: Agent reviewed · Confidence: 92%

Normalized value: approved_ai_tools_public_tools_caution

Original evidence

Evidence 1
RMIT has a list of approved tools with AI capability and students should be cautious with using any public tools outside of that.

Privacy

RMIT tells students to verify AI outputs and treat public AI tools as privacy, security, and reputation risks because inputs could be shared.

Review: Agent reviewed · Confidence: 91%

Normalized value: verify_ai_outputs_public_ai_privacy_security_risk

Original evidence

Evidence 1
Always verify the information you receive from AI and make sure you can understand it and explain it in your own words. Treat public AI tools like a megaphone - anything you input could be shared and be a risk to your privacy, security and reputation.

Candidate claims

0 machine or needs-review claims

Candidate claims are not final policy conclusions. They preserve source URL, source snapshot hash, evidence, confidence, and review state so the record can be audited before review.
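Because each claim record is said to preserve a source snapshot hash alongside the evidence, an auditor can recompute the hash and compare it to the stored value. The sketch below assumes SHA-256 and hypothetical field names (`snapshot_text`, `snapshot_sha256`); the tracker's actual hash algorithm and storage layout may differ.

```python
# Sketch: auditing a claim's stored source snapshot against its recorded hash.
# ASSUMPTIONS: SHA-256 hex digests, and that the raw snapshot text is
# available; both are placeholders, not confirmed details of this tracker.
import hashlib

def snapshot_matches(snapshot_text: str, expected_sha256: str) -> bool:
    """Recompute the SHA-256 of the snapshot text and compare it to the
    hash recorded with the claim (case-insensitive hex comparison)."""
    digest = hashlib.sha256(snapshot_text.encode("utf-8")).hexdigest()
    return digest == expected_sha256.lower()
```

A mismatch would indicate the stored evidence no longer corresponds to the hashed snapshot, which is exactly the situation the corrections workflow below is meant to catch.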

Official sources

5 source attributions

Change log

Source-check timeline and diff-style claim/evidence preview.

View the public change record for this university, including source snapshot hashes, claim review states, and a diff-style preview of current source-backed evidence.

Last checked: May 14, 2026 · Last changed: May 14, 2026

Corrections and missing evidence

Corrections create review tasks and do not directly change this public record.

If an official source is missing, stale, moved, blocked, or incorrectly summarized, submit a source URL, policy change report, or institution correction for review. Corrections must preserve source URLs, source language, original evidence, review state, and audit history.
