Haymarket, Australia

University of Technology Sydney

University of Technology Sydney is listed at QS 2026 rank 96 and has 7 source-backed AI policy claim records from 7 official source attributions. The public record preserves original-language evidence snippets, source URLs, snapshot hashes, confidence, and review state.

Citation-ready summary

As of this public record, University AI Policy Tracker lists University of Technology Sydney as an agent-reviewed AI policy record last checked on May 14, 2026 and last changed on May 14, 2026. The record contains 7 source-backed claims, including 7 reviewed claims, from 7 official source attributions. Original-language evidence snippets and source URLs remain canonical, with public JSON available at https://eduaipolicy.org/api/public/v1/universities/university-of-technology-sydney.json. The entity-level confidence is 96%. This tracker is not legal advice, not academic integrity advice, and not an official university statement unless the linked source is the university's own official page.

Claim coverage: 7 reviewed
Source language: en-AU
Public JSON: /api/public/v1/universities/university-of-technology-sydney.json
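The public JSON path follows a predictable slug pattern. A minimal sketch of building the record URL from a university slug (the slug-to-URL rule is inferred from this record's path alone, not from a documented API contract):

```python
BASE = "https://eduaipolicy.org"

def public_record_url(slug: str) -> str:
    """Build the public v1 JSON URL for a university record slug.

    Assumes the /api/public/v1/universities/<slug>.json pattern seen
    on this page; the tracker's actual routing may differ.
    """
    return f"{BASE}/api/public/v1/universities/{slug}.json"

print(public_record_url("university-of-technology-sydney"))
# → https://eduaipolicy.org/api/public/v1/universities/university-of-technology-sydney.json
```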

Policy signals in this record

  • Evidence includes Security review claims.
  • Evidence includes Procurement claims.
  • Evidence includes Research claims.
  • Evidence includes Academic integrity claims.
  • Evidence includes Privacy claims.
  • Evidence includes AI tool treatment claims.
  • Named AI services detected in public claims: Microsoft Copilot.
  • Disclosure, acknowledgment, citation, or attribution language appears in the public claim text.
Policy status: Reviewed evidence-backed record
Review: Agent reviewed
Evidence-backed claims: 7
Reviewed: 7
Candidate: 0
Official sources: 7

This reference record summarizes visible public data only. Official sources and original-language evidence remain canonical; confidence is separate from review state.
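The record structure described above (evidence snippets, source URL, snapshot hash, confidence, review state) can be sketched as a data type. All field names here are assumptions inferred from this page's prose; the tracker's actual JSON schema may differ:

```python
from dataclasses import dataclass

@dataclass
class ClaimRecord:
    # Field names are assumptions inferred from this page's description,
    # not the tracker's published schema.
    category: str          # e.g. "Security Review"
    summary: str           # human-readable claim text
    normalized_value: str  # machine-readable normalized value
    evidence: list[str]    # original-language evidence snippets (canonical)
    source_url: str        # official source URL (canonical)
    snapshot_hash: str     # hash of the source snapshot, for auditing
    confidence: float      # e.g. 0.93 -- deliberately separate from review state
    review_state: str      # e.g. "agent_reviewed", "machine_candidate", "needs_review"

claim = ClaimRecord(
    category="Academic Integrity",
    summary="Misconduct rules apply to AI use in assessments.",
    normalized_value="assessment_ai_use_subject_to_misconduct_rules",
    evidence=["University misconduct rules apply to the use of AI in assessments..."],
    source_url="https://example.edu/ai-policy",  # placeholder, not a real source
    snapshot_hash="0000",                        # placeholder
    confidence=0.93,
    review_state="agent_reviewed",
)
```

Keeping `confidence` and `review_state` as independent fields mirrors the note above: a claim can be agent-reviewed yet carry any confidence value, and vice versa.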

This page is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.

Policy profile

Deterministic source-backed dimensions derived from this record's public claims.

Coverage score: 100/100
Coverage label: broad public coverage
Review: Machine candidate
Analysis confidence: 79%

Policy profile rows are machine-candidate derived metadata. They are not final policy conclusions; inspect the linked claim evidence before reuse.

Analysis page-quality metadata is available at /api/public/v1/analysis/page-quality.json.

Teaching guidance

University of Technology Sydney has 1 source-backed public claim for teaching guidance; deterministic analysis status: recommended.

Status: Recommended
Review: Machine candidate
Confidence: 81%
Evidence: 1
Sources: 1

Coverage score measures breadth of public, source-backed coverage only. It is not a policy quality, strictness, legal adequacy, safety, or compliance score.

Evidence-backed claims

7 reviewed evidence-backed public claims

Security Review

UTS states that use of AI systems must comply with its Privacy, Procurement, Information Security, and Acceptable Use of Information Technology Resources policies as appropriate.

Review: Agent reviewed
Confidence: 96%

Normalized value: ai_system_use_must_comply_privacy_procurement_information_security_acceptable_use_policies

Original evidence

Evidence 1
3.4 Use of AI systems must comply with the Privacy Policy, the Procurement Policy, the Information Security Policy and the Acceptable Use of Information Technology Resources Policy as appropriate.

Localized display only

UTS AI system use must comply with privacy, procurement, information security and acceptable-use policies as appropriate.

Security Review

UTS's AI Operations Procedure requires a six-stage identification, assessment, approval, implementation, and management process before developing, deploying, procuring, or activating AI systems or AI capabilities.

Review: Agent reviewed
Confidence: 96%

Normalized value: six_stage_process_required_before_ai_development_deployment_procurement_activation

Original evidence

Evidence 1
4.3 When considering an AI system, UTS applies a 6-stage process for identification, assessment, approval, implementation and management. These steps must be followed before the development, deployment or procurement of an AI system or activation of an AI capability within an existing system.

Localized display only

The procedure says UTS applies a six-stage process and those steps must be followed before AI development, deployment, procurement or capability activation.

Procurement

UTS's Artificial Intelligence Operations Policy guides the use, procurement, development, and management of AI for teaching, learning, and operations, while directing research use to the separate research guidelines.

Review: Agent reviewed
Confidence: 95%

Normalized value: policy_covers_teaching_learning_operations_ai_procurement_development_management_research_separate_guidelines

Original evidence

Evidence 1
1.1 The Artificial Intelligence Operations Policy (the policy) guides the use, procurement, development and management of artificial intelligence (AI) at UTS for the purposes of teaching, learning and operations. 2.3 This policy does not apply to research projects and outputs (including university consulting). Information on the use of AI in research, as well as required research approvals and ethics clearances, is provided in the Research Policy and the Use of AI in Research Guidelines.

Localized display only

The policy covers AI use/procurement/development/management for teaching, learning and operations, while research use is handled through research policy and AI research guidelines.

Research

UTS research guidance tells researchers to check confidentiality, licence, and agreement restrictions before uploading data to GenAI tools, and to follow relevant IT, Acceptable Use, and Data Governance policies when a GenAI tool accesses UTS IT or network resources or is deployed on a UTS device.

Review: Agent reviewed
Confidence: 94%

Normalized value: researchers_check_confidentiality_restrictions_and_follow_it_acceptable_use_data_governance_for_genai_tools

Original evidence

Evidence 1
Maintain confidentiality Are you breaching confidentiality (ethical, cultural or commercial) or licences or agreements by uploading the data into a GenAI tool? Check for restrictions in contracts, agreements or licences. If you are considering using a GenAI tool that will have access to UTS IT or network resources, or be deployed on a UTS device, you must follow relevant IT policies, including the Information Security Policy, the Acceptable Use of Information Technology Resources Policy and the Data Governance Policy.

Localized display only

The research guidelines ask researchers to check confidentiality and contractual restrictions before uploading data, and require relevant IT/data policies for GenAI tools connected to UTS IT or devices.

Academic Integrity

UTS Education Express says university misconduct rules apply to AI use in assessments, students must acknowledge AI-tool use, and students should only use AI tools to generate verbatim assessment materials when instructed that this is appropriate.

Review: Agent reviewed
Confidence: 93%

Normalized value: assessment_ai_use_subject_to_misconduct_rules_acknowledgement_required_verbatim_materials_only_when_instructed

Original evidence

Evidence 1
University misconduct rules apply to the use of AI in assessments, students must acknowledge their use of these tools and only use them to generate verbatim materials for assessment when instructed that this is appropriate.

Localized display only

UTS teaching guidance links AI assessment use to misconduct rules, acknowledgement, and instructor permission for verbatim AI-generated assessment material.

Privacy

UTS Library's GenAI guide tells students that if GenAI use has not been allowed by the subject coordinator it is academic misconduct, allowed GenAI content must be referenced or acknowledged, and students should not enter personal details, confidential data, assignment text, or research text into GenAI tools.

Review: Agent reviewed
Confidence: 91%

Normalized value: library_students_ai_not_allowed_is_misconduct_allowed_content_referenced_acknowledged_no_personal_confidential_assignment_research_text

Original evidence

Evidence 1
If the use of GenAI has not been allowed by your subject coordinator, using this content in your assignment is considered academic misconduct (cheating). Even when the use of GenAI is allowed, any content that you use must be appropriately referenced or acknowledged. Privacy concerns. When you submit content to a GenAI tool, you give them the right to re-use and distribute this content. So it is important not to enter any personal details, confidential data or text from your assignments or research.

Localized display only

UTS Library says unallowed GenAI assignment use is misconduct, allowed content must be referenced or acknowledged, and personal/confidential/assignment/research text should not be entered into GenAI tools.

AI Tool Treatment

UTS Library says UTS staff and students have access to Microsoft Copilot, and describes logged-in Copilot as a protected version where data is not used to train the AI and files and intellectual property are safe; the guide calls it the preferred tool for learning support.

Review: Agent reviewed
Confidence: 90%

Normalized value: library_copilot_access_for_staff_students_logged_in_protected_version_preferred_learning_support_tool

Original evidence

Evidence 1
UTS staff and students have access to Microsoft Copilot. Copilot is a GenAI tool with GPT-4o and DALL-E that generates text or image content based on your prompts. It's free and easy to use, and if you log in with your UTS ID and password, you are able to access a protected version. This means that your data isn't being used to train the AI and your files and intellectual property are safe. If you want to use generative AI to support your learning, this is the preferred tool.

Localized display only

UTS Library identifies Microsoft Copilot access for staff/students and says the logged-in protected version does not use data to train AI; the guide calls it the preferred learning-support tool.

Candidate claims

0 machine or needs-review claims

Candidate claims are not final policy conclusions. They preserve source URL, source snapshot hash, evidence, confidence, and review state so the record can be audited before review.

Official sources

7 source attributions

Change log

Source-check timeline and diff-style claim/evidence preview.

View the public change record for this university, including source snapshot hashes, claim review states, and a diff-style preview of current source-backed evidence.

Last checked: May 14, 2026
Last changed: May 14, 2026
Open change log
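Source snapshot hashes let an auditor confirm that a fetched official page still matches the content the claims were drawn from. A minimal sketch, assuming SHA-256 over UTF-8 bytes (this page does not state the tracker's actual hash algorithm or canonicalization rules):

```python
import hashlib

def snapshot_hash(source_text: str) -> str:
    """Hex digest of a source snapshot.

    SHA-256 over raw UTF-8 bytes is an assumption; the tracker may use
    a different algorithm or normalize the text before hashing.
    """
    return hashlib.sha256(source_text.encode("utf-8")).hexdigest()

def matches_record(source_text: str, recorded_hash: str) -> bool:
    # A changed source page yields a different digest, flagging a stale record.
    return snapshot_hash(source_text) == recorded_hash

snap = "3.4 Use of AI systems must comply with the Privacy Policy..."
recorded = snapshot_hash(snap)
print(matches_record(snap, recorded))             # True: snapshot unchanged
print(matches_record(snap + " edited", recorded)) # False: source has drifted
```

Any drift between the live page and the recorded hash is exactly the situation the corrections process below is for.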

Corrections and missing evidence

Corrections create review tasks and do not directly change this public record.

If an official source is missing, stale, moved, blocked, or incorrectly summarized, submit a source URL, policy change report, or institution correction for review. Corrections must preserve source URLs, source language, original evidence, review state, and audit history.

Back to universities