Nathan, Australia

Griffith University

Griffith University is listed as QS 2026 rank 268. Griffith University has 6 source-backed AI policy claim records from 3 official source attributions. The public record preserves original-language evidence snippets, source URLs, snapshot hashes, confidence, and review state.


Citation-ready summary

As of this public record, University AI Policy Tracker lists Griffith University as an agent-reviewed AI policy record last checked on May 15, 2026 and last changed on May 15, 2026. The record contains 6 source-backed claims, including 6 reviewed claims, from 3 official source attributions. Original-language evidence snippets and source URLs remain canonical, with public JSON available at https://eduaipolicy.org/api/public/v1/universities/griffith-university.json. The entity-level confidence is 94%. This tracker is not legal advice, not academic integrity advice, and not an official university statement unless the linked source is the university's own official page.

  • Claim coverage: 6 reviewed
  • Source language: en
  • Public JSON: /api/public/v1/universities/griffith-university.json
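The public JSON record can be consumed programmatically. The sketch below parses a record and separates reviewed claims from machine candidates; the field names (`claims`, `review`, `confidence`) are illustrative assumptions about the payload shape, not documented schema, so inspect the actual JSON at the URL above before relying on them.

```python
import json

# Hypothetical record shape -- field names are assumptions, not the
# documented schema of the /api/public/v1/universities/<slug>.json payload.
SAMPLE_RECORD = json.loads("""
{
  "slug": "griffith-university",
  "confidence": 0.94,
  "claims": [
    {"category": "academic_integrity", "review": "agent_reviewed", "confidence": 0.94},
    {"category": "ai_tool_treatment",  "review": "agent_reviewed", "confidence": 0.93},
    {"category": "teaching_guidance",  "review": "machine_candidate", "confidence": 0.0}
  ]
}
""")

def reviewed_claims(record):
    """Return only claims with a final (agent-reviewed) review state.

    Confidence is tracked separately from review state, mirroring the
    tracker's note that the two are independent fields.
    """
    return [c for c in record["claims"] if c["review"] == "agent_reviewed"]

print(len(reviewed_claims(SAMPLE_RECORD)))
```

Filtering on review state rather than confidence reflects the record's own caveat that confidence is separate from review state.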

Policy signals in this record

  • Evidence includes Academic integrity claims.
  • Evidence includes AI tool treatment claims.
  • Evidence includes Research claims.
  • Evidence includes Privacy claims.
  • Named AI services detected in public claims: ChatGPT, DeepSeek, Microsoft Copilot, Claude, Gemini.
  • Disclosure, acknowledgment, citation, or attribution language appears in the public claim text.
  • Teaching, assessment, coursework, or syllabus-related language appears in the public claim text.
  • Privacy, sensitive-data, or security language appears in the public claim text.
  • Policy status: Reviewed evidence-backed record
  • Review: Agent reviewed
  • Evidence-backed claims: 6 (6 reviewed, 0 candidate)
  • Official sources: 3

This reference record summarizes visible public data only. Official sources and original-language evidence remain canonical; confidence is separate from review state.

This page is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.

Policy profile

Deterministic source-backed dimensions derived from this record's public claims.

  • Coverage score: 90/100
  • Coverage label: broad public coverage
  • Review: Machine candidate
  • Analysis confidence: 78%

Policy profile rows are machine-candidate derived metadata. They are not final policy conclusions; inspect the linked claim evidence before reuse.

Analysis page-quality metadata is available at /api/public/v1/analysis/page-quality.json.
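Both endpoints mentioned on this page sit under the same versioned public API root. A minimal sketch of building those URLs from a university slug, assuming only the path patterns shown on this page remain stable:

```python
BASE = "https://eduaipolicy.org"

def university_record_url(slug: str) -> str:
    """Per-university public JSON record, per the path shown on this page."""
    return f"{BASE}/api/public/v1/universities/{slug}.json"

def page_quality_url() -> str:
    """Analysis page-quality metadata endpoint, per the path shown on this page."""
    return f"{BASE}/api/public/v1/analysis/page-quality.json"

print(university_record_url("griffith-university"))
```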

Teaching guidance

No source-backed public claim about teaching guidance is present in this profile.

The current public tracker record does not contain claim evidence about instructor, classroom, assessment-design, or syllabus guidance.

Not Mentioned · Machine candidate · Confidence: 0% · Evidence: 0 · Sources: 0

Security and procurement

No source-backed public claim about AI security review or procurement is present in this profile.

The current public tracker record does not contain claim evidence about security review, procurement, vendor approval, risk assessment, authentication, SSO, or enterprise licensing.

Not Mentioned · Machine candidate · Confidence: 0% · Evidence: 0 · Sources: 0

Coverage score measures breadth of public, source-backed coverage only. It is not a policy quality, strictness, legal adequacy, safety, or compliance score.

Evidence-backed claims

6 reviewed evidence-backed public claims

Academic Integrity

Griffith's academic integrity page states that uncited content created using generative artificial intelligence software is conduct that always constitutes Academic Misconduct, and that representing AI-generated assessment content as a student's own work is Academic Misconduct.

Review: Agent reviewed · Confidence: 94%

Normalized value: uncited_ai_content_academic_misconduct

Original evidence

Evidence 1
Types of conduct that always constitute Academic Misconduct include: uncited content created using generative artificial intelligence software. Uncited content created by generative artificial intelligence (AI) software involves using AI tools, such as ChatGPT, which can be used to generate content for assessment items. Representing AI-generated content as a student's own work is Academic Misconduct.

AI Tool Treatment

Griffith eResearch guidance says ChatGPT, Claude, Gemini, Grok and other AI chat tools are not individually approved, but may be used for research purposes involving Public Data only, and says DeepSeek is banned at Griffith University.

Review: Agent reviewed · Confidence: 93%

Normalized value: other_chat_tools_public_data_research_only_deepseek_banned

Original evidence

Evidence 1
Other AI Chat Tools: CHATGPT, Claude, Gemini, Grok and other AI chat tools are currently not individually approved, they can still be utilized for research purposes provided they involve Public Data only. Please note: DeepSeek is banned at Griffith University.

Research

Griffith eResearch guidance says Microsoft Copilot is approved for Unofficial, Official (Public), Official (Internal), and Sensitive data classifications when users log in with a Griffith account, while Protected data is not approved for Microsoft Copilot.

Review: Agent reviewed · Confidence: 92%

Normalized value: research_copilot_approved_through_sensitive_not_protected

Original evidence

Evidence 1
Microsoft Copilot is approved for the following data classifications. You must log in with your Griffith account to ensure data safety. Data safety is not guaranteed without logging in your Griffith account. Unofficial; Official (Public); Official (Internal); Sensitive. Protected data is NOT approved for use in Microsoft Copilot.

Privacy

Griffith's student generative AI guidance warns that many open or public tools do not guarantee confidentiality and advises users not to enter personal information, information about others, or course materials such as assessment tasks or marking rubrics.

Review: Agent reviewed · Confidence: 91%

Normalized value: avoid_personal_other_people_and_course_materials_in_public_ai_tools

Original evidence

Evidence 1
Many generative AI tools, especially open or public ones do not guarantee the confidentiality of the information you enter. That means anything you type in could be stored, reused or shared. To protect yourself and others, do not enter: your own personal information; information about others; course materials.

Academic Integrity

Griffith's student-facing generative AI guidance says students may use generative AI for self-study without citation, but for assessment tasks they must acknowledge its use and include a short declaration if they use it while completing the task.

Review: Agent reviewed · Confidence: 90%

Normalized value: assessment_ai_use_acknowledgement_required

Original evidence

Evidence 1
Generative AI can be used for self-study without citation. However, for assessments you must acknowledge its use and ensure the accuracy and quality of your submissions. If you use generative AI at any point in completing your assessment tasks, make sure you include a short declaration with your submission explaining how you used it.

AI Tool Treatment

Griffith says it provides free access to Microsoft Copilot for staff and students and describes it as the preferred generative AI tool because signed-in Griffith-account use operates as a closed system with protected prompts and responses.

Review: Agent reviewed · Confidence: 88%

Normalized value: microsoft_copilot_preferred_supported_tool

Original evidence

Evidence 1
Griffith provides free access to Microsoft Copilot for staff and students. It is the preferred tool because it operates as a closed system. When you sign in with your Griffith credentials, your data is protected. This means your prompts and responses are not used to train the model and are only visible in your own chat history.

Candidate claims

0 machine or needs-review claims

Candidate claims are not final policy conclusions. They preserve source URL, source snapshot hash, evidence, confidence, and review state so the record can be audited before review.

Official sources

3 source attributions

Change log

Source-check timeline and diff-style claim/evidence preview.

View the public change record for this university, including source snapshot hashes, claim review states, and a diff-style preview of current source-backed evidence.

Last checked: May 15, 2026 · Last changed: May 15, 2026 · Open change log

Corrections and missing evidence

Corrections create review tasks and do not directly change this public record.

If an official source is missing, stale, moved, blocked, or incorrectly summarized, submit a source URL, policy change report, or institution correction for review. Corrections must preserve source URLs, source language, original evidence, review state, and audit history.
