# Griffith University AI policy record
academic_integrity: Griffith's academic integrity page lists uncited content created using generative artificial intelligence software among the types of conduct that always constitute Academic Misconduct, and states that representing AI-generated assessment content as a student's own work is Academic Misconduct.
Evidence (en, 3deae0de4aa6): Types of conduct that always constitute Academic Misconduct include: uncited content created using generative artificial intelligence software. Uncited content created by generative artificial intelligence (AI) software involves using AI tools, such as ChatGPT, which can be used to generate content for assessment items. Representing AI-generated content as a student's own work is Academic Misconduct.
ai_tool_treatment: Griffith eResearch guidance says ChatGPT, Claude, Gemini, Grok, and other AI chat tools are not individually approved but may still be used for research purposes involving Public Data only, and notes that DeepSeek is banned at Griffith University.
Evidence (en, 4c5dffaef90d): Other AI Chat Tools: ChatGPT, Claude, Gemini, Grok and other AI chat tools are currently not individually approved; they can still be utilized for research purposes provided they involve Public Data only. Please note: DeepSeek is banned at Griffith University.
research: Griffith eResearch guidance says Microsoft Copilot is approved for the Unofficial, Official (Public), Official (Internal), and Sensitive data classifications when users log in with their Griffith account, while Protected data is not approved for use in Microsoft Copilot.
Evidence (en, 4c5dffaef90d): Microsoft Copilot is approved for the following data classifications. You must log in with your Griffith account to ensure data safety. Data safety is not guaranteed without logging in with your Griffith account. Unofficial; Official (Public); Official (Internal); Sensitive. Protected data is NOT approved for use in Microsoft Copilot.
privacy: Griffith's student generative AI guidance warns that many open or public tools do not guarantee confidentiality and advises users not to enter personal information, information about others, or course materials such as assessment tasks or marking rubrics.
Evidence (en, a165a17062e6): Many generative AI tools, especially open or public ones, do not guarantee the confidentiality of the information you enter. That means anything you type in could be stored, reused or shared. To protect yourself and others, do not enter: your own personal information; information about others; course materials.
academic_integrity: Griffith's student-facing generative AI guidance says students may use generative AI for self-study without citation, but for assessment tasks they must acknowledge its use and include a short declaration with their submission if they used it at any point while completing the task.
Evidence (en, a165a17062e6): Generative AI can be used for self-study without citation. However, for assessments you must acknowledge its use and ensure the accuracy and quality of your submissions. If you use generative AI at any point in completing your assessment tasks, make sure you include a short declaration with your submission explaining how you used it.
ai_tool_treatment: Griffith provides free access to Microsoft Copilot for staff and students and describes it as the preferred generative AI tool because, when users sign in with their Griffith account, it operates as a closed system in which prompts and responses are protected.
Evidence (en, a165a17062e6): Griffith provides free access to Microsoft Copilot for staff and students. It is the preferred tool because it operates as a closed system. When you sign in with your Griffith credentials, your data is protected. This means your prompts and responses are not used to train the model and are only visible in your own chat history.