# McGill University AI policy record
ai_tool_treatment: McGill lists Microsoft 365 Copilot Chat as an available AI tool for staff, faculty, and students, and says a secure version with enterprise data protection is available for all McGill users.
Evidence (en, 8a89030dff58): Microsoft 365 Copilot Chat is an AI-powered feature integrated into Microsoft Edge and accessible through other browsers. Microsoft 365 Copilot Chat can answer questions, generate content, condense long texts, and more. A secure version with enterprise data protection is available for all McGill users. Audience: Staff, Faculty, and Students. Price: Free.
ai_tool_treatment: McGill explicitly rejects DeepSeek AI for McGill-managed or research-funded devices, rejects Read.AI and other AI meeting bots for McGill use, and says tools not mentioned in the available AI tools list are automatically considered rejected.
Evidence (en, 8a89030dff58): If a tool is not mentioned in the "Available AI tools" list, it is automatically considered rejected, even if it is not listed among these prohibited tools. DeepSeek AI: This tool has raised serious data exposure risks and prompt injection vulnerabilities. Its use is not permitted for any McGill-managed or research-funded device. This decision follows cybersecurity directives from the Government of Quebec and the Government of Canada.
privacy: McGill guidance says users should mitigate potential privacy concerns by removing personally identifying information when using AI tools, be careful with sensitive or restricted material, and avoid using Personal Health Information (PHI) or Payment Card Industry (PCI) data with AI tools.
Evidence (en, 65240bfc8b00): Mitigate potential privacy concerns by removing personally identifying information (e.g., names, email addresses, phone numbers). For example, when writing a prompt to draft an email to Joe Smith, replace "Joe Smith" with "XYZ."
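The redaction step described above could be automated before a prompt is sent to an AI tool. The sketch below is illustrative only and is not a McGill tool: the `scrub` function, placeholder strings, and regex patterns are all assumptions, and simple patterns like these will not catch all personally identifying information.

```python
import re

# Illustrative patterns only -- real PII removal needs more robust tooling.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b(?:\+?1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b")

def scrub(prompt: str, names: list[str]) -> str:
    """Replace known names, email addresses, and phone numbers
    with neutral placeholders before the prompt leaves the machine."""
    for name in names:
        prompt = prompt.replace(name, "XYZ")  # mirrors the guidance's example
    prompt = EMAIL.sub("[email]", prompt)
    prompt = PHONE.sub("[phone]", prompt)
    return prompt

print(scrub("Draft an email to Joe Smith (joe.smith@example.com, 514-555-0123).",
            names=["Joe Smith"]))
# → Draft an email to XYZ ([email], [phone]).
```

Known names must be supplied explicitly here; detecting arbitrary names automatically would require a named-entity recognizer rather than string replacement.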
academic_integrity: McGill's Provost-endorsed principles state that instructors remain responsible for comporting themselves according to the highest standards of academic integrity in their use of generative AI tools. Instructors must be explicit in course outlines about the expectations for use of generative AI tools and may set limits on their use in assessment tasks.
Evidence (en, bf264889e0f8): Fourth principle: Instructors remain responsible for comporting themselves according to the highest standards of academic integrity in their use of generative AI tools. Instructors maintain responsibility and accountability for all of their instructional materials whether independently created, third-party generated, supported by generative AI tools, or derived from other resources. Instructors must be explicit in course outlines about the expectations for use of generative AI tools and may set limits on their use in assessment tasks.
teaching: McGill recommends that instructors explain to students in their course outline what the appropriate use or non-use of generative AI tools is in the context of that course. The use or non-use of these tools should align with the learning outcomes associated with the course.
Evidence (en, d842d0ad67dd): There should be no default assumption as to the use of generative AI tools. Therefore, McGill recommends that instructors explain to students in their course outline what the appropriate use or non-use is of generative AI tools in the context of that course. The use or non-use of these tools should align with the learning outcomes associated with the course. For this reason, instructors will need to write their own context-appropriate course outline statements.