Change log

McGill University

Source-check timeline, source snapshot hashes, claim review state, and a diff-style preview of current source-backed claim evidence.

Change summary

Current public record freshness and review state.

McGill University currently has 5 source-backed claim records and 5 official source attributions. Latest tracked change date: May 10, 2026.

This tracker is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.

Claim/evidence diff preview

Diff-style preview built from current public claim/evidence records. Full old/new source diffs require paired historical snapshots.
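The hash suffixes on the evidence lines below (e.g. 8a89030dff58) look like truncated content digests. A minimal sketch of how paired historical snapshots could be hashed and diffed, assuming truncated SHA-256 identifiers and unified diffs (both are assumptions about the scheme, not the tracker's documented implementation):

```python
import difflib
import hashlib

def snapshot_hash(text: str, length: int = 12) -> str:
    """Truncated SHA-256 digest of a source snapshot (hypothetical scheme)."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()[:length]

def diff_preview(old: str, new: str) -> list[str]:
    """Unified diff between an old and a new snapshot of the same source."""
    return list(difflib.unified_diff(
        old.splitlines(), new.splitlines(),
        fromfile="old", tofile="new", lineterm=""))

# Illustrative snapshot pair (invented content, not real records).
old = "ai_tool_treatment: tool under review"
new = "ai_tool_treatment: tool approved for staff"

print(snapshot_hash(new))   # 12-hex-character snapshot identifier
for line in diff_preview(old, new):
    print(line)
```

Without a stored old snapshot, only insertion-style previews (like the one below) can be generated, which matches the note above about needing paired snapshots for full old/new diffs.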

McGill University current policy evidence

Inserted lines represent current public claim and evidence records in the source-backed dataset.

+10 −0 (10 lines added, 0 removed)

  # McGill University AI policy record
+ ai_tool_treatment: McGill lists Microsoft 365 Copilot Chat as an available AI tool for staff, faculty, and students, and says a secure version with enterprise data protection is available for all McGill users.
+ Evidence (en, 8a89030dff58): Microsoft 365 Copilot Chat is an AI-powered feature integrated into Microsoft Edge and accessible through other browsers. Microsoft 365 Copilot Chat can answer questions, generate content, condense long texts and more. A secure version with enterprise data protection is available for all McGill users. Audience: Staff, Faculty, and Students. Price: Free.
+ ai_tool_treatment: McGill explicitly rejects DeepSeek AI for McGill-managed or research-funded devices, rejects Read.AI and other AI meeting bots for McGill use, and says tools not mentioned in the available AI tools list are automatically considered rejected.
+ Evidence (en, 8a89030dff58): If a tool is not mentioned in the "Available AI tools" list, it is automatically considered rejected, even if it is not listed among these prohibited tools. DeepSeek AI: This tool has raised serious data exposure risks and prompt injection vulnerabilities. Its use is not permitted for any McGill-managed or research-funded device. This decision follows cybersecurity directives from the Government of Quebec and the Government of Canada.
+ privacy: McGill guidance says users should mitigate potential privacy concerns by removing personally identifying information when using AI tools, be careful with sensitive or restricted material, and avoid using Personal Health Information (PHI) or Payment Card Industry (PCI) data with AI tools.
+ Evidence (en, 65240bfc8b00): Mitigate potential privacy concerns by removing personally identifying information (e.g., names, email addresses, phone numbers). For example, when writing a prompt to draft an email to Joe Smith, replace "Joe Smith" with "XYZ."
+ academic_integrity: McGill's Provost-endorsed principles state that instructors remain responsible for comporting themselves according to the highest standards of academic integrity in their use of generative AI tools. Instructors must be explicit in course outlines about the expectations for use of generative AI tools and may set limits on their use in assessment tasks.
+ Evidence (en, bf264889e0f8): Fourth principle: Instructors remain responsible for comporting themselves according to the highest standards of academic integrity in their use of generative AI tools. Instructors maintain responsibility and accountability for all of their instructional materials whether independently created, third-party generated, supported by generative AI tools, or derived from other resources. Instructors must be explicit in course outlines about the expectations for use of generative AI tools and may set limits on their use in assessment tasks.
+ teaching: McGill recommends that instructors explain to students in their course outline what the appropriate use or non-use is of generative AI tools in the context of that course. The use or non-use of these tools should align with the learning outcomes associated with the course.
+ Evidence (en, d842d0ad67dd): There should be no default assumption as to the use of generative AI tools. Therefore, McGill recommends that instructors explain to students in their course outline what the appropriate use or non-use is of generative AI tools in the context of that course. The use or non-use of these tools should align with the learning outcomes associated with the course. For this reason, instructors will need to write their own context-appropriate course outline statements.

Claim changes

5 claim records

teaching

McGill recommends that instructors explain to students in their course outline what the appropriate use or non-use is of generative AI tools in the context of that course. The use or non-use of these tools should align with the learning outcomes associated with the course.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

ai_tool_treatment

McGill explicitly rejects DeepSeek AI for McGill-managed or research-funded devices, rejects Read.AI and other AI meeting bots for McGill use, and says tools not mentioned in the available AI tools list are automatically considered rejected.

Review: Agent reviewed · Confidence: 98% · Evidence: 1 · Languages: en

academic_integrity

McGill's Provost-endorsed principles state that instructors remain responsible for comporting themselves according to the highest standards of academic integrity in their use of generative AI tools. Instructors must be explicit in course outlines about the expectations for use of generative AI tools and may set limits on their use in assessment tasks.

Review: Agent reviewed · Confidence: 96% · Evidence: 1 · Languages: en

ai_tool_treatment

McGill lists Microsoft 365 Copilot Chat as an available AI tool for staff, faculty, and students, and says a secure version with enterprise data protection is available for all McGill users.

Review: Agent reviewed · Confidence: 98% · Evidence: 1 · Languages: en

privacy

McGill guidance says users should mitigate potential privacy concerns by removing personally identifying information when using AI tools, be careful with sensitive or restricted material, and avoid using Personal Health Information (PHI) or Payment Card Industry (PCI) data with AI tools.

Review: Agent reviewed · Confidence: 97% · Evidence: 1 · Languages: en

Source snapshots

5 source attributions