Change log

Queen's University Belfast

Source-check timeline, source snapshot hashes, claim review state, and a diff-style preview of current source-backed claim evidence.

Change summary

Current public record freshness and review state.

Queen's University Belfast currently has 12 source-backed claim records and 8 official source attributions. Latest tracked change date: May 15, 2026.

This tracker is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.

Claim/evidence diff preview

Diff-style preview built from current public claim/evidence records. Full old/new source diffs require paired historical snapshots.
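As a rough illustration of what a full old/new source diff would look like once paired historical snapshots exist, the sketch below compares two hypothetical snapshot texts with Python's standard `difflib`. The snapshot contents and file labels are invented for the example; they are not records from this dataset.

```python
import difflib

# Hypothetical paired snapshots of one source page (illustrative text only).
old_snapshot = [
    "Students should consult their tutor if in doubt.",
    "AI detectors are not recommended.",
]
new_snapshot = [
    "Students should consult their tutor if in doubt.",
    "Module staff will clarify if and how AI can be used.",
    "AI detectors are not recommended.",
]

# unified_diff yields context lines (prefixed with a space), insertions
# ("+") and deletions ("-"), matching the preview format used above.
diff = list(difflib.unified_diff(
    old_snapshot, new_snapshot,
    fromfile="snapshot_old", tofile="snapshot_new",
    lineterm="",
))
for line in diff:
    print(line)
```

With only a current snapshot on hand, every record line appears as an insertion, which is why the preview above shows additions only.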

Queen's University Belfast current policy evidence

Inserted lines represent current public claim and evidence records in the source-backed dataset.

+20 -0
1 1 # Queen's University Belfast AI policy record
2+research: Queen's responsible AI research guidance applies to staff, postgraduate research students, visiting researchers, and contractors conducting research under the auspices of the University.
3+Evidence (en, 2d6465b720d2): This guidance applies to: All Queen’s researchers (staff, postgraduate research students, visiting researchers, and contractors) conducting research under the auspices of the University. All types of research activities across the research lifecycle including planning, funding proposal development, data collection and analysis, publication and dissemination, peer review and evaluation, and research management.
4+academic_integrity: Queen's assessment guidance says module staff will clarify if and how AI can be used in assessment, and students should consult their tutor if in doubt.
5+Evidence (en, 6e94d9c097da): This academic year, those delivering modules will clarify if and how AI can be used when completing assessment. If students have any doubt about how AI can be used, they should consult with their tutor.
6+teaching: Queen's AI position page says its staff and student AI guidance is based on RAISE principles: responsible use, AI best practice, integrity, support, and equitable access.
7+Evidence (en, 083dd2d11912): This section brings together tailored QUB guidance for staff and students. This includes recommendations on how to get started as well as more specific guidance on the use of AI, for example within assessment. Our guidance is based on the following principles: Responsible use, AI best practice, Integrity, Support and Equitable Access – collectively RAISE.
8+academic_integrity: Queen's assessment guidance says students who misuse AI will be subject to the University's academic misconduct regulations.
9+Evidence (en, 6e94d9c097da): Students need to be fully aware of when and how they can use AI in assessments, including any limitations on certain tools or the need to cite or document how AI has been used. If students misuse AI, they will be subject to the University's academic misconduct regulations.
10+research: Queen's responsible AI research guidance expects material AI use in research to be clearly documented and acknowledged, with researchers validating AI-generated content.
11+Evidence (en, 2d6465b720d2): All use of AI in research must be clearly documented and acknowledged, especially where it constitutes material use in the research process/output. Researchers are expected to validate AI-generated content and uphold the standards of academic honesty, avoiding misrepresentation or plagiarism.
12+ai_tool_treatment: Queen's assessment guidance says text-based AI detectors are not recommended because current tools cannot definitively identify AI-authored content and can produce false positives.
13+Evidence (en, 6e94d9c097da): Current tools that attempt to detect AI generated text – whether by analysing writing styles, using machine learning classification, or watermarking – cannot definitively identify AI-authored content. Worryingly, these systems often produce an unacceptably high rate of false positives. In the future, with the integration of AI writing tools into platforms like Microsoft Word and Google Workplace, it is anticipated that much of our writing will include AI-generated elements. This will be similar to how we currently benefit from algorithm-driven spell checkers and grammar tools. Considering these factors, the use of text-based AI detectors is not recommended.
14+research: Queen's research AI guidance says AI use in projects involving human participants, personal data, or sensitive information must be outlined in ethics applications.
15+Evidence (en, 2d6465b720d2): Researchers using AI in projects involving human participants, personal data, or sensitive information must explicitly outline AI usage in their ethics applications. Ethics applications must include clear details about how AI will be used in data collection, analysis, or management, and how participants’ data privacy will be protected.
16+ai_tool_treatment: Queen's tools guidance identifies Microsoft Copilot Chat as available for Queen's University faculty and staff using a @qub.ac.uk email login.
17+Evidence (en, 6b49d4867081): Microsoft Copilot Chat is available for use by Queens University Faculty and staff. To use Copilot, please log in using your @qub.ac.uk email address. Copilot Chat is supported officially on Microsoft Edge and Chrome (using the latest Stable Channel update).
18+privacy: Queen's responsible-use guidance tells users to make an ethical judgment about the information submitted to AI tools and whether they have permission to submit it.
19+Evidence (en, e2098a5912ec): When using an AI tool, it is necessary to make an ethical judgment about the data or information that you put into the system when you use it to complete a task. Any information that is submitted to an AI tool then becomes part of the data that the tool draws upon to complete future tasks for anyone who uses the tool. You need to consider whether you have permission to submit the information that you do.
20+research: Queen's Research Integrity AI page states the fundamental principle that users should not present AI responses as their own and should be clear, open, and transparent in AI use.
21+Evidence (en, bc7e6e5aa699): Whilst work is ongoing within the University to develop guidance on its use, the fundamental principle is NOT to present any responses from AI as if they were your own, be clear, open and transparent in your use.

Claim changes

12 claim records

research

Queen's responsible AI research guidance applies to staff, postgraduate research students, visiting researchers, and contractors conducting research under the auspices of the University.

Review: agent reviewed · Confidence: 92% · Evidence: 1 · Languages: en

academic_integrity

Queen's assessment guidance says module staff will clarify if and how AI can be used in assessment, and students should consult their tutor if in doubt.

Review: agent reviewed · Confidence: 91% · Evidence: 1 · Languages: en

teaching

Queen's AI position page says its staff and student AI guidance is based on RAISE principles: responsible use, AI best practice, integrity, support, and equitable access.

Review: agent reviewed · Confidence: 90% · Evidence: 1 · Languages: en

academic_integrity

Queen's assessment guidance says students who misuse AI will be subject to the University's academic misconduct regulations.

Review: agent reviewed · Confidence: 90% · Evidence: 1 · Languages: en

research

Queen's responsible AI research guidance expects material AI use in research to be clearly documented and acknowledged, with researchers validating AI-generated content.

Review: agent reviewed · Confidence: 90% · Evidence: 1 · Languages: en

ai_tool_treatment

Queen's assessment guidance says text-based AI detectors are not recommended because current tools cannot definitively identify AI-authored content and can produce false positives.

Review: agent reviewed · Confidence: 89% · Evidence: 1 · Languages: en

research

Queen's research AI guidance says AI use in projects involving human participants, personal data, or sensitive information must be outlined in ethics applications.

Review: agent reviewed · Confidence: 89% · Evidence: 1 · Languages: en

ai_tool_treatment

Queen's tools guidance identifies Microsoft Copilot Chat as available for Queen's University faculty and staff using a @qub.ac.uk email login.

Review: agent reviewed · Confidence: 88% · Evidence: 1 · Languages: en

privacy

Queen's responsible-use guidance tells users to make an ethical judgment about the information submitted to AI tools and whether they have permission to submit it.

Review: agent reviewed · Confidence: 87% · Evidence: 1 · Languages: en

research

Queen's Research Integrity AI page states the fundamental principle that users should not present AI responses as their own and should be clear, open, and transparent in AI use.

Review: agent reviewed · Confidence: 87% · Evidence: 1 · Languages: en

privacy

Queen's tools guidance says the AI tools listed on the page are for exploration and exclusively with publicly available data.

Review: agent reviewed · Confidence: 86% · Evidence: 1 · Languages: en

teaching

Queen's student AI support page provides student-facing resources including guidance on generative AI in studies, academic success, citing AI, acceptable use, and AI confidence.

Review: agent reviewed · Confidence: 83% · Evidence: 1 · Languages: en

Source snapshots

8 source attributions
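Each evidence line above carries a short hex identifier (e.g. 2d6465b720d2) tying it to a source snapshot. The tracker's actual hashing scheme is not documented here; the sketch below shows one plausible approach, truncating a SHA-256 digest of the snapshot text to 12 hex characters. The function name and truncation length are assumptions for illustration.

```python
import hashlib

def snapshot_id(text: str) -> str:
    # Hypothetical scheme: first 12 hex characters of the SHA-256 digest
    # of the snapshot text. Any change to the text yields a new id, so
    # matching ids indicate an unchanged snapshot.
    return hashlib.sha256(text.encode("utf-8")).hexdigest()[:12]

# Two snapshots of the same text share an id; edited text does not.
a = snapshot_id("example snapshot text")
b = snapshot_id("example snapshot text")
c = snapshot_id("example snapshot text, revised")
print(a, a == b, a == c)
```

A content-derived id like this lets the tracker detect source changes without storing full page copies, though the real identifiers here may be computed differently.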