# Imperial College London AI policy record
academic_integrity: Unless explicitly authorised, using generative AI to create assessed work may be treated as an academic offence such as contract cheating under Imperial's Plagiarism, Academic Integrity & Exam Offences regulations. Improper use of AI can be investigated under the University's Academic Misconduct procedures.
Evidence (en, abae555edd55): Unless explicitly authorised, using AI to create assessed work may be treated as an offence such as contract cheating under Imperial's Plagiarism, Academic Integrity & Exam Offences regulations. Familiarise yourself with Imperial's Academic Misconduct Policy for rules on appropriate AI usage.
teaching: Individual departments at Imperial may allow or prohibit the use of generative AI for specific assessments. Local (team/department/faculty) instructions take precedence over university-wide guidance. Students should check their department's current policy on using and disclosing generative AI in academic work and follow their module leader's instructions.
Evidence (en, af7248595f4e): Individual departments may allow or prohibit GenAI for specific assessments; local (team/department/faculty) instructions take precedence. Always check the brief and acknowledge permitted AI use as directed.
privacy: Imperial's dAIsy AI platform uses University SSO authentication with auditing. Prompts and metadata are logged for operational monitoring, and AI model providers are configured not to train on user data, so users' prompts and responses are not used to train external AI models. dAIsy is approved for use with unrestricted data within Imperial's secure infrastructure.
Evidence (en, af7248595f4e): dAIsy uses University SSO and auditing. Logs (prompts/metadata) are retained for operational monitoring. Model providers are configured to not train on your data.
academic_integrity: Breaches of Imperial's dAIsy Use Policy may lead to action under Academic Misconduct procedures for students and HR/disciplinary processes for staff, as well as under Information Security and Data Protection policies. Sanctions may include removal of access, grade penalties, or formal disciplinary measures.
Evidence (en, af7248595f4e): Breaches may lead to action under Academic Misconduct (students) and HR/disciplinary processes (staff), as well as Information Security and Data Protection policies. Sanctions may include removal of access, grade penalties, or formal disciplinary measures.
teaching: Students should include a statement acknowledging their use of generative AI tools in all assessed work, specifying the tool name and version, the publisher, the tool's URL, a brief description of how the tool was used, and confirmation that the work is their own. Individual departments may also require further details, such as the prompts used, the date of the output, the output obtained, and how it was modified.
Evidence (en, abae555edd55): You should include a statement to acknowledge your use of generative AI tools for all assessed work, in accordance with guidelines from your department or course team. This statement should be written in complete sentences and include the following information: Name and version of the generative AI tool e.g. Copilot, ChatGPT-5; Publisher (name of company that provides the AI system) e.g. Microsoft, OpenAI; URL of the AI tool; Brief description (single sentence) of the way in which the tool was used; Confirmation that the work is your own.
research: Research at Imperial that involves people, personal data, or sensitive topics may require ethics approval, a Data Protection Impact Assessment (DPIA), and data-governance controls before using any AI tool. Researchers must verify whether their use of AI in research requires special approval, particularly when uploading private or confidential research data.
Evidence (en, af7248595f4e): Research that involves people, personal data, or sensitive topics may require ethics approval, a Data Protection Impact Assessment (DPIA), and data-governance controls before using any AI tool.
other: All Imperial staff and students have access to Microsoft Copilot with Commercial Data Protection when signed in with their Imperial credentials. Microsoft Copilot has no access to organisational data in the Microsoft 365 Graph. Chat results are not saved or made available to Microsoft, and data does not pass outside the organisation.
Evidence (en, 315c865f429a): All Imperial staff and students have access to Microsoft Copilot. You should ensure you sign in to use Copilot so that you are using the secure version of Microsoft Copilot with Commercial Data Protection. MS Copilot has no access to organizational data in the Microsoft 365 Graph. Your data is protected and the chat results are NOT saved or made available for Microsoft, so the data does not pass outside of the organisation.
other: Users of Imperial's dAIsy AI platform must always apply critical judgement to AI outputs. Generative AI can produce inaccurate or biased outputs ('hallucinate'), omit context, or reflect training-data biases. Users remain accountable for the accuracy, legality, and appropriateness of any content they submit or share through the platform.
Evidence (en, af7248595f4e): Always apply critical judgement. GenAI can 'hallucinate', omit context, or reflect training-data biases. Users remain accountable for the accuracy, legality, and appropriateness of any content they submit or share.
other: Users of Imperial's dAIsy platform must not upload third-party content they are not permitted to share. Reuse of AI outputs must comply with licensing and academic citation norms. When communicating externally, dAIsy outputs must not be presented as Imperial's position without approval.
Evidence (en, af7248595f4e): Do not upload third-party content you are not permitted to share. Ensure that any reuse of AI outputs complies with licensing and academic citation norms. When communicating externally, do not present dAIsy outputs as Imperial's position without approval.
teaching: Imperial College London has established five Generative AI Principles (aligned with Imperial's Values: Respect, Collaboration, Excellence, Integrity, Innovation) to provide a foundational framework for approaches to using generative AI in teaching, learning and assessment university-wide. The principles cover promoting critical use of AI, adopting a consistent ethical approach, and building a proactive research community around AI in education.
Evidence (en, 8e2e65292a63): These principles are intended to provide a starting point for approaches to using generative AI in teaching, learning and assessment at Imperial. Imperial supports the use of the principles to frame and underpin activities university-wide as we as a community explore the use of generative AI, progress and develop policy, and establish guidelines. The principles are underpinned by Imperial's core Values. ... Promoting the critical use of generative AI in teaching, learning and assessment. Adopting a consistent ethical approach to the use of generative AI. Building a proactive research community around the use of generative AI.