Change log

The London School of Economics and Political Science (LSE)

Source-check timeline, source snapshot hashes, claim review state, and a diff-style preview of current source-backed claim evidence.

Change summary

Current public record freshness and review state.

The London School of Economics and Political Science (LSE) currently has 8 source-backed claim records and 6 official source attributions. Latest tracked change: May 12, 2026.

This tracker is not legal advice, academic-integrity advice, or an official university statement unless a linked source is the university's own official page.

Claim/evidence diff preview

Diff-style preview built from current public claim/evidence records. Full old/new source diffs require paired historical snapshots.
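Because no paired historical snapshot exists, every record line is rendered as an insertion. A minimal sketch of how such a preview could be built; the record shape and function name here are hypothetical, not the tracker's actual schema:

```python
# Render a diff-style "all inserted" preview from current claim/evidence
# records. With no historical snapshot to diff against, the title line is
# context and every record line is an insertion (hence a "+16 -0" summary).

def render_preview(title: str, records: list[tuple[str, str]]) -> str:
    """records is a list of (claim_text, evidence_text) pairs."""
    lines = [f"# {title}"]
    for claim, evidence in records:
        lines.append(claim)
        lines.append(evidence)
    out = [f"+{len(lines) - 1} -0", f"1   {lines[0]}"]
    for n, line in enumerate(lines[1:], start=2):
        out.append(f"{n} + {line}")
    return "\n".join(out)

preview = render_preview(
    "LSE AI policy record",
    [("ai_tool_treatment: availability does not endorse unrestricted use.",
      "Evidence (en): check your course or department policy.")],
)
print(preview)
```

With 8 claim/evidence pairs this yields the "+16 -0" counter shown below: one context line for the record title and 16 inserted lines.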

The London School of Economics and Political Science (LSE) current policy evidence

Inserted lines represent current public claim and evidence records in the source-backed dataset.

+16 -0
 1    # The London School of Economics and Political Science (LSE) AI policy record
 2 +  ai_tool_treatment: LSE states that making generative AI tools available does not endorse unrestricted use, and users should check their specific course, programme, or department policy.
 3 +  Evidence (en, 238a4d20ef5a): By making generative AI tools available, LSE is not endorsing the unrestricted use of generative AI tools by students and staff. You should always check your specific course and/or department policy on the use of generative AI tools.
 4 +  privacy: LSE research guidance tells researchers not to share personal, sensitive, or confidential data with third-party AI tools unless the tools meet LSE privacy and security standards, and it strongly encourages Microsoft Copilot for privacy and security.
 5 +  Evidence (en, 61a62e350898): Researchers should not share personal, sensitive, or confidential data with 3rd party tools that do not provide assurances of privacy and security that are satisfactory to LSE standards. Microsoft Copilot is available for staff and student use in the LSE, supported centrally, and is fully secure and private when logged in with an LSE email in the Edge browser.
 6 +  ai_tool_treatment: LSE requires departments or course convenors to classify authorised generative AI use in assessment as no authorised use, limited authorised use, or full authorised use, and to communicate the position to students.
 7 +  Evidence (en, b9ab03932e41): In advance of the start of the academic year, all academic departments must agree department-wide or course-level positions on the authorised use of generative AI in assessment within one of the following positions: Position 1: No authorised use of generative AI in assessment. Position 2: Limited authorised use of generative AI in assessment. Position 3: Full authorised use of generative AI in assessment.
 8 +  research: LSE research guidance applies to LSE staff and students undertaking research and frames generative AI as a supportive tool while keeping researchers accountable for outputs.
 9 +  Evidence (en, 61a62e350898): This guidance applies to all LSE staff and students undertaking research. Generative AI refers to AI systems that create new content, predominantly text but also images, audio and video, based on users' natural language prompts. It should be thought of as a supportive tool or assistant, with researchers always in the driver's seat and accountable for what they produce.
10 +  privacy: LSE legal and regulatory guidance tells users not to put personal data, confidential or commercially sensitive data, or certain copyrighted content into external AI tools.
11 +  Evidence (en, 21360bfa51bb): Don't put personal data in an AI tool that is external to LSE. If it will swallow the personal data for further processing that LSE cannot control, you can't use the AI tool. Don't upload in-copyright content where you do not have the copyright holder's permission and where the AI tool is known to use inputs as training data. Don't put confidential or commercially sensitive data in an AI tool that is external to LSE.
12 +  academic_integrity: For 2025/26, LSE requested departments to add assessment safeguards, including observed assessment methods, to help assure degree integrity and prevent unfair competitive advantage from generative AI.
13 +  Evidence (en, 19740e21dc7d): For 2025/26 we request that departments introduce additional safeguards to strengthen the three-position framework agreed at Education Committee in May 2024 (prohibition; limited; and full use of GenAI). This involves combining observed assessment with thoughtful integration of generative AI technologies, tailored appropriately to each discipline at the programme or course level.
14 +  ai_tool_treatment: LSE lists Claude for Education as available to academic staff and students and Copilot with Commercial Data Protection as available to all staff and students.
15 +  Evidence (en, 238a4d20ef5a): Current tools available to LSE staff and students: Claude for Education: available to academic staff and students. Copilot (Copilot with Commercial Data Protection): available to all staff and students.
16 +  ai_tool_treatment: LSE Claude for Education guidance says use is optional, student and staff data will not be used to train Anthropic models, and personal, operational, or confidential data must not be shared through Claude.
17 +  Evidence (en, 6c73b0ef91ea): Anthropic have guaranteed that LSE student and staff data will not be used to train their models. Use of Claude for Education is completely optional – there is no obligation to sign up. Personal, operational, or confidential data must not be shared via Claude, and Claude must not be integrated into operational workflows or processes.

Claim changes

8 claim records

ai_tool_treatment

LSE states that making generative AI tools available does not endorse unrestricted use, and users should check their specific course, programme, or department policy.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

privacy

LSE research guidance tells researchers not to share personal, sensitive, or confidential data with third-party AI tools unless the tools meet LSE privacy and security standards, and it strongly encourages Microsoft Copilot for privacy and security.

Review: Agent reviewed · Confidence: 95% · Evidence: 1 · Languages: en

ai_tool_treatment

LSE requires departments or course convenors to classify authorised generative AI use in assessment as no authorised use, limited authorised use, or full authorised use, and to communicate the position to students.

Review: Agent reviewed · Confidence: 94% · Evidence: 1 · Languages: en

research

LSE research guidance applies to LSE staff and students undertaking research and frames generative AI as a supportive tool while keeping researchers accountable for outputs.

Review: Agent reviewed · Confidence: 94% · Evidence: 1 · Languages: en

privacy

LSE legal and regulatory guidance tells users not to put personal data, confidential or commercially sensitive data, or certain copyrighted content into external AI tools.

Review: Agent reviewed · Confidence: 94% · Evidence: 1 · Languages: en

academic_integrity

For 2025/26, LSE requested departments to add assessment safeguards, including observed assessment methods, to help assure degree integrity and prevent unfair competitive advantage from generative AI.

Review: Agent reviewed · Confidence: 93% · Evidence: 1 · Languages: en

ai_tool_treatment

LSE lists Claude for Education as available to academic staff and students and Copilot with Commercial Data Protection as available to all staff and students.

Review: Agent reviewed · Confidence: 92% · Evidence: 1 · Languages: en

ai_tool_treatment

LSE Claude for Education guidance says use is optional, student and staff data will not be used to train Anthropic models, and personal, operational, or confidential data must not be shared through Claude.

Review: Agent reviewed · Confidence: 92% · Evidence: 1 · Languages: en
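Each entry above follows the same shape: a category, a one-sentence claim, one or more evidence excerpts keyed by language and a short source hash, and review metadata. A minimal sketch of that record shape; the class and field names are hypothetical, not the tracker's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    language: str        # e.g. "en"
    source_hash: str     # short snapshot identifier, e.g. "238a4d20ef5a"
    excerpt: str         # verbatim quote backing the claim

@dataclass
class ClaimRecord:
    category: str              # e.g. "ai_tool_treatment", "privacy"
    claim: str                 # one-sentence summary of the policy position
    evidence: list[Evidence] = field(default_factory=list)
    review_state: str = "agent_reviewed"
    confidence: float = 0.0    # 0.0-1.0, displayed as a percentage

record = ClaimRecord(
    category="ai_tool_treatment",
    claim="Tool availability does not endorse unrestricted use.",
    evidence=[Evidence("en", "238a4d20ef5a", "You should always check...")],
    confidence=0.95,
)
```

Keeping evidence as a list matches the "Evidence: 1" counters above while leaving room for claims backed by multiple excerpts or languages.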

Source snapshots

6 source attributions
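The 12-character identifiers attached to evidence lines (e.g. 238a4d20ef5a) read like truncated content hashes over a source snapshot. A hedged sketch of one way such an identifier could be derived; the whitespace normalization and the choice of SHA-256 are assumptions, not the tracker's documented method:

```python
import hashlib

def snapshot_id(text: str, length: int = 12) -> str:
    # Normalize whitespace so trivial reflows of the snapshot text don't
    # change the identifier, then take a short hex prefix of a SHA-256
    # digest. Both steps are assumptions about how an identifier like
    # "238a4d20ef5a" might be produced.
    normalized = " ".join(text.split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()[:length]

sid = snapshot_id("Example snapshot   text\nwith reflowed whitespace")
```

A content-derived identifier like this makes it cheap to detect when a tracked source page has changed: re-fetch, re-hash, and compare against the stored value.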