London, United Kingdom

The London School of Economics and Political Science (LSE)

The London School of Economics and Political Science (LSE) is ranked 56 in the QS 2026 rankings. LSE has 8 source-backed AI policy claim records drawn from 6 official source attributions. The public record preserves original-language evidence snippets, source URLs, snapshot hashes, confidence, and review state.

Policy status: Reviewed evidence-backed record · Review: Agent reviewed · Evidence-backed claims: 8 · Reviewed: 8 · Candidate: 0 · Official sources: 6

This reference record summarizes visible public data only. Official sources and original-language evidence remain canonical; confidence is separate from review state.

This page is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.

Policy profile

Deterministic source-backed dimensions derived from this record's public claims.

Coverage score: 75/100 · Coverage label: broad public coverage · Review: Machine candidate · Analysis confidence: 80%

Policy profile rows are machine-candidate derived metadata. They are not final policy conclusions; inspect the linked claim evidence before reuse.

Analysis page-quality metadata is available at /api/public/v1/analysis/page-quality.json.
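As a minimal sketch of retrieving that metadata document: the endpoint path comes from the line above, but the tracker's host name is not stated on this page, so `BASE_URL` below is a hypothetical placeholder, and the JSON response shape is not specified here.

```python
import json
import urllib.request

# Hypothetical host -- substitute the tracker's actual base URL.
BASE_URL = "https://tracker.example.org"


def page_quality_url(base_url: str = BASE_URL) -> str:
    """Build the documented page-quality endpoint URL on a given host."""
    return base_url.rstrip("/") + "/api/public/v1/analysis/page-quality.json"


def fetch_page_quality(base_url: str = BASE_URL) -> dict:
    """Fetch and parse the JSON document (response fields not specified here)."""
    with urllib.request.urlopen(page_quality_url(base_url)) as resp:
        return json.load(resp)
```

The URL builder strips any trailing slash so the path joins cleanly regardless of how the host is written.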

Policy presence

The London School of Economics and Political Science (LSE) has 5 source-backed public claims for policy presence; deterministic analysis status: unclear.

Status: Unclear · Review: Machine candidate · Confidence: 80% · Evidence: 5 · Sources: 4

AI disclosure

No source-backed public claim about AI disclosure or acknowledgement is present in this profile.

The current public tracker record does not contain claim evidence about disclosing, acknowledging, citing, or declaring AI use.

Status: Not mentioned · Review: Machine candidate · Confidence: 0% · Evidence: 0 · Sources: 0

Coursework

The London School of Economics and Political Science (LSE) has 5 source-backed public claims for coursework; deterministic analysis status: restricted.

Status: Restricted · Review: Machine candidate · Confidence: 79% · Evidence: 5 · Sources: 4

Exams

The London School of Economics and Political Science (LSE) has 5 source-backed public claims for exams; deterministic analysis status: restricted.

Status: Restricted · Review: Machine candidate · Confidence: 79% · Evidence: 5 · Sources: 4

Privacy and data entry

The London School of Economics and Political Science (LSE) has 4 source-backed public claims for privacy and data entry; deterministic analysis status: restricted.

Status: Restricted · Review: Machine candidate · Confidence: 79% · Evidence: 4 · Sources: 4

Academic integrity

The London School of Economics and Political Science (LSE) has 1 source-backed public claim for academic integrity; deterministic analysis status: restricted.

Status: Restricted · Review: Machine candidate · Confidence: 79% · Evidence: 1 · Sources: 1

Approved tools

The London School of Economics and Political Science (LSE) has 4 source-backed public claims for approved tools; deterministic analysis status: restricted.

Status: Restricted · Review: Machine candidate · Confidence: 79% · Evidence: 4 · Sources: 3

Named AI services

The London School of Economics and Political Science (LSE) has 5 source-backed public claims for named AI services; deterministic analysis status: restricted.

Status: Restricted · Review: Machine candidate · Confidence: 80% · Evidence: 5 · Sources: 4

Teaching guidance

No source-backed public claim about teaching guidance is present in this profile.

The current public tracker record does not contain claim evidence about instructor, classroom, assessment-design, or syllabus guidance.

Status: Not mentioned · Review: Machine candidate · Confidence: 0% · Evidence: 0 · Sources: 0

Research guidance

The London School of Economics and Political Science (LSE) has 2 source-backed public claims for research guidance; deterministic analysis status: restricted.

Status: Restricted · Review: Machine candidate · Confidence: 80% · Evidence: 2 · Sources: 1

Security and procurement

No source-backed public claim about AI security review or procurement is present in this profile.

The current public tracker record does not contain claim evidence about security review, procurement, vendor approval, risk assessment, authentication, SSO, or enterprise licensing.

Status: Not mentioned · Review: Machine candidate · Confidence: 0% · Evidence: 0 · Sources: 0

Coverage score measures breadth of public, source-backed coverage only. It is not a policy quality, strictness, legal adequacy, safety, or compliance score.

Evidence-backed claims

8 reviewed evidence-backed public claims

AI Tool Treatment

LSE states that making generative AI tools available does not endorse unrestricted use, and users should check their specific course, programme, or department policy.

Review: Agent reviewed · Confidence: 95%

Normalized value: tools_available_not_unrestricted_use

Original evidence

Evidence 1
By making generative AI tools available, LSE is not endorsing the unrestricted use of generative AI tools by students and staff. You should always check your specific course and/or department policy on the use of generative AI tools.

Privacy

LSE research guidance tells researchers not to share personal, sensitive, or confidential data with third-party AI tools unless the tools meet LSE privacy and security standards, and it strongly encourages Microsoft Copilot for privacy and security.

Review: Agent reviewed · Confidence: 95%

Normalized value: privacy_security_research_ai_tools

Original evidence

Evidence 1
Researchers should not share personal, sensitive, or confidential data with 3rd party tools that do not provide assurances of privacy and security that are satisfactory to LSE standards. Microsoft Copilot is available for staff and student use in the LSE, supported centrally, and is fully secure and private when logged in with an LSE email in the Edge browser.

AI Tool Treatment

LSE requires departments or course convenors to classify authorised generative AI use in assessment as no authorised use, limited authorised use, or full authorised use, and to communicate the position to students.

Review: Agent reviewed · Confidence: 94%

Normalized value: three_position_assessment_framework

Original evidence

Evidence 1
In advance of the start of the academic year, all academic departments must agree department-wide or course-level positions on the authorised use of generative AI in assessment within one of the following positions: Position 1: No authorised use of generative AI in assessment. Position 2: Limited authorised use of generative AI in assessment. Position 3: Full authorised use of generative AI in assessment.

Research

LSE research guidance applies to LSE staff and students undertaking research and frames generative AI as a supportive tool while keeping researchers accountable for outputs.

Review: Agent reviewed · Confidence: 94%

Normalized value: research_guidance_applies_staff_students

Original evidence

Evidence 1
This guidance applies to all LSE staff and students undertaking research. Generative AI refers to AI systems that create new content, predominantly text but also images, audio and video, based on users’ natural language prompts. It should be thought of as a supportive tool or assistant, with researchers always in the driver’s seat and accountable for what they produce.

Privacy

LSE legal and regulatory guidance tells users not to put personal data, confidential or commercially sensitive data, or certain copyrighted content into external AI tools.

Review: Agent reviewed · Confidence: 94%

Normalized value: external_ai_restricted_data_warning

Original evidence

Evidence 1
Don’t put personal data in an AI tool that is external to LSE. If it will swallow the personal data for further processing that LSE cannot control, you can’t use the AI tool. Don’t upload in-copyright content where you do not have the copyright holder’s permission and where the AI tool is known to use inputs as training data. Don’t put confidential or commercially sensitive data in an AI tool that is external to LSE.

Academic Integrity

For 2025/26, LSE requested departments to add assessment safeguards, including observed assessment methods, to help assure degree integrity and prevent unfair competitive advantage from generative AI.

Review: Agent reviewed · Confidence: 93%

Normalized value: observed_assessment_safeguards

Original evidence

Evidence 1
For 2025/26 we request that departments introduce additional safeguards to strengthen the three-position framework agreed at Education Committee in May 2024 (prohibition; limited; and full use of GenAI). This involves combining observed assessment with thoughtful integration of generative AI technologies, tailored appropriately to each discipline at the programme or course level.

AI Tool Treatment

LSE lists Claude for Education as available to academic staff and students and Copilot with Commercial Data Protection as available to all staff and students.

Review: Agent reviewed · Confidence: 92%

Normalized value: claude_and_copilot_available

Original evidence

Evidence 1
Current tools available to LSE staff and students: Claude for Education: available to academic staff and students. Copilot (Copilot with Commercial Data Protection): available to all staff and students.

AI Tool Treatment

LSE Claude for Education guidance says use is optional, student and staff data will not be used to train Anthropic models, and personal, operational, or confidential data must not be shared through Claude.

Review: Agent reviewed · Confidence: 92%

Normalized value: claude_optional_training_and_data_limits

Original evidence

Evidence 1
Anthropic have guaranteed that LSE student and staff data will not be used to train their models. Use of Claude for Education is completely optional – there is no obligation to sign up. Personal, operational, or confidential data must not be shared via Claude, and Claude must not be integrated into operational workflows or processes.

Candidate claims

0 machine or needs-review claims

Candidate claims are not final policy conclusions. They preserve source URL, source snapshot hash, evidence, confidence, and review state so the record can be audited before review.

Official sources

6 source attributions

Change log

Source-check timeline and diff-style claim/evidence preview.

View the public change record for this university, including source snapshot hashes, claim review states, and a diff-style preview of current source-backed evidence.

Last checked: May 12, 2026 · Last changed: May 12, 2026
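Because the record preserves source snapshot hashes, a saved snapshot can be re-verified against the published digest. A minimal sketch, assuming SHA-256 over the snapshot's raw bytes (the tracker's actual hash algorithm is not stated on this page):

```python
import hashlib


def snapshot_hash(content: bytes) -> str:
    """Hex digest of a snapshot's raw bytes (SHA-256 assumed)."""
    return hashlib.sha256(content).hexdigest()


def matches_record(content: bytes, recorded_hash: str) -> bool:
    """True if the snapshot bytes reproduce the recorded digest."""
    return snapshot_hash(content) == recorded_hash.lower()
```

Hashing the raw bytes rather than decoded text avoids digest mismatches caused by encoding or newline normalization.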

Corrections and missing evidence

Corrections create review tasks and do not directly change this public record.

If an official source is missing, stale, moved, blocked, or incorrectly summarized, submit a source URL, policy change report, or institution correction for review. Corrections must preserve source URLs, source language, original evidence, review state, and audit history.
