Paris, France

Université PSL

Université PSL is listed at QS 2026 rank 28. Its public record contains 5 source-backed AI policy claims drawn from 2 official source attributions. The record preserves original-language evidence snippets, source URLs, snapshot hashes, confidence scores, and review states.

Policy status: Reviewed evidence-backed record
Review: Agent reviewed
Evidence-backed claims: 5
Reviewed: 5
Candidate: 0
Official sources: 2

This reference record summarizes visible public data only. Official sources and original-language evidence remain canonical; confidence is separate from review state.

This page is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.

Policy profile

Deterministic source-backed dimensions derived from this record's public claims.

Coverage score: 90/100
Coverage label: broad public coverage
Review: Machine candidate
Analysis confidence: 79%

Policy profile rows are machine-candidate derived metadata. They are not final policy conclusions; inspect the linked claim evidence before reuse.

Analysis page-quality metadata is available at /api/public/v1/analysis/page-quality.json.
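The page-quality endpoint path above is relative. A minimal sketch of building the full URL and fetching the metadata; the tracker's host name is not given in this record, so the base URL below is a placeholder assumption:

```python
import json
import urllib.request

# Endpoint path as documented in this record.
ENDPOINT = "/api/public/v1/analysis/page-quality.json"

def page_quality_url(base: str) -> str:
    """Join a tracker base URL (assumed, not documented here) with the endpoint path."""
    return base.rstrip("/") + ENDPOINT

def fetch_page_quality(base: str) -> dict:
    """Fetch and decode the page-quality metadata (performs a network call)."""
    with urllib.request.urlopen(page_quality_url(base)) as resp:
        return json.load(resp)

# Example (placeholder host, not the real tracker domain):
# meta = fetch_page_quality("https://example-tracker.org")
```

The response schema is not described in this record, so the decoded `dict` is left uninspected.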

Privacy and data entry

Université PSL has 1 source-backed public claim for privacy and data entry; deterministic analysis status: recommended.

Status: Recommended
Review: Machine candidate
Confidence: 77%
Evidence: 1
Sources: 1

Approved tools

No source-backed public claim identifying approved or licensed AI tools is present in this profile.

The current public tracker record does not contain claim evidence that identifies institutionally approved, licensed, procured, or enterprise AI tools.

Status: Not Mentioned
Review: Machine candidate
Confidence: 0%
Evidence: 0
Sources: 0

Teaching guidance

Université PSL has 1 source-backed public claim for teaching guidance; deterministic analysis status: recommended.

Status: Recommended
Review: Machine candidate
Confidence: 82%
Evidence: 1
Sources: 1

Security and procurement

No source-backed public claim about AI security review or procurement is present in this profile.

The current public tracker record does not contain claim evidence about security review, procurement, vendor approval, risk assessment, authentication, SSO, or enterprise licensing.

Status: Not Mentioned
Review: Machine candidate
Confidence: 0%
Evidence: 0
Sources: 0

Coverage score measures breadth of public, source-backed coverage only. It is not a policy quality, strictness, legal adequacy, safety, or compliance score.

Evidence-backed claims

5 reviewed evidence-backed public claims

Academic Integrity

Université PSL academic regulations state that, when an instructor authorizes the use of AI-based tools such as ChatGPT, that use must be explicitly disclosed like a source citation; failure to disclose AI use is considered plagiarism and sanctioned as such.

Review: Agent reviewed
Confidence: 96%

Normalized value: authorized_ai_use_must_be_disclosed_or_plagiarism

Original evidence

Evidence 1
Lorsque celui-ci est autorisé par l’enseignante ou l’enseignant, le recours à des outils basés sur l’intelligence artificielle (IA), tels que ChatGPT, doit être signalé explicitement (de la même manière que toute référence ou citation de source). Ne pas signaler l’utilisation de l’intelligence artificielle est considéré comme un acte de plagiat et sera sanctionné comme tel.

Translation: When authorized by the instructor, the use of AI-based tools such as ChatGPT must be explicitly disclosed (in the same way as any reference or source citation). Failing to disclose the use of artificial intelligence is considered an act of plagiarism and will be sanctioned as such.

Academic Integrity

PSL-affiliated guidance recommends researchers be transparent about AI use by indicating the tool used, its version, and the context of use, and notes that the prompt and result can also be documented.

Review: Agent reviewed
Confidence: 92%

Normalized value: disclose_ai_tool_version_context_in_research

Original evidence

Evidence 1
Être transparent dans son utilisation de l’IA : si une IA générative est utilisée, il faut indiquer l’outil utilisé, sa version et le cadre de cette utilisation. Le prompt et le résultat peuvent également être documentés. Il faut toutefois garder à l’esprit la nature aléatoire des IA, qui rend difficile la reproductibilité de la recherche.

Translation: Be transparent in your use of AI: if a generative AI is used, indicate the tool used, its version, and the context of that use. The prompt and the result can also be documented. Keep in mind, however, the random nature of AI, which makes research reproducibility difficult.

Academic Integrity

PSL-affiliated guidance states that generative AI tools cannot be considered co-authors of scientific work, and users retain full responsibility for AI-generated content, code, images, and text.

Review: Agent reviewed
Confidence: 91%

Normalized value: ai_not_coauthor_user_responsible

Original evidence

Evidence 1
Conserver la responsabilité des productions scientifiques issues des IA : il faut garder une approche critique de ces outils, connaître leurs biais et leurs limites (hallucinations, approximation), ne pas les considérer comme des co-auteurs et ne pas les utiliser pour créer des matériaux scientifiques, qui pourraient être falsifiés.

Translation: Retain responsibility for AI-derived scientific output: maintain a critical approach to these tools, know their biases and limits (hallucinations, approximation), do not consider them as co-authors, and do not use them to create scientific materials, which could be falsified.

Privacy

PSL-affiliated guidance advises that personal data should not be shared with third parties when generative AI is used, and that generated text should be checked so it does not constitute plagiarism or contain personal data.

Review: Agent reviewed
Confidence: 91%

Normalized value: do_not_share_personal_data_check_ai_output_for_plagiarism_or_personal_data

Original evidence

Evidence 1
Outre les questions de confidentialité des productions scientifiques, les données personnelles n’ont pas vocation à être partagées avec des tiers. Dans le cadre des réponses obtenues, il faut vérifier que le texte généré ne constitue pas un plagiat ou ne contient pas de données personnelles.

Translation: Beyond the confidentiality of scientific output, personal data is not meant to be shared with third parties. When reviewing the responses obtained, it must be verified that the generated text does not constitute plagiarism and does not contain personal data.

Other

PSL-affiliated guidance notes that generative AI can undermine research reproducibility, because its code and algorithms may be unknown and change regularly, and because probabilistic generation can produce variable responses.

Review: Agent reviewed
Confidence: 91%

Normalized value: generative_ai_can_undermine_research_reproducibility

Original evidence

Evidence 1
La reproductibilité de la recherche est un principe de base de l’intégrité scientifique. Or ce principe est mis à mal par l’utilisation de l’IA. D’une part, les IA génératives utilisent parfois des codes et des algorithmes qui ne sont pas toujours connus et qui connaissent des changements réguliers. Dans ces cas-là, il est impossible de documenter sa recherche pour permettre de la reproduire. D’autre part, au-delà de cette opacité, le caractère probabiliste de la génération conduit à une variabilité des réponses, qui peuvent être différentes d’une interrogation reconduite à différents moments.

Translation: Research reproducibility is a basic principle of scientific integrity. Yet this principle is undermined by the use of AI. On the one hand, generative AIs sometimes use code and algorithms that are not always known and that change regularly; in such cases it is impossible to document one's research so that it can be reproduced. On the other hand, beyond this opacity, the probabilistic nature of generation leads to variability in the responses, which may differ when the same query is repeated at different times.

Candidate claims

0 machine or needs-review claims

Candidate claims are not final policy conclusions. They preserve source URL, source snapshot hash, evidence, confidence, and review state so the record can be audited before review.
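The audit fields listed above can be checked mechanically before a claim is reused. A minimal sketch, assuming hypothetical field names rather than the tracker's actual schema:

```python
# Audit fields the record is expected to preserve, per the text above.
# The key names below are illustrative assumptions, not the real schema.
REQUIRED_FIELDS = ("source_url", "snapshot_hash", "evidence", "confidence", "review_state")

def missing_audit_fields(claim: dict) -> list[str]:
    """Return the required audit fields that a claim record lacks or leaves empty."""
    return [f for f in REQUIRED_FIELDS if claim.get(f) in (None, "", [])]

# Hypothetical candidate claim record for illustration:
claim = {
    "source_url": "https://bu.dauphine.psl.eu/...",
    "snapshot_hash": "c8d14b8e...",
    "evidence": ["original-language snippet"],
    "confidence": 0.77,
    "review_state": "machine_candidate",
}
assert missing_audit_fields(claim) == []
```

A record failing this check would go back for review rather than being published as audit-ready.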

Official sources

2 source attributions

Utiliser l'intelligence artificielle dans sa recherche : des recommandations pour le respect de l'intégrité scientifique (translation: Using artificial intelligence in your research: recommendations for respecting scientific integrity)

bu.dauphine.psl.eu

Snapshot hash
c8d14b8e1a715d3bbc3bb0cce9e0ed8062cad25acf9c8ee9d5570160651bfb9d
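A 64-character hex digest of this form is consistent with SHA-256, though the record does not state the hashing scheme. A minimal sketch, assuming the tracker hashes the raw snapshot bytes with SHA-256, for verifying a locally saved snapshot against the recorded digest:

```python
import hashlib

# Digest copied from this record.
RECORDED = "c8d14b8e1a715d3bbc3bb0cce9e0ed8062cad25acf9c8ee9d5570160651bfb9d"

def snapshot_matches(snapshot: bytes, recorded_hex: str) -> bool:
    """Compare the SHA-256 digest of snapshot bytes to a recorded hex digest.

    Assumption: the tracker hashes raw snapshot bytes with SHA-256; any
    normalization it applies (encoding, whitespace) is not documented here.
    """
    return hashlib.sha256(snapshot).hexdigest() == recorded_hex.lower()

# Usage with a locally saved copy of the source page:
# with open("snapshot.html", "rb") as f:
#     ok = snapshot_matches(f.read(), RECORDED)
```

A mismatch would indicate either a changed source page or a different normalization step before hashing.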

Change log

Source-check timeline and diff-style claim/evidence preview.

View the public change record for this university, including source snapshot hashes, claim review states, and a diff-style preview of current source-backed evidence.

Last checked: May 11, 2026
Last changed: May 11, 2026

Corrections and missing evidence

Corrections create review tasks and do not directly change this public record.

If an official source is missing, stale, moved, blocked, or incorrectly summarized, submit a source URL, policy change report, or institution correction for review. Corrections must preserve source URLs, source language, original evidence, review state, and audit history.
