Marseille, France

Aix-Marseille University

Aix-Marseille University is listed at QS 2026 rank 428. The university has 4 source-backed AI policy claim records from 2 official source attributions. The public record preserves original-language evidence snippets, source URLs, snapshot hashes, confidence, and review state.


Citation-ready summary

As of this public record, University AI Policy Tracker lists Aix-Marseille University as an agent-reviewed AI policy record last checked on May 16, 2026 and last changed on May 16, 2026. The record contains 4 source-backed claims, including 4 reviewed claims, from 2 official source attributions. Original-language evidence snippets and source URLs remain canonical, with public JSON available at https://eduaipolicy.org/api/public/v1/universities/aix-marseille-university.json. The entity-level confidence is 86%. This tracker is not legal advice, not academic integrity advice, and not an official university statement unless the linked source is the university's own official page.

Claim coverage: 4 reviewed · Source language: fr · Public JSON: /api/public/v1/universities/aix-marseille-university.json
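The public JSON path above can be fetched programmatically. A minimal sketch using only the standard library; the field names commented in the example (such as `confidence`) are assumptions and should be checked against the live response before reuse:

```python
import json
from urllib.request import urlopen

BASE = "https://eduaipolicy.org"

def record_url(slug: str) -> str:
    """Build the public v1 record URL for a university slug."""
    return f"{BASE}/api/public/v1/universities/{slug}.json"

def fetch_record(slug: str) -> dict:
    """Fetch and decode one public record (performs a network call)."""
    with urlopen(record_url(slug)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(record_url("aix-marseille-university"))
    # record = fetch_record("aix-marseille-university")
    # print(record.get("confidence"))  # field name is an assumption
```

The fetch itself is left commented out so the sketch runs without network access; the slug convention mirrors the path shown in this record.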

Policy signals in this record

  • Evidence includes Academic integrity claims.
  • Evidence includes AI tool treatment claims.
  • Evidence includes Teaching claims.
  • Evidence includes Privacy claims.
  • Named AI services detected in public claims: ChatGPT.
  • Disclosure, acknowledgment, citation, or attribution language appears in the public claim text.
Policy status: Reviewed evidence-backed record · Review: Agent reviewed · Evidence-backed claims: 4 · Reviewed: 4 · Candidate: 0 · Official sources: 2

This reference record summarizes visible public data only. Official sources and original-language evidence remain canonical; confidence is separate from review state.

This page is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.

Policy profile

Deterministic source-backed dimensions derived from this record's public claims.

Coverage score: 100/100 · Coverage label: broad public coverage · Review: Machine candidate · Analysis confidence: 71%

Policy profile rows are machine-candidate derived metadata. They are not final policy conclusions; inspect the linked claim evidence before reuse.

Analysis page-quality metadata is available at /api/public/v1/analysis/page-quality.json.

Research guidance

No source-backed public claim about research AI use is present in this profile.

The current public tracker record does not contain claim evidence about research use, publication ethics, research data, grants, or human-subjects compliance.

Not Mentioned · Machine candidate · Confidence: 0% · Evidence: 0 · Sources: 0

Security and procurement

No source-backed public claim about AI security review or procurement is present in this profile.

The current public tracker record does not contain claim evidence about security review, procurement, vendor approval, risk assessment, authentication, SSO, or enterprise licensing.

Not Mentioned · Machine candidate · Confidence: 0% · Evidence: 0 · Sources: 0

Coverage score measures breadth of public, source-backed coverage only. It is not a policy quality, strictness, legal adequacy, safety, or compliance score.

Evidence-backed claims

4 reviewed evidence-backed public claims

Academic Integrity

An AMU-hosted ObsiaFormation page reports that Aix-Marseille University integrated a 2024/2025 M3C note treating student use of AI tools such as ChatGPT in evaluated personal or group work as fraud unless expressly authorized.

Review: Agent reviewed · Confidence: 86%

Normalized value: AI use in evaluated work is treated as fraud unless expressly authorized, per reported AMU 2024/2025 M3C note.

Original evidence

Evidence 1
Suite à une réflexion et validation en CFVU, AMU a intégré une mention dans ses documents de cadrage des Modalités de contrôle des connaissances et des compétences (M3C) pour l’année universitaire 2024/2025 : « L’utilisation par les étudiants d’outils d’intelligence artificielle (comme ChatGPT ou autre) lors de la production de travaux personnels ou de groupe de toute nature, susceptible de faire l’objet d’une évaluation, est considérée comme une fraude passible de poursuites disciplinaires, à moins qu’elle ne soit expressément autorisée.

Localized display only

After CFVU validation, AMU integrated a 2024/2025 M3C note treating student AI use in evaluated work as fraud unless expressly authorized.

AI Tool Treatment

For AI use that is expressly authorized in evaluated student work, the AMU-hosted M3C note says the use should be explicitly mentioned like any borrowing or citation from an external source.

Review: Agent reviewed · Confidence: 84%

Normalized value: Authorized AI use in evaluated work should be explicitly mentioned.

Original evidence

Evidence 1
Dans ce cas, elle devra être explicitement mentionnée, comme n’importe quel emprunt ou citation d’une source externe.

Localized display only

When AI use is authorized, the note says it should be explicitly mentioned like any borrowing or citation from an external source.

Teaching

Aix-Marseille University's ObsiaFormation guide describes the observatory as supporting appropriation of AI uses in teaching and learning.

Review: Agent reviewed · Confidence: 83%

Normalized value: AMU ObsiaFormation supports AI use in teaching and learning.

Original evidence

Evidence 1
L’Observatoire des usages et laboratoire de pratiques de l’IA en formation a pour vocation d’accompagner dans l’appropriation des usages de l’Intelligence Artificielle en enseignement et apprentissage.

Localized display only

ObsiaFormation states that its role is to support appropriation of AI uses in teaching and learning.

Privacy

The AMU-hosted ObsiaFormation guide frames responsible educational AI use as including verification of tool reliability and attention to ethics, including data protection.

Review: Agent reviewed · Confidence: 79%

Normalized value: ObsiaFormation guide advises reliability checks and ethics/data-protection attention for educational AI use.

Original evidence

Evidence 1
Parmi les bonnes pratiques : Ne pas surcharger les étudiants avec trop d’automatisation, privilégier une approche équilibrée. Vérifier la pertinence et la fiabilité des outils et de leurs contributions avant de les intégrer. Respecter les principes d’éthique, notamment en matière de protection des données.

Localized display only

The guide lists good practices including balanced use, checking tool reliability and outputs, and respecting ethics including data protection.

Candidate claims

0 machine or needs-review claims

Candidate claims are not final policy conclusions. They preserve source URL, source snapshot hash, evidence, confidence, and review state so the record can be audited before review.
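The audit fields above include a source snapshot hash for each claim. The tracker does not state its hashing scheme on this page; a minimal verification sketch, assuming the recorded value is a SHA-256 hex digest over the raw snapshot bytes (the algorithm and encoding are assumptions):

```python
import hashlib

def snapshot_hash(snapshot_bytes: bytes) -> str:
    """Hex digest of a source snapshot; SHA-256 is assumed, not confirmed."""
    return hashlib.sha256(snapshot_bytes).hexdigest()

def matches(snapshot_bytes: bytes, recorded_hash: str) -> bool:
    """Audit check: recompute the digest and compare case-insensitively."""
    return snapshot_hash(snapshot_bytes) == recorded_hash.strip().lower()
```

A mismatch would indicate the fetched page no longer matches the snapshot the claim was reviewed against, which is exactly the case the change log below is meant to surface.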

Official sources

2 source attributions

Change log

Source-check timeline and diff-style claim/evidence preview.

View the public change record for this university, including source snapshot hashes, claim review states, and a diff-style preview of current source-backed evidence.

Last checked: May 16, 2026 · Last changed: May 16, 2026

Corrections and missing evidence

Corrections create review tasks and do not directly change this public record.

If an official source is missing, stale, moved, blocked, or incorrectly summarized, submit a source URL, policy change report, or institution correction for review. Corrections must preserve source URLs, source language, original evidence, review state, and audit history.
