Notre Dame, United States

University of Notre Dame

University of Notre Dame is listed at QS 2026 rank 294. The university has 4 source-backed AI policy claim records from 4 official source attributions. The public record preserves original-language evidence snippets, source URLs, snapshot hashes, confidence values, and review state.

Citation-ready summary

As of this public record, the University AI Policy Tracker lists University of Notre Dame as an agent-reviewed AI policy record, last checked on May 16, 2026 and last changed on May 16, 2026. The record contains 4 source-backed claims, all 4 reviewed, from 4 official source attributions. Original-language evidence snippets and source URLs remain canonical, with public JSON available at https://eduaipolicy.org/api/public/v1/universities/university-of-notre-dame.json. The entity-level confidence is 96%. This tracker is not legal advice, not academic integrity advice, and not an official university statement unless the linked source is the university's own official page.

  • Claim coverage: 4 reviewed
  • Source language: en
  • Public JSON: /api/public/v1/universities/university-of-notre-dame.json
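The public JSON endpoint above can be consumed programmatically. The sketch below, in Python, shows one way to summarize such a record; the field names (`claims`, `review`, `confidence`, `source_url`, `snapshot_hash`) are assumptions for illustration, since this page does not document the actual v1 schema.

```python
# Hypothetical sketch for consuming the tracker's public v1 JSON record.
# All field names here are assumed, not confirmed by the tracker's schema.

def summarize_record(record: dict) -> dict:
    """Count reviewed claims and collect the distinct source URLs."""
    claims = record.get("claims", [])
    reviewed = [c for c in claims if c.get("review") == "agent_reviewed"]
    return {
        "total_claims": len(claims),
        "reviewed_claims": len(reviewed),
        "source_urls": sorted(
            {c["source_url"] for c in claims if c.get("source_url")}
        ),
    }

# Minimal example record mirroring the fields this page says are preserved
# (evidence snippets, source URLs, snapshot hashes, confidence, review state).
sample = {
    "slug": "university-of-notre-dame",
    "claims": [
        {"category": "academic_integrity", "review": "agent_reviewed",
         "confidence": 0.96, "source_url": "https://example.edu/policy",
         "snapshot_hash": "abc123", "evidence": "..."},
        {"category": "privacy", "review": "agent_reviewed",
         "confidence": 0.94, "source_url": "https://example.edu/ai-guidance",
         "snapshot_hash": "def456", "evidence": "..."},
    ],
}

print(summarize_record(sample))
```

In a real client, the `sample` dict would instead come from an HTTP GET of the public JSON URL; the summary logic is the same either way.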

Policy signals in this record

  • Evidence includes Academic integrity claims.
  • Evidence includes Privacy claims.
  • Evidence includes Teaching claims.
  • Named AI services detected in public claims: ChatGPT.
  • Teaching, assessment, coursework, or syllabus-related language appears in the public claim text.
  • Privacy, sensitive-data, or security language appears in the public claim text.
  • Policy status: Reviewed evidence-backed record
  • Review: Agent reviewed
  • Evidence-backed claims: 4
  • Reviewed: 4
  • Candidate: 0
  • Official sources: 4

This reference record summarizes visible public data only. Official sources and original-language evidence remain canonical; confidence is separate from review state.

This page is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.

Policy profile

Deterministic source-backed dimensions derived from this record's public claims.

  • Coverage score: 75/100
  • Coverage label: broad public coverage
  • Review: Machine candidate
  • Analysis confidence: 80%

Policy profile rows are machine-candidate derived metadata. They are not final policy conclusions; inspect the linked claim evidence before reuse.

Analysis page-quality metadata is available at /api/public/v1/analysis/page-quality.json.

AI disclosure

No source-backed public claim about AI disclosure or acknowledgement is present in this profile.

The current public tracker record does not contain claim evidence about disclosing, acknowledging, citing, or declaring AI use.

Not Mentioned · Machine candidate · Confidence: 0% · Evidence: 0 · Sources: 0

Approved tools

No source-backed public claim identifying approved or licensed AI tools is present in this profile.

The current public tracker record does not contain claim evidence that identifies institutionally approved, licensed, procured, or enterprise AI tools.

Not Mentioned · Machine candidate · Confidence: 0% · Evidence: 0 · Sources: 0

Research guidance

No source-backed public claim about research AI use is present in this profile.

The current public tracker record does not contain claim evidence about research use, publication ethics, research data, grants, or human-subjects compliance.

Not Mentioned · Machine candidate · Confidence: 0% · Evidence: 0 · Sources: 0

Security and procurement

No source-backed public claim about AI security review or procurement is present in this profile.

The current public tracker record does not contain claim evidence about security review, procurement, vendor approval, risk assessment, authentication, SSO, or enterprise licensing.

Not Mentioned · Machine candidate · Confidence: 0% · Evidence: 0 · Sources: 0

Coverage score measures breadth of public, source-backed coverage only. It is not a policy quality, strictness, legal adequacy, safety, or compliance score.

Evidence-backed claims

4 reviewed evidence-backed public claims

Academic Integrity

Notre Dame's student generative AI policy treats representing AI-generated or materially AI-modified work as one's own as academic dishonesty, and treats generative AI use that violates an instructor's stated policy or is not expressly permitted for coursework as an Honor Code violation.

Review: Agent reviewed · Confidence: 96%

Normalized value: unauthorized_ai_use_honor_code_violation

Original evidence

Evidence 1
With this in mind, remember that representing work that you did not produce as your own, including work generated or materially modified by AI, constitutes academic dishonesty. Use of generative AI in a way that violates an instructor's articulated policy, or using it to complete coursework in a way not expressly permitted by the faculty member, will be considered a violation of the Honor Code.

Privacy

Notre Dame's AI@ND guidance says faculty, students, and staff may use University information with generative AI tools only when the information is public or the AI tool or service has undergone appropriate internal review and protective contract terms are in place.

Review: Agent reviewed · Confidence: 94%

Normalized value: university_information_requires_public_or_reviewed_tool

Original evidence

Evidence 1
Faculty, students, and staff may use University information with generative AI tools or services only when: The information is classified as public, or The AI tool or service being used has undergone appropriate internal reviews and contract terms are in place to protect university data assets.

Privacy

Notre Dame guidance to faculty and staff says not to use AI tools with sensitive or confidential data and not to use AI tools with University data without a contract.

Review: Agent reviewed · Confidence: 93%

Normalized value: no_sensitive_confidential_or_uncontracted_university_data_in_ai_tools

Original evidence

Evidence 1
Data Sensitivity: Do not use AI tools with sensitive or confidential data. Collaboration: Work closely with our campus IT and data security teams when considering using AI tools. Do not use AI tools with University data without a contract.

Teaching

Notre Dame guidance asks instructors to be explicit with students about expectations for ChatGPT and related AI tools in assignments, exams, and class, and says unauthorized generative AI use will be considered an Honor Code violation.

Review: Agent reviewed · Confidence: 91%

Normalized value: instructors_should_state_ai_expectations

Original evidence

Evidence 1
Be explicit with students about your expectations regarding the use of ChatGPT and related AI tools for assignments, exams, and in the classroom, and be clear that engaging in unauthorized use of generative AI will be considered a violation of the Honor Code.

Candidate claims

0 machine or needs-review claims

Candidate claims are not final policy conclusions. They preserve source URL, source snapshot hash, evidence, confidence, and review state so the record can be audited before review.

Official sources

4 source attributions

Change log

Source-check timeline and diff-style claim/evidence preview.

View the public change record for this university, including source snapshot hashes, claim review states, and a diff-style preview of current source-backed evidence.

Last checked: May 16, 2026 · Last changed: May 16, 2026

Corrections and missing evidence

Corrections create review tasks and do not directly change this public record.

If an official source is missing, stale, moved, blocked, or incorrectly summarized, submit a source URL, a policy change report, or an institution correction for review. Corrections must preserve source URLs, source language, original evidence, review state, and audit history.