Santiago, Chile

Universidad de Chile

Universidad de Chile is listed at QS 2026 rank 173. The record contains 5 source-backed AI policy claims drawn from 2 official source attributions. The public record preserves original-language evidence snippets, source URLs, snapshot hashes, confidence values, and review state.

Citation-ready summary

As of this public record, University AI Policy Tracker lists Universidad de Chile as an agent-reviewed AI policy record last checked on May 15, 2026 and last changed on May 15, 2026. The record contains 5 source-backed claims, including 5 reviewed claims, from 2 official source attributions. Original-language evidence snippets and source URLs remain canonical, with public JSON available at https://eduaipolicy.org/api/public/v1/universities/universidad-de-chile.json. The entity-level confidence is 94%. This tracker is not legal advice, not academic integrity advice, and not an official university statement unless the linked source is the university's own official page.

  • Claim coverage: 5 reviewed
  • Source language: es
  • Public JSON: /api/public/v1/universities/universidad-de-chile.json
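The v1 public JSON contract can be consumed programmatically. A minimal sketch in Python; the field names used here (`slug`, `claims`, `review_state`, `confidence`) are illustrative assumptions, since the actual schema served at /api/public/v1/universities/universidad-de-chile.json is not reproduced on this page:

```python
import json

# Hypothetical excerpt of the public record shape; the real schema at
# /api/public/v1/universities/universidad-de-chile.json may differ.
record_json = """
{
  "slug": "universidad-de-chile",
  "confidence": 0.94,
  "claims": [
    {"category": "source_status", "review_state": "agent_reviewed", "confidence": 0.94},
    {"category": "ai_tool_treatment", "review_state": "agent_reviewed", "confidence": 0.93},
    {"category": "research", "review_state": "agent_reviewed", "confidence": 0.92},
    {"category": "teaching", "review_state": "agent_reviewed", "confidence": 0.91},
    {"category": "privacy", "review_state": "agent_reviewed", "confidence": 0.90}
  ]
}
"""

record = json.loads(record_json)

# Review state and confidence are independent axes in this record,
# so reviewed claims are counted separately from confidence values.
reviewed = [c for c in record["claims"] if c["review_state"] == "agent_reviewed"]
print(len(reviewed))         # 5
print(record["confidence"])  # 0.94
```

The sketch deliberately counts reviewed claims independently of the entity-level confidence, mirroring the record's note that confidence is separate from review state.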

Policy signals in this record

  • Evidence includes Source status claims.
  • Evidence includes AI tool treatment claims.
  • Evidence includes Research claims.
  • Evidence includes Teaching claims.
  • Evidence includes Privacy claims.
  • No specific AI service name is highlighted by the current public claim text.
  • Disclosure, acknowledgment, citation, or attribution language appears in the public claim text.
  • Privacy, sensitive-data, or security language appears in the public claim text.
  • Policy status: Reviewed evidence-backed record
  • Review: Agent reviewed
  • Evidence-backed claims: 5
  • Reviewed: 5
  • Candidate: 0
  • Official sources: 2

This reference record summarizes visible public data only. Official sources and original-language evidence remain canonical; confidence is separate from review state.

This page is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.

Policy profile

Deterministic source-backed dimensions derived from this record's public claims.

  • Coverage score: 100/100
  • Coverage label: broad public coverage
  • Review: Machine candidate
  • Analysis confidence: 78%

Policy profile rows are machine-candidate derived metadata. They are not final policy conclusions; inspect the linked claim evidence before reuse.

Analysis page-quality metadata is available at /api/public/v1/analysis/page-quality.json.

Privacy and data entry

Universidad de Chile has 1 source-backed public claim for privacy and data entry; deterministic analysis status: restricted.

  • Status: Restricted
  • Review: Machine candidate
  • Confidence: 77%
  • Evidence: 1
  • Sources: 1

Approved tools

Universidad de Chile has 1 source-backed public claim for approved tools; deterministic analysis status: recommended.

  • Status: Recommended
  • Review: Machine candidate
  • Confidence: 79%
  • Evidence: 1
  • Sources: 1

Named AI services

Universidad de Chile has 2 source-backed public claims for named AI services; deterministic analysis status: restricted.

  • Status: Restricted
  • Review: Machine candidate
  • Confidence: 78%
  • Evidence: 2
  • Sources: 2

Research guidance

Universidad de Chile has 2 source-backed public claims for research guidance; deterministic analysis status: recommended.

  • Status: Recommended
  • Review: Machine candidate
  • Confidence: 79%
  • Evidence: 2
  • Sources: 1

Security and procurement

No source-backed public claim about AI security review or procurement is present in this profile.

The current public tracker record does not contain claim evidence about security review, procurement, vendor approval, risk assessment, authentication, SSO, or enterprise licensing.

  • Status: Not Mentioned
  • Review: Machine candidate
  • Confidence: 0%
  • Evidence: 0
  • Sources: 0

Coverage score measures breadth of public, source-backed coverage only. It is not a policy quality, strictness, legal adequacy, safety, or compliance score.

Evidence-backed claims

5 reviewed evidence-backed public claims

Source Status

The Faculty of Medicine's Humanizar la Inteligencia document describes itself as an orienting guide and not as binding regulation.

Review: Agent reviewed · Confidence: 94%

Normalized value: famed_ai_guide_orienting_not_binding_regulation

Original evidence

Evidence 1
Se trata de un documento de trabajo, en construcción y sujeto a revisión permanente, concebido como una guía orientadora y no como una normativa vinculante.

Localized display only

The document describes itself as a work in progress and orienting guide, not binding regulation.

AI Tool Treatment

The Faculty of Medicine guide proposes five principles for AI initiatives in that faculty: transparency and traceability, human supervision and non-delegation of critical judgment, equity and technological justice, academic integrity and responsible authorship, and participatory governance with continuous updating.

Review: Agent reviewed · Confidence: 93%

Normalized value: famed_ai_principles_transparency_human_supervision_equity_integrity_governance

Original evidence

Evidence 1
se proponen cinco principios fundamentales... transparencia y trazabilidad... supervisión humana y no delegación del juicio crítico... equidad y justicia tecnológica... integridad académica y autoría responsable... gobernanza participativa y actualización continua

Localized display only

The guide proposes five principles: transparency and traceability, human supervision, equity, academic integrity and responsible authorship, and participatory governance with continuous updating.

Research

For health research publications, the Faculty of Medicine guide suggests declaring the type, timing and purpose of AI use, never attributing authorship to an AI tool, and validating AI-assisted production through human authors.

Review: Agent reviewed · Confidence: 92%

Normalized value: famed_research_ai_declaration_no_ai_authorship_human_validation

Original evidence

Evidence 1
TRANSPARENCIA EN EL USO: declarar de forma explícita el tipo, el momento y el propósito del uso de herramientas de IA... DISTINCIÓN ENTRE AUTORÍA Y ASISTENCIA: nunca atribuir autoría a una herramienta de IA... SUPERVISIÓN HUMANA: toda producción asistida por IA debe ser validada por los autores

Localized display only

For research publications, the guide recommends explicit AI-use declaration, no AI authorship, and validation of AI-assisted production by human authors.

Teaching

The FCFM guidance recommends that course teaching teams build a transparency policy with students around possible generative AI uses, including defining when and how the AI tool used should be cited.

Review: Agent reviewed · Confidence: 91%

Normalized value: fcfm_course_transparency_policy_and_ai_citation_recommended

Original evidence

Evidence 1
Construir una política de transparencia al interior del curso, en la cual se consensúen los posibles usos junto a los y las estudiantes. Por ejemplo, definir cuándo y cómo procede la citación de la herramienta IA utilizada.

Localized display only

Build a transparency policy inside the course with students about possible AI uses, including when and how to cite the AI tool used.

Privacy

The FCFM guidance warns teaching teams that inadequate use of generative AI can create ethics risks such as plagiarism and copyright issues, and security/privacy risks such as insufficient data protection or data use without consent.

Review: Agent reviewed · Confidence: 90%

Normalized value: fcfm_ai_risk_guidance_plagiarism_privacy_data_protection

Original evidence

Evidence 1
Ética. En este aspecto, usos inadecuados de estas herramientas podrían facilitar acciones relacionadas con el plagio, violaciones de derechos de autor y propiedad intelectual... Seguridad y privacidad. Algunos riesgos a tener en consideración son la protección insuficiente de datos... uso de datos sin consentimiento...

Localized display only

The guidance lists ethics risks such as plagiarism and copyright issues, plus security and privacy risks including insufficient data protection and data use without consent.

Candidate claims

0 machine or needs-review claims

Candidate claims are not final policy conclusions. They preserve source URL, source snapshot hash, evidence, confidence, and review state so the record can be audited before review.
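Because every claim record preserves a source snapshot hash, an auditor can re-fetch a source and recompute the hash to detect drift. A minimal sketch, assuming (hypothetically) that the tracker stores the SHA-256 hex digest of the raw snapshot bytes; the actual hash algorithm and any canonicalization rules are not specified on this page:

```python
import hashlib

def snapshot_hash(snapshot_bytes: bytes) -> str:
    # Assumption: the tracker hashes the raw snapshot bytes with SHA-256
    # and stores the hex digest; the real scheme may differ.
    return hashlib.sha256(snapshot_bytes).hexdigest()

# Digest recorded when the claim was created (simulated here).
stored = snapshot_hash(b"<html>example snapshot</html>")

# Digest recomputed over a freshly fetched copy of the same source.
fetched = b"<html>example snapshot</html>"
assert snapshot_hash(fetched) == stored  # source unchanged since the snapshot
```

If the recomputed digest differs from the stored one, the source has changed since the snapshot was taken, and a source-check or correction report may be warranted.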

Official sources

2 source attributions

Change log

Source-check timeline and diff-style claim/evidence preview.

View the public change record for this university, including source snapshot hashes, claim review states, and a diff-style preview of current source-backed evidence.

Last checked: May 15, 2026 · Last changed: May 15, 2026

Corrections and missing evidence

Corrections create review tasks and do not directly change this public record.

If an official source is missing, stale, moved, blocked, or incorrectly summarized, submit a source URL, policy change report, or institution correction for review. Corrections must preserve source URLs, source language, original evidence, review state, and audit history.
